Verifiable Member and Order Queries on a List in Zero Knowledge ; We introduce a formal model for order queries on lists in zero knowledge in the traditional authenticated data structure model. We call this model Privacy-Preserving Authenticated List (PPAL). In this model, the queries are performed on a list stored in an untrusted cloud, where both data integrity and privacy have to be maintained. To realize an efficient authenticated data structure, we first adapt the consistent data query model. To this end we introduce a formal model called the Zero-Knowledge List (ZKL) scheme, which generalizes consistent membership queries in zero knowledge to consistent membership and order queries on a totally ordered set in zero knowledge. We present a construction of ZKL based on zero-knowledge sets and a homomorphic integer commitment scheme. We then discuss why this construction is not as efficient as desired in cloud applications and present an efficient construction of PPAL based on bilinear accumulators and bilinear maps, which is provably secure and zero-knowledge.
Indexing Cost-Sensitive Prediction ; Predictive models are often used for real-time decision making. However, typical machine learning techniques ignore feature evaluation cost and focus solely on the accuracy of the models obtained using all the available features. We develop algorithms and indexes to support cost-sensitive prediction, i.e., making decisions with machine learning models while taking feature evaluation cost into account. Given an item and an online computation cost (i.e., time) budget, we present two approaches to return an appropriately chosen machine learning model that will run within the specified time on the given item. The first approach returns the optimal machine learning model, i.e., the one with the highest accuracy that runs within the specified time, but requires significant upfront precomputation time. The second approach returns a possibly suboptimal machine learning model, but requires little upfront precomputation time. We study these two algorithms in detail and characterize, using real and synthetic data, the scenarios in which each performs well. Unlike prior work that focuses on a narrow domain or a specific algorithm, our techniques are very general: they apply to any cost-sensitive prediction scenario on any machine learning algorithm.
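To make the trade-off between the two approaches concrete, here is a minimal Python sketch of the cheap second strategy: keep a catalog of models sorted by evaluation cost and, at query time, return the most accurate entry that fits the time budget. The catalog values are hypothetical, not taken from the paper.

```python
from bisect import bisect_right

# Hypothetical precomputed catalog: (evaluation_cost_ms, accuracy),
# sorted by cost. In the paper these would be trained models.
CATALOG = [(1.0, 0.71), (3.5, 0.78), (9.0, 0.84), (25.0, 0.91)]
COSTS = [c for c, _ in CATALOG]

def pick_model(time_budget_ms):
    """Return the most accurate (cost, accuracy) entry within budget.

    This is the low-precomputation strategy: a binary search over the
    cost-sorted catalog; it may be suboptimal when per-item costs vary
    around these average estimates."""
    i = bisect_right(COSTS, time_budget_ms)
    if i == 0:
        return None
    # accuracy need not be monotone in cost, so scan the feasible prefix
    return max(CATALOG[:i], key=lambda entry: entry[1])

print(pick_model(10.0))   # -> (9.0, 0.84)
```

The first approach described in the abstract would instead precompute, ahead of time, which model is optimal for each item and budget, trading upfront work for exactness.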
Cosmology with Nilpotent Superfields ; We discuss N=1 supergravity inflationary models based on two chiral multiplets, the inflaton and the goldstino superfield. Using superconformal methods for these models, we propose to replace the unconstrained chiral goldstino multiplet by the nilpotent one associated with non-linearly realized supersymmetry of the Volkov-Akulov type. In the new cosmological models, the sgoldstino is proportional to a bilinear combination of fermionic goldstinos. It does not acquire any vev, does not require stabilization, and does not affect the cosmological evolution. We explain a universal relation of these new models to kappa-symmetric super-Dp-brane actions. This modification significantly simplifies a broad class of the presently existing inflationary models based on supergravity and string theory, including the simplest versions of chaotic inflation, the Starobinsky model, a broad class of cosmological attractors, Higgs inflation, and much more. In particular, this is a step towards a fully supersymmetric version of the string theory axion monodromy inflation. The new construction serves as a simple and manifestly supersymmetric uplifting tool in the KKLT-type string theory landscape.
A simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks ; Small-world networks, complex networks characterized by a combination of high clustering and short path lengths, are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
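A minimal sketch of a distance-dependent formulation for the undirected case: every pair of ring nodes is linked independently with a probability that depends only on their lattice distance. The two probabilities p_near and p_far below are our reading of a beta-rewiring with mean degree k, not taken verbatim from the paper.

```python
import numpy as np

def ws_distance_model(n, k, beta, seed=None):
    """WS-type small-world graph from a distance-dependent connection
    probability (illustrative sketch, assumed probabilities)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)                  # ring (circular) distance
    p_near = 1.0 - beta + beta * k / (n - 1)  # lattice edge kept or re-landed
    p_far = beta * k / (n - 1)                # a rewired edge lands here
    p = np.where((d > 0) & (d <= k // 2), p_near, p_far)
    np.fill_diagonal(p, 0.0)
    upper = np.triu(rng.random((n, n)) < p, 1)
    return upper | upper.T                    # symmetric boolean adjacency

adj = ws_distance_model(1000, 10, 0.1, seed=42)
print("mean degree:", adj.sum(axis=1).mean())   # ~= k
```

Because every pair is treated independently given its distance, the directed variant follows by simply dropping the symmetrization step.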
A comparison of model view controller and model view presenter ; Web application frameworks are built using different design strategies, and design strategies are applied through different design processes. In each design process, requirement specifications are transformed into design models that describe the details of data structures, system architecture, interfaces, and components. Web application frameworks are commonly implemented using Model View Controller (MVC) and Model View Presenter (MVP). These web application models are used to provide a standardized view for web applications. This paper focuses on the different design aspects of MVC and MVP. In particular, we present the methodologies related to the implementation of MVC and MVP, and the choice of an appropriate platform and suitable environment for each.
Conceptual and mathematical modeling of insect-borne plant diseases: theory and application to flavescence dorée in grapevine ; Insect-borne plant diseases recur commonly in wild plants and in agricultural crops, and are responsible for severe losses in terms of produce yield and monetary return. Mathematical models of insect-borne plant diseases are therefore an essential tool to help predict the progression of an epidemic disease and aid decision making when control strategies are to be implemented in the field. While retaining the general applicability of the proposed model to plant epidemics vectored by insects, we specifically investigated the epidemics of Flavescence dorée phytoplasma (FD) in grapevine (Vitis vinifera), transmitted by the leafhopper Scaphoideus titanus. The epidemiological model accounted for the life-cycle stages of S. titanus, the FD pathogen cycle within S. titanus and V. vinifera, the vineyard setting, and agronomic practices. The model was comprehensively tested against biological S. titanus life-cycle and FD epidemics data collected in various research sites in Piemonte, Italy, over multiple years. The work presented here represents a unique suite of governing equations tested on existing independent data and sets the basis for further modelling advances and possible applications to investigate the effectiveness of real-case epidemic control strategies and scenarios.
Augmented Neural Networks for Modelling Consumer Indebtedness ; Consumer debt has risen to be an important problem of modern societies, generating a lot of research aimed at understanding the nature of consumer indebtedness, which so far has been modelled mainly with statistical techniques. In this work we show that Computational Intelligence can offer a more holistic approach, better suited to the complex relationships present in an indebtedness dataset that Linear Regression cannot uncover. In particular, as our results show, Neural Networks achieve the best performance in modelling consumer indebtedness, especially when they incorporate the significant and experimentally verified results of the Data Mining process into the model, exploiting the flexibility Neural Networks offer in designing their topology. This novel method forms an elaborate framework for modelling consumer indebtedness that can be extended to other real-world applications.
Variability Modeling for Customizable SaaS Applications ; Most current Software-as-a-Service (SaaS) applications are developed as customizable service-oriented applications that serve a large number of tenants (users) with one application instance. The current rapid evolution of SaaS applications increases the demand to study the commonality and variability in software product lines that produce customizable SaaS applications. At runtime, customizability is required to meet different tenants' requirements. During the development process, defining and realizing commonality and variability in families of SaaS applications is required to develop reusable, flexible, and customizable SaaS applications at lower cost, in shorter time, and with higher quality. In this paper, the Orthogonal Variability Model (OVM) is used to model variability in a separate model, which is used to generate a simple and understandable customization model. Additionally, the Service-oriented architecture Modeling Language (SoaML) is extended to define and realize commonality and variability during the development of SaaS applications.
Towards a Family-based Analysis of Applicability Conditions in Architectural Delta Models ; Modeling variability in software architectures is a fundamental part of software product line development. MontiArc allows describing architectural variability in a modular way by a designated core architecture and a set of architectural delta models modifying the core architecture to realize other architecture variants. Delta models have to satisfy a set of applicability conditions for the definedness of the architectural variants. The applicability conditions can in principle be checked by generating all possible architecture variants, which requires considering the same intermediate architectures repeatedly. In order to reuse previously computed architecture variants, we propose a family-based analysis of the applicability conditions using the concept of inverse deltas.
Universal Quantum Computation by Scattering in the Fermi-Hubbard Model ; The Hubbard model may be the simplest model of particles interacting on a lattice, but simulation of its dynamics remains beyond the reach of current numerical methods. In this article, we show that general quantum computations can be encoded into the physics of wave packets propagating through a planar graph, with scattering interactions governed by the fermionic Hubbard model. Therefore, simulating the model on planar graphs is as hard as simulating quantum computation. We give two different arguments, demonstrating that the simulation is difficult both for wave packets prepared as excitations of the fermionic vacuum, and for hole wave packets at filling fraction one-half in the limit of strong coupling. In the latter case, which is described by the t-J model, there is only reflection and no transmission in the scattering events, as would be the case for classical hard spheres. In that sense, the construction provides a quantum mechanical analog of the Fredkin-Toffoli billiard ball computer.
Solution of the equations of motion for a super non-Abelian sigma model in curved background by the super Poisson-Lie T-duality ; The equations of motion of a super non-Abelian T-dual sigma model on the Lie supergroup (C^1_1 + A) in the curved background are explicitly solved by the super Poisson-Lie T-duality. To find the solution of the flat model we use the transformation of supercoordinates, transforming the metric into a constant one, which is shown to be a supercanonical transformation. Then, using the super Poisson-Lie T-duality transformations and the dual decomposition of elements of the Drinfel'd superdouble, the solution of the equations of motion for the dual sigma model is obtained. The general form of the dilaton fields satisfying the vanishing beta-function equations of the sigma models is found. In this respect, conformal invariance of the sigma models built on the Drinfel'd superdouble ((C^1_1 + A), I_(2|2)) is guaranteed up to one loop, at least.
Incremental Bounded Model Checking for Embedded Software (extended version) ; Program analysis is on the brink of mainstream usage in embedded systems development. Formal verification of behavioural requirements, finding runtime errors, and automated test case generation are some of the most common applications of automated verification tools based on Bounded Model Checking. Existing industrial tools for embedded software use an off-the-shelf Bounded Model Checker and apply it iteratively to verify the program with an increasing number of unwindings. This approach unnecessarily wastes time repeating work that has already been done and fails to exploit the power of incremental SAT solving. This paper reports on the extension of the software model checker CBMC to support incremental Bounded Model Checking and its successful integration with the industrial embedded software verification tool BTC EmbeddedTester. We present an extensive evaluation over large industrial embedded programs, which shows that incremental Bounded Model Checking cuts runtimes by one order of magnitude in comparison to the standard non-incremental approach, enabling the application of formal verification to large and complex embedded software.
The Immediate Exchange model: an analytical investigation ; We study the Immediate Exchange model, recently introduced by Heinsalu and Patriarca (Eur. Phys. J. B 87, 170 (2014)), who showed by simulations that the wealth distribution in this model converges to a Gamma distribution with shape parameter 2. Here we justify this conclusion analytically, in the infinite-population limit. An infinite-population version of the model is derived, describing the evolution of the wealth distribution in terms of iterations of a nonlinear operator on the space of probability densities. It is proved that the Gamma distributions with shape parameter 2 are fixed points of this operator, and that, starting with an arbitrary wealth distribution, the process converges to one of these fixed points. We also discuss the mixed model introduced in the same paper, in which exchanges are either bidirectional or unidirectional with fixed probability. We prove that, although, as found by Heinsalu and Patriarca, the equilibrium distribution can be closely fit by Gamma distributions, the equilibrium distribution for this model is not a Gamma distribution.
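A direct simulation sketch of the finite-population dynamics the paper analyzes in the infinite-population limit: in every encounter both agents simultaneously hand over a uniform random fraction of their wealth, and the empirical distribution drifts toward a Gamma law with shape parameter 2. Population size and sweep count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sweeps = 5_000, 100
w = np.ones(n)                       # equal initial wealth, mean 1

for _ in range(sweeps * n):          # one random pair per interaction
    i, j = rng.integers(n, size=2)
    if i == j:
        continue
    # both agents simultaneously give away a uniform random fraction
    gi, gj = rng.random() * w[i], rng.random() * w[j]
    w[i] += gj - gi
    w[j] += gi - gj

# Gamma(shape 2, scale 1/2) has mean 1 and variance 0.5
print("sample mean:", w.mean(), " sample variance:", w.var())
```

Total wealth is conserved exactly by each exchange, which is why only the shape of the distribution, not its mean, evolves.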
Gaussian Tree Constraints Applied to Acoustic Linguistic Functional Data ; Evolutionary models of languages are usually considered to take the form of trees. With the development of so-called tree constraints, the plausibility of the tree model assumptions can be addressed by checking whether the moments of observed variables lie within regions consistent with trees. In our linguistic application, the data set comprises acoustic samples (audio recordings) from speakers of five Romance languages or dialects. We wish to assess these functional data for compatibility with a hereditary tree model at the language level. A novel combination of canonical function analysis (CFA) with a separable covariance structure provides a method for generating a representative basis for the data. This resulting basis is formed of components which emphasize language differences whilst maintaining the integrity of the observational language groupings. A previously unexploited Gaussian tree constraint is then applied to component-by-component projections of the data to investigate adherence to an evolutionary tree. The results indicate that while a tree model is unlikely to be suitable for modeling all aspects of the acoustic linguistic data, certain features of the spoken Romance languages highlighted by the separable-CFA basis may indeed be suitably modeled as a tree.
Patterns in the English Language: Phonological Networks, Percolation and Assembly Models ; In this paper we provide a quantitative framework for the study of phonological networks (PNs) for the English language by carrying out principled comparisons to null models, either based on site percolation, randomization techniques, or network growth models. In contrast to previous work, we mainly focus on null models that reproduce lower-order characteristics of the empirical data. We find that artificial networks matching connectivity properties of the English PN are exceedingly rare; this leads to the hypothesis that the word repertoire might have been assembled over time by preferentially introducing new words which are small modifications of old words. Our null models are able to explain the power-law-like part of the degree distributions and generally retrieve qualitative features of the PN such as high clustering, a high assortativity coefficient, and small-world characteristics. However, the detailed comparison to expectations from null models also points out significant differences, suggesting the presence of additional constraints in word assembly. Key constraints we identify are the avoidance of large degrees, the avoidance of triadic closure, and the avoidance of large non-percolating clusters.
Scalable Nonparametric Bayesian Inference on Point Processes with Gaussian Processes ; In this paper we propose the first nonparametric Bayesian model using Gaussian Processes to make inference on Poisson Point Processes without resorting to gridding the domain or to introducing latent thinning points. Unlike competing models that scale cubically and have a squared memory requirement in the number of data points, our model has linear complexity and memory requirement. We propose an MCMC sampler and show that our model is faster, more accurate and generates less correlated samples than competing models on both synthetic and real-life data. Finally, we show that our model easily handles data sizes not considered thus far by alternate approaches.
On-the-fly Probabilistic Model Checking ; Model checking approaches can be divided into two broad categories: global approaches that determine the set of all states in a model M that satisfy a temporal logic formula f, and local approaches in which, given a state s in M, the procedure determines whether s satisfies f. When s is a term of a process language, the model checking procedure can be executed on-the-fly, driven by the syntactical structure of s. For certain classes of systems, e.g. those composed of many parallel components, the local approach is preferable because, depending on the specific property, it may be sufficient to generate and inspect only a relatively small part of the state space. We propose an efficient, on-the-fly, PCTL model checking procedure that is parametric with respect to the semantic interpretation of the language. The procedure comprises both bounded and unbounded until modalities. The correctness of the procedure is shown and its efficiency is compared with a global PCTL model checker on representative applications.
Exact low-temperature series expansion for the partition function of the two-dimensional zero-field spin-1/2 Ising model on the infinite square lattice ; In this paper, we provide the exact expression for the coefficients in the low-temperature series expansion of the partition function of the two-dimensional Ising model on the infinite square lattice. This is equivalent to the exact determination of the number of spin configurations at a given energy. With these coefficients, we show that the ferromagnetic-to-paramagnetic phase transition in the square lattice Ising model can be explained through an equivalence between the model and a perfect gas of energy clusters, in which the passage through the critical point is related to a complete change in the thermodynamic preferences on the size of clusters. The combinatorial approach reported in this article is very general and can be easily applied to other lattice models.
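For orientation, the object in question is the standard low-temperature expansion of the square-lattice partition function; in our notation (coupling J, inverse temperature beta, N sites and hence 2N bonds),

```latex
Z_N(\beta) \;=\; 2\, e^{2N\beta J} \sum_{r \ge 0} g_N(r)\, e^{-2\beta J r},
```

where g_N(r) counts the spin configurations with r broken bonds, i.e. the number of configurations at a given energy above the ground state; the paper's contribution is an exact expression for these coefficients.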
Baryonic torii: Toroidal baryons in a generalized Skyrme model ; We study a Skyrme-type model with a potential term motivated by Bose-Einstein condensates (BECs), which we call the BEC Skyrme model. We consider two flavors of the model: the first is the Skyrme model, and the second has a sixth-order derivative term instead of the Skyrme term; both include the added BEC-motivated potential. The model contains toroidally shaped Skyrmions characterized by two integers P and Q, representing the winding numbers of two complex scalar fields along the toroidal and poloidal cycles of the torus, respectively. The baryon number is B = PQ. We find stable Skyrmion solutions for P = 1, 2, 3, 4, 5 with Q = 1, while for P = 6 and Q = 1 the solution is only metastable. We further find that configurations with Q > 1 are all unstable and split into Q configurations with Q = 1. Finally we discover a phase transition, possibly of first order, in the mass parameter of the potential under study.
A Hybrid Recurrent Neural Network For Music Transcription ; We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame-level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal, and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset, and we observe that the proposed model consistently outperforms existing transcription methods.
Correction terms for propagators and d'Alembertians due to spacetime discreteness ; The causal set approach to quantum gravity models spacetime as a discrete structure: a causal set. Recent research has led to causal set models for the retarded propagator for the Klein-Gordon equation and the d'Alembertian operator. These models can be compared to their continuum counterparts via a sprinkling process. It has been shown that the models agree exactly with the continuum quantities in the limit of an infinite sprinkling density, the continuum limit. This paper obtains the correction terms for these models for sprinkled causal sets with a finite sprinkling density. These correction terms are an important step towards testable differences between the continuum and discrete models that could provide evidence of spacetime discreteness.
Whack-A-Mole Model: Towards a unified description of biological effects caused by radiation exposure ; We present a novel model to estimate biological effects caused by artificial radiation exposure, the Whack-A-Mole (WAM) model. It is important to take account of recovery effects during the time course of the cellular reactions. The inclusion of dose-rate dependence is essential in the risk estimation of low-dose radiation, while nearly all existing theoretical models rely on the total dose dependence only. By analyzing experimental data on the relation between radiation dose and induced mutation frequency in five organisms (mouse, drosophila, chrysanthemum, maize, and tradescantia), we found that all the data can be reproduced by the WAM model. Most remarkably, a scaling function derived from the WAM model consistently accounts for the observed mutation frequencies of all five organisms. This is the first rationale to account for the dose-rate dependence as well as to give a unified understanding of a general feature of organisms.
A Frequentist Approach to Computer Model Calibration ; This paper considers the computer model calibration problem and provides a general frequentist solution. Under the proposed framework, the data model is semiparametric with a nonparametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important but often ignored identifiability issue between the computer model parameters and the discrepancy function, this paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. The practical performance of the proposed methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
Graphical Modeling of Spatial Health Data ; The literature on Gaussian graphical models (GGMs) contains two equally rich and equally significant domains of research efforts and interests. The first research domain relates to the problem of graph determination; that is, the underlying graph is unknown and needs to be inferred from the data. The second research domain dominates the applications in spatial epidemiology. In this context GGMs are typically referred to as Gaussian Markov random fields (GMRFs). Here the underlying graph is assumed to be known: the vertices correspond to geographical areas, while the edges are associated with areas that are considered to be neighbors of each other (e.g., if they share a border). We introduce multiway Gaussian graphical models that unify the statistical approaches to inference for spatiotemporal epidemiology with the literature on general GGMs. The novelty of the proposed work consists of the addition of the G-Wishart distribution to the substantial collection of statistical tools used to model multivariate areal data. As opposed to fixed graphs that describe geography, there is an inherent uncertainty related to graph determination across the other dimensions of the data. Our new class of methods for spatial epidemiology allows the simultaneous use of GGMs to represent known spatial dependencies and to determine unknown dependencies in the other dimensions of the data. Keywords: Gaussian graphical models, Gaussian Markov random fields, spatiotemporal multivariate models.
Peridynamics and Material Interfaces ; The convergence of a peridynamic model for solid mechanics inside heterogeneous media in the limit of vanishing nonlocality is analyzed. It is shown that the operator of linear peridynamics for an isotropic heterogeneous medium converges to the corresponding operator of linear elasticity when the material properties are sufficiently regular. On the other hand, when the material properties are discontinuous, i.e., when material interfaces are present, it is shown that the operator of linear peridynamics diverges, in the limit of vanishing nonlocality, at material interfaces. Nonlocal interface conditions, whose local limit implies the classical interface conditions of elasticity, are then developed and discussed. A peridynamics material interface model is introduced which generalizes the classical interface model of elasticity. The model consists of a new peridynamics operator along with nonlocal interface conditions. The new peridynamics interface model converges to the classical interface model of linear elasticity.
Multilinear tensor regression for longitudinal relational data ; A fundamental aspect of relational data, such as from a social network, is the possibility of dependence among the relations. In particular, the relations between members of one pair of nodes may have an effect on the relations between members of another pair. This article develops a type of regression model to estimate such effects in the context of longitudinal and multivariate relational data, or other data that can be represented in the form of a tensor. The model is based on a general multilinear tensor regression model, a special case of which is a tensor autoregression model in which the tensor of relations at one time point is parsimoniously regressed on relations from previous time points. This is done via a separable, or Kronecker-structured, regression parameter along with a separable covariance model. In the context of an analysis of longitudinal multivariate relational data, it is shown how the multilinear tensor regression model can represent patterns that often appear in relational and network data, such as reciprocity and transitivity.
Constraining a scalar field dark energy with a variable equation of state for matter ; The redshift z_eq, marking the end of the radiation era and the beginning of the matter-dominated era, can play an important role in reconstructing dark-energy models. A variable equation of state for matter that can bring a smooth transition from the radiation to the matter-dominated era in a single model is proposed to estimate z_eq in dark energy models and hence assess their viability. Two one-parameter models with minimally coupled scalar fields playing the role of dark energy are chosen to demonstrate this point. It is found that, for the desired late-time behavior of the models, the estimated value of z_eq is highly sensitive to the value of the parameter in each of these models.
Streaming Variational Inference for Bayesian Nonparametric Mixture Models ; In theory, Bayesian nonparametric (BNP) models are well suited to streaming data scenarios due to their ability to adapt model complexity with the observed data. Unfortunately, such benefits have not been fully realized in practice; existing inference algorithms are either not applicable to streaming applications or not extensible to BNP models. For the special case of Dirichlet processes, streaming inference has been considered. However, there is growing interest in more flexible BNP models building on the class of normalized random measures (NRMs). We work within this general framework and present a streaming variational inference algorithm for NRM mixture models. Our algorithm is based on assumed density filtering (ADF), leading straightforwardly to expectation propagation (EP) for large-scale batch inference as well. We demonstrate the efficacy of the algorithm on clustering documents in large, streaming text corpora.
Dwell Time Prediction Model for Minimizing Unnecessary Handovers in Heterogeneous Wireless Networks, Considering an Amoebic-Shaped Coverage Region ; Over the years, vertical handover necessity estimation has attracted the interest of numerous researchers. Despite the attractive benefits of integrating different wireless platforms, mobile users are confronted with the issue of detrimental handover. This paper uses extensive geometric and probability analysis to model the coverage area of a WLAN cell, and thus presents a realistic and novel model that attempts to minimize unnecessary handover and handover failure for a mobile node (MN) traversing the WLAN cell from a third-generation (3G) network. The dwell time is estimated along with the threshold values to ensure an optimal handover decision by the MN, while the probabilities of unnecessary handover and handover failure are kept within tolerable bounds. Monte Carlo simulations were carried out to show the behavior of the proposed model. Results were validated by comparing this model with existing models for unnecessary handover minimization.
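A toy Monte Carlo in the spirit of this dwell-time analysis, with an idealized circular cell standing in for the paper's amoebic coverage region: the MN crosses the cell along a random straight chord, and a handover is counted as unnecessary when the dwell time falls below a threshold. Radius, speed, and threshold values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
R, v, tau = 100.0, 15.0, 6.0    # cell radius (m), MN speed (m/s), threshold (s)
trials = 100_000

# a straight-line crossing: uniformly random entry angle measured from
# the inward normal at the entry point; the chord length fixes the
# dwell time inside the (idealized circular) WLAN cell
phi = rng.uniform(-np.pi / 2, np.pi / 2, trials)
chord = 2 * R * np.cos(phi)
dwell = chord / v

p_unnecessary = np.mean(dwell < tau)
print(f"P(dwell < {tau}s) ~= {p_unnecessary:.3f}")
```

The paper's contribution is precisely to replace this circular idealization with an amoebic region and to derive thresholds that keep such probabilities within tolerable bounds.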
A Canonical Representation of Data-Linear Visualization Algorithms ; We introduce linear-state dataflows, a canonical model for a large set of visualization algorithms that we call data-linear visualizations. Our model defines a fixed dataflow architecture: partitioning and subpartitioning of input data, ordering, graphic primitives, and graphic attribute generation. Local variables and accumulators are specific concepts that extend the expressiveness of the dataflow to support features of visualization algorithms that require state handling. We first show the flexibility of our model: it enables the declarative construction of many common algorithms with just a few mappings. Furthermore, the model enables easy mixing of visual mappings, such as creating treemaps of histograms and 2D plots, plots of histograms... Finally, we introduce our model in a more formal way and present some of its important properties. We have implemented this model in a visualization framework built around the concept of linear-state dataflows.
Quenched central limit theorems for the Ising model on random graphs ; The main goal of the paper is to prove central limit theorems for the magnetization rescaled by √N for the Ising model on random graphs with N vertices. Both random quenched and averaged quenched measures are considered. We work in the uniqueness regime β < β_c, or β > 0 and B ≠ 0, where β is the inverse temperature, β_c is the critical inverse temperature and B is the external magnetic field. In the random quenched setting our results apply to general tree-like random graphs, as introduced by Dembo and Montanari and further studied by Dommers and the first and third author, and our proof follows that of Ellis in Z^d. For the averaged quenched setting, we specialize to two particular random graph models, namely the 2-regular configuration model and the configuration model with degrees 1 and 2. In these cases our proofs are based on explicit computations relying on the solution of one-dimensional Ising models.
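In symbols, the random quenched statement takes the familiar form (our notation; M_N is the total spin):

```latex
\frac{M_N - \mathbb{E}[M_N]}{\sqrt{N}} \;\xrightarrow{\;d\;}\; \mathcal{N}(0, \chi),
\qquad M_N = \sum_{i=1}^{N} \sigma_i ,
```

where the limiting variance χ depends on (β, B) and on the random graph model; in the averaged quenched setting the fluctuations of the graph itself can contribute to the limit, which is why that case is treated on specific configuration models.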
The supervised hierarchical Dirichlet process ; We propose the supervised hierarchical Dirichlet process (sHDP), a nonparametric generative model for the joint distribution of a group of observations and a response variable directly associated with that whole group. We compare the sHDP with another leading method for regression on grouped data, the supervised latent Dirichlet allocation (sLDA) model. We evaluate our method on two real-world classification problems and two real-world regression problems. Bayesian nonparametric regression models based on the Dirichlet process, such as the Dirichlet process-generalised linear model (DP-GLM), have previously been explored; these models allow flexibility in modelling nonlinear relationships. However, until now, hierarchical Dirichlet process (HDP) mixtures have not seen significant use in supervised problems with grouped data, since a straightforward application of the HDP on the grouped data results in learnt clusters that are not predictive of the responses. The sHDP solves this problem by allowing for clusters to be learnt jointly from the group structure and from the label assigned to each group.
A comparative analysis of UK and Italian small businesses using Generalised Extreme Value models ; This paper presents a cross-country comparison of significant predictors of small business failure between Italy and the UK. Financial measures of profitability, leverage, coverage, liquidity, scale and non-financial information are explored, and some commonalities and differences are highlighted. Several models are considered, starting with logistic regression, which is a standard approach in credit risk modelling. Some important improvements are investigated. Generalised Extreme Value (GEV) regression is applied to correct for the symmetric link function of the logistic regression. The assumption of linearity is relaxed through application of BGEVA, a nonparametric additive model based on the GEV link function. Two methods of handling missing values are compared: multiple imputation and Weights of Evidence (WoE) transformation. The results suggest that the best predictive performance is obtained by BGEVA, thus implying the necessity of taking into account the relative volume of defaults and nonlinear patterns when modelling SME performance. WoE for the majority of the models considered shows better prediction compared to multiple imputation, suggesting that missing values could be informative and should not be assumed to be missing at random.
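For reference, GEV regression replaces the symmetric logit link with the (asymmetric) GEV distribution function, so the failure probability takes the form (our notation, with shape parameter ξ and positive part [·]_+):

```latex
P(y_i = 1 \mid x_i) \;=\; \exp\!\left\{ -\left[\, 1 + \xi\, x_i^{\top}\beta \,\right]_{+}^{-1/\xi} \right\},
```

which recovers a Gumbel-type log-log link as ξ → 0 and lets the response curve approach 0 and 1 at different rates, a useful property when defaults are rare relative to non-defaults.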
Nonlinear GARCH model and 1/f noise ; Autoregressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract the interest of researchers. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, i.e. the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes in consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian-noise-like. However, the nonlinear modifications exhibit both a power-law distribution and a power spectral density of the power-law form, including 1/f noise.
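As a baseline for the comparison the paper makes, here is a minimal simulation sketch of the linear GARCH(1,1) recursion; the nonlinear NGARCH-like variants discussed in the paper would modify the sigma2 update below. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 100_000
a0, a1, b1 = 1e-5, 0.10, 0.88        # a1 + b1 < 1: covariance stationary

r = np.empty(T)
sigma2 = a0 / (1.0 - a1 - b1)        # start at the stationary variance
for t in range(T):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = a0 + a1 * r[t] ** 2 + b1 * sigma2   # the (linear) recursion

# crude Hill estimate of the power-law tail exponent of |r|
tail = np.sort(np.abs(r))[-1000:]    # top 1000 order statistics, ascending
print("tail exponent ~", 1.0 / np.mean(np.log(tail[1:] / tail[0])))
```

The distribution of r shows the heavy power-law tail the abstract mentions, while the spectrum of |r| stays Brownian-like; reproducing a 1/f spectrum requires the nonlinear modification of the variance recursion.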
Stochastic block model and exploratory analysis in signed networks ; We propose a generalized stochastic block model to explore the mesoscopic structures in signed networks by grouping vertices that exhibit similar positive and negative connection profiles into the same cluster. In this model, the group memberships are viewed as hidden or unobserved quantities, and the connection patterns between groups are explicitly characterized by two block matrices, one for positive links and the other for negative links. By fitting the model to the observed network, we can not only extract various structural patterns existing in the network without prior knowledge, but also recognize what specific structures we obtained. Furthermore, the model parameters provide vital clues about the probabilities that each vertex belongs to different groups and the centrality of each vertex in its corresponding group. This information sheds light on the discovery of the networks' overlapping structures and the identification of two types of important vertices, which serve as the cores of each group and the bridges between different groups, respectively. Experiments on a series of synthetic and real-life networks show the effectiveness as well as the superiority of our model.
A Composite Risk Measure Framework for Decision Making under Uncertainty ; In this paper, we present a unified framework for decision making under uncertainty. Our framework is based on the composite of two risk measures, where the inner risk measure accounts for the risk of a decision given the exact distribution of uncertain model parameters, and the outer risk measure quantifies the risk that occurs when estimating the parameters of the distribution. We show that the model is tractable under mild conditions. The framework is a generalization of several existing models, including stochastic programming, robust optimization, and distributionally robust optimization. Using this framework, we study a few new models which imply probabilistic guarantees for solutions and yield less conservative results compared to traditional models. Numerical experiments are performed on portfolio selection problems to demonstrate the strength of our models.
An Effective Handover Analysis for Randomly Distributed Heterogeneous Cellular Networks ; The handover rate is one of the most important metrics for guiding mobility management and resource management in wireless cellular networks. In the literature, mathematical expressions for the handover rate have been derived for homogeneous cellular networks by both the regular hexagonal coverage model and the stochastic geometry model, but there has not been any reliable result for heterogeneous cellular networks (HCNs). Recently, stochastic geometry modeling has been shown to model well the real deployment of HCNs and has been extensively used to analyze them. In this paper, we give an effective handover analysis for HCNs by stochastic geometry modeling, derive the mathematical expression of the handover rate by employing an infinitesimal method for a generalized multi-tier scenario, discuss the result by deriving some meaningful corollaries, and validate the analysis by computer simulation with multiple walking models. By our analysis, we find that in HCNs the handover rate is related to many factors, such as the base stations' densities and transmitting powers, the user's velocity distribution, the bias factor, and the path loss factor. Although our analysis focuses on the scenario of multi-tier HCNs, the analytical framework can be easily extended to more complex scenarios, and may shed some light on future study.
Mathematical existence results for the Doi-Edwards polymer model ; In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and tested extensively in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two-dimensional case, without any restriction on the smallness of the data.
Simulation-based Sensitivity Analysis for Non-ignorable Missing Data ; Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness. It analyses how sensitively the conclusions may depend on assumptions about the missing data, e.g. the missing data mechanism (MDM). We call models under certain assumptions sensitivity models. To make sensitivity analysis useful in practice we need to define some simple and interpretable statistical quantities to assess the sensitivity models. However, the assessment is difficult when the missing data mechanism is missing not at random (MNAR). We propose a novel approach in this paper for investigating those assumptions based on the K-nearest-neighbour (KNN) distances of datasets simulated from various MNAR models. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and regression analysis with non-ignorable missing covariates.
Higgs Flavor Violation as a Signal to Discriminate Models ; We consider the Higgs lepton flavor violating process h → τμ, in which CMS found a 2.5σ excess of events, from a model-independent perspective, and find that it is difficult to generate this operator without also obtaining a sizeable Wilson coefficient for the dipole operators responsible for tau radiative decay, constrained by BaBar to BR(τ → μγ) < 4.4 × 10^-8. We then survey a set of representative models for new physics, to determine which ones are capable of evading this problem. We conclude that, should this measurement persist as a signal, type-III Two Higgs Doublet Models and Higgs-portal-like models are favored, while SUSY and Composite Higgs models are unlikely to explain it.
A MILP model for single machine family scheduling with sequence-dependent batch setup and controllable processing times ; A mathematical programming model for a class of single machine family scheduling problems is described in this technical report, with the aim of comparing the performance in solving the scheduling problem by means of mathematical programming with the performance obtained when using optimal control strategies, which can be derived from the application of a dynamic-programming-based methodology proposed by the Author. The scheduling problem is characterized by the presence of sequence-dependent batch setup and controllable processing times; moreover, the generalized due-date model is adopted in the problem. Three mixed-integer linear programming (MILP) models are proposed. The best one, from the performance point of view, is a model which makes use of two sets of binary variables: the former to define the relative position of jobs and the latter to define the exact sequence of jobs. In addition, one of the models exploits a stage-based state space representation which can be adopted to define the dynamics of the system.
Exploring the low redshift universe: two parametric models for effective pressure ; Astrophysical observations have put unprecedentedly tight constraints on cosmological theories. The ΛCDM model, which is mathematically simple and fits observational datasets well, is preferred for explaining the behavior of the universe. But many basic features of the dark sectors are still unknown, which leaves room for various non-standard cosmological hypotheses. Since the pressure of cosmological-constant dark energy is unvarying, and ignoring contributions from radiation and curvature terms at low redshift, the effective pressure remains constant. In this paper, we propose two parametric models for a non-constant effective pressure in order to study the tiny deviation from ΛCDM at low redshift. We recover our phenomenological models in the scenarios of quintessence and phantom fields, and explore the behavior of the scalar field and potential. We constrain our model parameters with SNe Ia and BAO observations, and detect subtle hints of w_de < -1 from the data fitting results of both models, which indicates a possible phantom dark energy scenario at present.
Regge calculus models of closed lattice universes ; This paper examines the behaviour of 'closed lattice universes', wherein masses are distributed in a regular lattice on the Cauchy surfaces of closed vacuum universes. Such universes are approximated using a form of Regge calculus originally developed by Collins and Williams to model closed FLRW universes. We consider two types of lattice universes, one where all masses are identical to each other and another where one mass gets perturbed in magnitude. In the unperturbed universe, we consider the possible arrangements of the masses in the Regge Cauchy surfaces and demonstrate that the model will only be stable if each mass lies within some spherical region of convergence. We also briefly discuss the existence of Regge models that are dual to the ones we have considered. We then model a perturbed lattice universe and demonstrate that the model's evolution is well-behaved, with the expansion increasing in magnitude as the perturbation is increased.
'Constraint consistency' at all orders in cosmological perturbation theory ; We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as 'constraint consistency', that needs to be satisfied. We propose a quick and efficient method to check the consistency for any model, including modified gravity models. Our analysis points out an important feature which is crucial for inflationary model building: all 'constraint-inconsistent' models have higher-order Ostrogradsky instabilities, but the reverse is not true. In other words, a model with constrained lapse function and shift vector can be consistent and yet have Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.
Temporal Embedding in Convolutional Neural Networks for Robust Learning of Abstract Snippets ; The prediction of periodical time-series remains challenging due to various types of data distortions and misalignments. Here, we propose a novel model called Temporal embedding-enhanced convolutional neural Network (TeNet) to learn repeatedly-occurring-yet-hidden structural elements in periodical time-series, called abstract snippets, for predicting future changes. Our model uses convolutional neural networks and embeds a time-series with its potential neighbors in the temporal domain for aligning it to the dominant patterns in the dataset. The model is robust to distortions and misalignments in the temporal domain and demonstrates strong prediction power for periodical time-series. We conduct extensive experiments and discover that the proposed model shows significant and consistent advantages over existing methods on a variety of data modalities ranging from human mobility to household power consumption records. Empirical results indicate that the model is robust to various factors such as the number of samples, the variance of the data, and the numerical ranges of the data. The experiments also verify that the intuition behind the model can be generalized to multiple data types and applications and promises significant improvement in prediction performance across the datasets studied.
A Non-Probabilistic Model of Relativised Predictability in Physics ; Little effort has been devoted to studying generalised notions or models of unpredictability, yet unpredictability is an important concept throughout physics and plays a central role in quantum information theory, where key results rely on the supposed inherent unpredictability of measurement outcomes. In this paper we continue the programme started in [1] of developing a general, non-probabilistic model of unpredictability in physics. We present a more refined model that is capable of studying different degrees of relativised unpredictability. This model is based on the ability of an agent, acting via uniform, effective means, to predict correctly and reproducibly the outcome of an experiment using finite information extracted from the environment. We use this model to study further the degree of unpredictability certified by different quantum phenomena, showing that quantum complementarity guarantees a form of relativised unpredictability that is weaker than that guaranteed by Kochen-Specker-type value indefiniteness. We exemplify further the difference between certification by complementarity and value indefiniteness by showing that, unlike value indefiniteness, complementarity is compatible with the production of computable sequences of bits.
Weakly Supervised Learning of Objects, Attributes and their Associations ; When humans describe images they tend to use combinations of nouns and adjectives, corresponding to objects and their associated attributes respectively. To generate such a description automatically, one needs to model objects, attributes and their associations. Conventional methods require strong annotation of object and attribute locations, making them less scalable. In this paper, we model object-attribute associations from weakly labelled images, such as those widely available on media sharing sites (e.g. Flickr), where only image-level labels (either objects or attributes) are given, without their locations or associations. This is achieved by introducing a novel weakly supervised nonparametric Bayesian model. Once learned, given a new image, our model can describe the image, including objects, attributes and their associations, as well as their locations and segmentation. Extensive experiments on benchmark datasets demonstrate that our weakly supervised model performs on par with strongly supervised models on tasks such as image description and retrieval based on object-attribute associations.
Bengali to Assamese Statistical Machine Translation using Moses (Corpus Based) ; Machine translation plays a major role in facilitating man-machine communication, as well as human-to-human communication, in Natural Language Processing (NLP). Machine Translation (MT) refers to using machines to convert text from one language to another. Statistical Machine Translation is a type of MT consisting of a Language Model (LM), a Translation Model (TM) and a decoder. In this paper, a Bengali to Assamese Statistical Machine Translation model has been created using Moses. Other tools, namely IRSTLM for the language model and GIZA++ v1.0.7 for the translation model, are utilized within this framework, which is available in Linux environments. The purpose of the LM is to encourage fluent output, the purpose of the TM is to encourage similarity between input and output, and the decoder maximizes the probability of the translated text in the target language. A parallel corpus of 17100 sentences in Bengali and Assamese has been utilized for training within this framework. Statistical MT techniques have not so far been widely explored for Indian languages. It will be interesting to discover to what degree these models can help the immense ongoing MT efforts in the country.
Predicting sports scoring dynamics with restoration and anti-persistence ; Professional team sports provide an excellent domain for studying the dynamics of social competitions. These games are constructed with simple, well-defined rules and payoffs that admit a high-dimensional set of possible actions and nontrivial scoring dynamics. The resulting gameplay, and efforts to predict its evolution, are the object of great interest to both sports professionals and enthusiasts. In this paper, we consider two online prediction problems for team sports: given a partially observed game, who will score next, and, ultimately, who will win? We present novel interpretable generative models of within-game scoring that allow for dependence on lead size (restoration) and on the last team to score (anti-persistence). We then apply these models to comprehensive within-game scoring data for four sports leagues over a ten-year period. By assessing these models' relative goodness-of-fit we shed new light on the underlying mechanisms driving the observed scoring dynamics of each sport. Furthermore, in both predictive tasks, our models consistently outperform baseline models, and they make quantitative assessments of the latent team skill over time.
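A toy generative sketch of the two effects such models capture: each scoring event goes to team A with a probability pulled toward one half by the current lead (restoration) and pushed away from the previous scorer (anti-persistence). The parametrization below is our own illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_game(events=80, alpha=0.02, rho=0.3):
    """Toy scoring model: P(team A scores next) is pulled toward 1/2 by
    A's current lead (restoration, strength alpha) and lowered if A
    scored last (anti-persistence, strength rho)."""
    lead, last = 0, 0                    # lead of A; last scorer: +1/-1
    for _ in range(events):
        p = 0.5 - alpha * lead - 0.5 * rho * last
        p = min(max(p, 0.01), 0.99)      # keep a valid probability
        s = 1 if rng.random() < p else -1
        lead += s
        last = s
    return lead

finals = [simulate_game() for _ in range(10_000)]
print("mean final lead:", np.mean(finals), " sd:", np.std(finals))
```

With alpha = rho = 0 this reduces to a simple random walk in the lead; the two extra terms are what make final margins tighter and scoring runs alternate more than chance would suggest.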
Non-Gaussian Discriminative Factor Models via the Max-Margin Rank-Likelihood ; We consider the problem of discriminative factor analysis for data that are in general non-Gaussian. A Bayesian model based on the ranks of the data is proposed. We first introduce a new max-margin version of the rank-likelihood. A discriminative factor model is then developed, integrating the max-margin rank-likelihood and linear Bayesian support vector machines, which are also built on the max-margin principle. The discriminative factor model is further extended to the nonlinear case through mixtures of local linear classifiers, via Dirichlet processes. Fully local conjugacy of the model yields efficient inference with both Markov chain Monte Carlo and variational Bayes approaches. Extensive experiments on benchmark and real data demonstrate superior performance of the proposed model and its potential for applications in computational biology.
Unitarity sum rules, three-site moose model, and the ATLAS 2 TeV diboson anomalies ; We investigate W' interpretations for the ATLAS 2 TeV diboson anomalies. The roles of the unitarity sum rules, which ensure the perturbativity of the longitudinal vector boson scattering amplitudes, are emphasized. We find the unitarity sum rules and the custodial symmetry are powerful enough to predict various nontrivial relations among the WWZ', WZW', WWh, WW'h and ZZ'h coupling strengths in a model-independent manner. We also perform surveys in the general parameter space of W' models and find the ATLAS 2 TeV diboson anomalies may be interpreted as a W' particle of the three-site moose model, i.e., a Kaluza-Klein-like particle in a deconstructed extra dimension model. It is also shown that a non-standard-model-like Higgs boson is favored by the present data to interpret the ATLAS diboson anomalies as the consequences of the W' and Z' bosons.
Global solution for a kinetic chemotaxis model with internal dynamics and its fast adaptation limit ; A nonlinear kinetic chemotaxis model with internal dynamics incorporating signal transduction and adaptation is considered. This paper is concerned with (i) the global solution for this model and (ii) its fast adaptation limit to an Othmer-Dunbar-Alt type model. This limit gives some insight into the molecular origin of chemotaxis behaviour. First, by using the Schauder fixed point theorem, the global existence of a weak solution is proved based on detailed a priori estimates, under quite general assumptions on the model and the initial data. However, the Schauder fixed point theorem does not provide uniqueness, so additional analysis is required to establish it. Next, the fast adaptation limit of this model is derived by extracting a weakly convergent subsequence in measure space. For this limit, the first difficulty is to show the concentration effect on the internal state. When the small parameter epsilon, the adaptation time scale, goes to zero, we prove that the solution converges to a Dirac mass in the internal state variable. Another difficulty is the strong compactness argument on the chemical potential, which is essential for passing the nonlinear kinetic equation to the weak limit.
Gauging the Relativistic Particle Model on the Noncommutative Plane ; We construct a new model for a relativistic particle on the noncommutative surface in 2+1 dimensions, using the symplectic formalism of constrained systems and embedding the model in an extended phase space. We suggest a shortcut for constructing the gauged Lagrangian, using the Poisson algebra of constraints, without carrying out the whole procedure of the symplectic formalism. We also propose an approach for systems in which the symplectic formalism is not applicable, due to the truncation of secondary constraints appearing at the first level. After gauging the model, we obtain the generators of gauge transformations of the model. Finally, by extracting the corresponding Poisson structure of all constraints, we show the effect of gauging on the canonical structure of the phase spaces of both the primary and gauged models.
Upper bounds for the number of removed edges in the Erased Configuration Model ; Models for generating simple graphs are important in the study of real-world complex networks. A well-established example of such a model is the erased configuration model, where each node receives a number of half-edges that are connected to half-edges of other nodes at random, after which self-loops are removed and multiple edges are merged to make the graph simple. Although asymptotic results for many properties of this model, such as the limiting degree distribution, are known, the exact speed of convergence in terms of the graph size remains an open question. We provide a first answer by analyzing the size dependence of the average number of removed edges in the erased configuration model. By combining known upper bounds with a Tauberian theorem we obtain upper bounds for the number of removed edges in terms of the size of the graph. Remarkably, when the degree distribution follows a power law, we observe three scaling regimes, depending on the power-law exponent. Our results provide a strong theoretical basis for evaluating finite-size effects in networks.
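As a rough illustration of the quantity being bounded, the following minimal Python sketch pairs half-edges uniformly at random and counts how many edges the erasure step removes; the Pareto draw standing in for a power-law degree sequence and all parameter values are illustrative assumptions, not the paper's setup:

    import numpy as np

    def removed_edges_ecm(degrees, rng):
        # Pair half-edges uniformly at random, then erase self-loops and
        # merge multiple edges; return the number of removed edges.
        stubs = np.repeat(np.arange(len(degrees)), degrees)
        rng.shuffle(stubs)
        if len(stubs) % 2:                 # need an even number of half-edges
            stubs = stubs[:-1]
        pairs = np.sort(stubs.reshape(-1, 2), axis=1)
        no_loops = pairs[pairs[:, 0] != pairs[:, 1]]
        simple = {tuple(p) for p in no_loops}
        return len(pairs) - len(simple)

    rng = np.random.default_rng(0)
    n, tau = 10_000, 2.5                   # tau: assumed power-law exponent
    degrees = np.floor(rng.pareto(tau - 1.0, n) + 1.0).astype(int)
    print("removed edges:", removed_edges_ecm(degrees, rng))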
Inference in Ising Models ; The Ising spin glass is a one-parameter exponential family model for binary data with quadratic sufficient statistic. In this paper, we show that, given a single realization from this model, the maximum pseudolikelihood estimate (MPLE) of the natural parameter is $\sqrt{a_N}$-consistent at a point whenever the log-partition function has order $a_N$ in a neighborhood of that point. This gives consistency rates of the MPLE for ferromagnetic Ising models on general weighted graphs in all regimes, extending the results of Chatterjee (2007), where only $\sqrt{N}$-consistency of the MPLE was shown. It is also shown that consistent testing, and hence estimation, is impossible in the high temperature phase in ferromagnetic Ising models on a converging sequence of simple graphs, which include the Curie-Weiss model. In this regime, the sufficient statistic is distributed as a weighted sum of independent $\chi^2_1$ random variables, and the asymptotic power of the most powerful test is determined. We also illustrate applications of our results on synthetic and real-world network data.
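To make the estimator concrete, here is a minimal numpy/scipy sketch of the MPLE: for spins $x_i \in \{-1,+1\}$ and coupling matrix A, each conditional law is logistic in the local field, and the estimate maximizes the product of conditionals. The Curie-Weiss toy below (low-temperature phase, where the MPLE is consistent) uses illustrative values, not the paper's experiments:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def mple_ising(x, A):
        # MPLE of beta in P(x) ~ exp(beta/2 * x'Ax), spins x_i in {-1,+1}:
        # maximize sum_i log P(x_i | x_rest), a 1-d concave problem.
        m = A @ x                                   # local fields
        def neg_logpl(beta):
            return np.sum(np.log1p(np.exp(-2.0 * beta * x * m)))
        return minimize_scalar(neg_logpl, bounds=(0.0, 5.0), method="bounded").x

    rng = np.random.default_rng(1)
    N, beta_true = 500, 1.2
    A = (np.ones((N, N)) - np.eye(N)) / N           # Curie-Weiss couplings
    x = rng.choice([-1.0, 1.0], size=N)
    for _ in range(200):                            # Gibbs sweeps as a stand-in
        for i in rng.permutation(N):                # for an exact sample
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta_true * (A[i] @ x)))
            x[i] = 1.0 if rng.random() < p_up else -1.0
    print("MPLE estimate of beta:", mple_ising(x, A))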
Collaborative Representation Classification Ensemble for Face Recognition ; Collaborative Representation Classification (CRC) for face recognition has attracted a lot of attention recently due to its good recognition performance and fast speed. Compared to Sparse Representation Classification (SRC), CRC achieves comparable recognition performance at 10-1000 times faster speed. In this paper, we propose to ensemble several CRC models to improve the recognition rate, where each CRC model uses different and divergent randomly generated biologically-inspired features as the face representation. The proposed ensemble algorithm calculates an ensemble weight for each CRC model, guided by the underlying classification rule of CRC. The obtained weights reflect the confidences of the CRC models, with the more confident models receiving larger weights. The proposed weighted ensemble method proves to be very effective and improves the performance of each individual CRC model significantly. Extensive experiments are conducted to show the superior performance of the proposed method.
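A minimal sketch of the underlying CRC rule follows; the paper's specific ensemble weighting is not reproduced here (one could, for instance, weight individual models by a validation-based confidence), and the regularized-residual form is one common variant:

    import numpy as np

    def crc_classify(X, labels, y, lam=1e-3):
        # X: (d, n) dictionary of training faces (columns l2-normalized),
        # y: (d,) query. Ridge-code y over all training samples, then
        # assign the class with the smallest regularized residual.
        n = X.shape[1]
        P = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T)  # precompute once
        a = P @ y
        best, best_r = None, np.inf
        for c in np.unique(labels):
            idx = labels == c
            r = np.linalg.norm(y - X[:, idx] @ a[idx]) / (np.linalg.norm(a[idx]) + 1e-12)
            if r < best_r:
                best, best_r = c, r
        return best

Since the projection matrix P depends only on the training set, it can be precomputed once and reused for every query, which is the source of CRC's speed advantage over SRC.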
Efficient computation of Bayesian optimal discriminating designs ; An efficient algorithm for the determination of Bayesian optimal discriminating designs for competing regression models is developed, where the main focus is on models with general distributional assumptions beyond the classical case of normally distributed homoscedastic errors. For this purpose we consider a Bayesian version of the Kullback-Leibler (KL) optimality criterion introduced by López-Fidalgo et al. (2007). Discretizing the prior distribution leads to local KL-optimal discriminating design problems for a large number of competing models. All currently available methods either require a large computation time or fail to calculate the optimal discriminating design, because they can only deal efficiently with a few model comparisons. In this paper we develop a new algorithm for the determination of Bayesian optimal discriminating designs with respect to the Kullback-Leibler criterion. It is demonstrated that the new algorithm is able to calculate the optimal discriminating designs with reasonable accuracy and computational time in situations where all currently available procedures are either slow or fail.
Far tails of the density of states in amorphous organic semiconductors ; Far tails of the density of states (DOS) are calculated for two simple models of organic amorphous materials, the dipolar glass model and the quadrupolar glass model. In both models the far tails are found to be non-Gaussian. In the dipolar glass model the DOS is symmetric around zero energy, while in the quadrupolar glass model the DOS is generally asymmetric, with the asymmetry directly related to the particular geometry of the quadrupoles. Far tails of the DOS are relevant for the quasi-equilibrium transport of charge carriers at low temperature. The asymmetry of the DOS in quadrupolar glasses implies a fundamental inequivalence of the random energy landscape for the transport of electrons and holes. The possible effect of the non-Gaussian shape of the far tails of the DOS on the temperature dependence of the carrier drift mobility is discussed.
Hierarchical Models as Marginals of Hierarchical Models ; We investigate the representation of hierarchical models in terms of marginals of other hierarchical models with smaller interactions. We focus on binary variables and marginals of pairwise interaction models whose hidden variables are conditionally independent given the visible variables. In this case the problem is equivalent to the representation of linear subspaces of polynomials by feedforward neural networks with soft-plus computational units. We show that every hidden variable can freely model multiple interactions among the visible variables, which allows us to generalize and improve previous results. In particular, we show that a restricted Boltzmann machine with fewer than $2\frac{\log(v)+1}{v+1}2^{v-1}$ hidden binary variables can approximate every distribution of $v$ visible binary variables arbitrarily well, compared to $2^{v-1}-1$ from the best previously known result.
Bulk viscous Zel'dovich fluid model and its asymptotic behavior ; In this paper we consider a flat FLRW universe with a bulk viscous Zel'dovich fluid as the cosmic component. Treating the bulk viscosity according to the Eckart formalism, we analyze the evolution of the Hubble parameter and constrain the model with Type Ia Supernovae data, thereby extracting the constant bulk viscous parameter and the present Hubble parameter. We further analyze the scale factor, the equation of state and the deceleration parameter. The model predicts late-time acceleration and is also compatible with the age of the universe as given by the oldest globular clusters. We also study the phase-space behavior of the model and find that a universe dominated by the bulk viscous Zel'dovich fluid is stable, but the inclusion of a radiation component in addition to the Zel'dovich fluid makes the model unstable. Hence, even though a bulk viscous Zel'dovich fluid dominated universe is feasible, the model fails to predict a prior radiation-dominated phase.
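The qualitative late-time behavior can be reproduced by integrating the Hubble parameter directly. The sketch below assumes Eckart bulk viscosity with constant coefficient zeta and a stiff Zel'dovich equation of state p = rho, in units 8*pi*G = c = 1 (the Friedmann and continuity equations then give Hdot = -3H^2 + (3/2)*zeta*H); the parameter values are illustrative, not the SNe Ia best-fit ones:

    import numpy as np

    def evolve_hubble(H0=1.0, zeta=0.4, t_max=20.0, dt=1e-3):
        # Forward-Euler integration of Hdot = -3 H^2 + 1.5 zeta H.
        ts = np.arange(0.0, t_max, dt)
        H = np.empty_like(ts)
        H[0] = H0
        for k in range(1, len(ts)):
            H[k] = H[k-1] + dt * (-3.0 * H[k-1] ** 2 + 1.5 * zeta * H[k-1])
        return ts, H

    ts, H = evolve_hubble()
    print("late-time H (de Sitter attractor zeta/2 = 0.2):", H[-1])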
A cosmological model of the early universe based on ECG with a variable Lambda-term in Lyra geometry ; In this paper, we study interacting extended Chaplygin gas (ECG) as dark matter and a quintessence scalar field as dark energy with an effective $\Lambda$-term in the Lyra manifold. Chaplygin gas behaves as dark matter in the early universe and as a cosmological constant at late times. The modified field equations are given and the motivation of the phenomenological models is discussed in detail. Four different models, based on the form of the interaction term, are investigated in this work. We then consider further models in which the extended Chaplygin gas and the quintessence field play the roles of dark matter and dark energy respectively, with two different forms of interaction between them, for both constant and varying $\Lambda$. Owing to the mathematical difficulty of the problem, we discuss the results numerically and graphically. The results suggest that the proposed models can describe the early universe together with a later stage of evolution containing accelerated expansion.
A Scalable and Extensible Framework for Superposition-Structured Models ; In many learning tasks, structural models usually lead to better interpretability and higher generalization performance. In recent years, however, simple structural models such as the lasso have frequently proven insufficient. Accordingly, there has been much work on superposition-structured models, in which multiple structural constraints are imposed. To efficiently solve these superposition-structured statistical models, we develop a framework based on a proximal Newton-type method. Employing the smoothed conic dual approach with the L-BFGS updating formula, we propose a scalable and extensible proximal quasi-Newton (SEPQN) framework. Empirical analysis on various datasets shows that our framework is potentially powerful, and achieves a superlinear convergence rate when optimizing some popular superposition-structured statistical models such as the fused sparse group lasso.
Dynamic Poisson Factorization ; Models for recommender systems use latent factors to explain the preferences and behaviors of users with respect to a set of items (e.g., movies, books, academic papers). Typically, the latent factors are assumed to be static and, given these factors, the observed preferences and behaviors of users are assumed to be generated without order. These assumptions limit the explorative and predictive capabilities of such models, since users' interests and item popularity may evolve over time. To address this, we propose dPF, a dynamic matrix factorization model based on the recent Poisson factorization model for recommendations. dPF models the time-evolving latent factors with a Kalman filter and the actions with Poisson distributions. We derive a scalable variational inference algorithm to infer the latent factors. Finally, we demonstrate dPF on 10 years of user click data from arXiv.org, one of the largest repositories of scientific papers and a formidable source of information about the behavior of scientists. Empirically we show performance improvements over both static and, more recently proposed, dynamic recommendation models. We also provide a thorough exploration of the inferred posteriors over the latent variables.
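A generative sketch of the model's two ingredients (random-walk dynamics on user factors, Poisson-distributed clicks); the abs() positivity fix and all parameter values are simplifications assumed here for illustration:

    import numpy as np

    def simulate_dpf(n_users=50, n_items=80, n_factors=5, n_steps=10,
                     drift=0.05, seed=2):
        # Gaussian random-walk dynamics on user factors (the Kalman-filter
        # part of dPF); clicks at each step are Poisson in the factor
        # inner products.
        rng = np.random.default_rng(seed)
        theta = rng.gamma(1.0, 0.3, (n_users, n_factors))   # user factors
        beta = rng.gamma(1.0, 0.3, (n_items, n_factors))    # item factors
        clicks = []
        for _ in range(n_steps):
            clicks.append(rng.poisson(theta @ beta.T))
            theta = np.abs(theta + drift * rng.standard_normal(theta.shape))
        return clicks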
Stability Analysis of Non-Newtonian Rimming Flow ; The rimming flow of a viscoelastic thin film inside a rotating horizontal cylinder is studied theoretically. Attention is given to the onset of non-Newtonian free-surface instability in creeping flow. This non-inertial instability has been observed in experiments, but current theoretical models of Newtonian fluids can neither describe its origin nor explain its onset. This study examines two models of non-Newtonian fluids to see if the experimentally observed instability can be predicted analytically. The non-Newtonian viscosity and elastic properties of the fluid are described by the Generalized Newtonian Fluid (GNF) and Second Order Viscoelastic Fluid (SOVF) constitutive models, respectively. With linear stability analysis, it is found that, analogously to the Newtonian fluid, rimming flow of viscous non-Newtonian fluids modeled by the GNF is neutrally stable. However, the viscoelastic properties of the fluid modeled by the SOVF are found to contribute to flow destabilization. The instability is shown to increase as the cylinder rotation rate is lowered, whereby the fluid accumulates in a pool on the rising wall. Viscoelastic effects coupled with this pooling cause angular stretching of the fluid, which is suggested to be responsible for the onset of instability.
Building prime models in fully good abstract elementary classes ; We show how to build prime models in classes of saturated models of abstract elementary classes (AECs) having a well-behaved independence relation. Theorem: Let $K$ be an almost fully good AEC that is categorical in $\text{LS}(K)$ and has the $\text{LS}(K)$-existence property for domination triples. For any $\lambda > \text{LS}(K)$, the class of Galois-saturated models of $K$ of size $\lambda$ has prime models over every set of the form $M \cup \{a\}$. This generalizes an argument of Shelah, who proved the result when $\lambda$ is a successor cardinal.
Reformulation of the Georgi-Glashow model and some constraints on its classical fields ; We study the SU(2) Georgi-Glashow model, suggest a decomposition for its fields, and obtain a Lagrangian based on new variables. We use Cho's restricted decomposition as a result of a vacuum condition of the Georgi-Glashow model. This model with no external sources leads us to the Cho extended decomposition. We interpret the puzzling field $\mathbf{n}$ in Cho's decomposition as the color direction of the scalar field in the Georgi-Glashow model. We also study another constraint, the condensate phase, and generalize Cho's extended decomposition. Finally, we discuss a decomposition form that Faddeev and Niemi proposed in this constrained Georgi-Glashow model.
Learning a Discriminative Model for the Perception of Realism in Composite Images ; What makes an image appear realistic? In this work, we answer this question from a data-driven perspective by learning the perception of visual realism directly from large amounts of data. In particular, we train a Convolutional Neural Network (CNN) model that distinguishes natural photographs from automatically generated composite images. The model learns to predict the visual realism of a scene in terms of color, lighting and texture compatibility, without any human annotations pertaining to it. Our model outperforms previous works that rely on hand-crafted heuristics for the task of classifying realistic vs. unrealistic photos. Furthermore, we apply our learned model to compute optimal parameters of a compositing method, so as to maximize the visual realism score predicted by our CNN model. We demonstrate its advantage against existing methods via a human perception study.
The top quark right coupling in the tbW vertex ; The most general parametrization of the tbW vertex includes a right coupling $V_R$ that is zero at tree level in the standard model. This quantity may be measured at the Large Hadron Collider, where the physics of the top decay is currently investigated. This coupling is present in new physics models at tree level and/or through radiative corrections, so its measurement can be sensitive to non-standard physics. In this paper we compute the leading electroweak and QCD contributions to the top $V_R$ coupling in the standard model. This value is the starting point for separating the standard model effects and then searching for new physics. We also propose observables that can be addressed at the LHC in order to measure this coupling. These observables are defined in such a way that they do not receive tree level contributions from the standard model and are directly proportional to the right coupling. Bounds on new physics models can be obtained through measurements of these observables.
Learning optimal quantum models is NP-hard ; Physical modeling closes the gap between perception in terms of measurements and abstraction in terms of theoretical models. Physical modeling is a major objective in physics and is generally regarded as a creative process. How good are computers at solving this task? This question is of both philosophical and practical interest because a positive answer would allow an artificial intelligence to understand the physical world. Quantum mechanics is the most fundamental physical theory and there is a deep belief that nature follows its rules. Hence, we raise the question whether computers are able to learn optimal quantum models from measured data. Here we show that, in the absence of physical heuristics, the inference of optimal quantum models cannot be computed efficiently unless P = NP. This result illuminates rigorous limits to the extent to which computers can be used to further our understanding of nature.
Statistical Channel Model with Multi-Frequency and Arbitrary Antenna Beamwidth for Millimeter-Wave Outdoor Communications ; This paper presents a 3-dimensional millimeter-wave statistical channel impulse response model based on 28 GHz and 73 GHz ultrawideband propagation measurements. An accurate 3GPP-like channel model that supports arbitrary carrier frequency, RF bandwidth, and antenna beamwidth, for both omnidirectional and arbitrary directional antennas, is provided. Time cluster and spatial lobe model parameters are extracted from empirical distributions obtained from field measurements. A step-by-step modeling procedure for generating channel coefficients is shown to agree with statistics from the field measurements, thus confirming that the statistical channel model faithfully recreates spatial and temporal channel impulse responses for use in millimeter-wave 5G air interface designs.
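The flavor of the time-cluster construction can be sketched as follows; the distributions and every numerical value here are placeholders, not the empirically extracted 28/73 GHz statistics:

    import numpy as np

    def generate_cir(n_clusters=3, subpaths=4, seed=3):
        # Toy time-cluster channel impulse response: clusters arrive with
        # exponential inter-arrival times, intra-cluster subpaths decay
        # exponentially in power and carry uniform random phases.
        rng = np.random.default_rng(seed)
        delays, gains = [], []
        t_cluster = np.cumsum(rng.exponential(25.0, n_clusters))   # ns
        for tc in t_cluster:
            tau = tc + np.sort(rng.exponential(2.0, subpaths))
            power = np.exp(-tau / 80.0) * rng.exponential(1.0, subpaths)
            phase = rng.uniform(0.0, 2.0 * np.pi, subpaths)
            delays.extend(tau)
            gains.extend(np.sqrt(power) * np.exp(1j * phase))
        return np.array(delays), np.array(gains)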
Efficient and accurate modeling of multi-wavelength propagation in SOAs: a generalized coupled-mode approach ; We present a model for multi-wavelength mixing in semiconductor optical amplifiers (SOAs) based on coupled-mode equations. The proposed model applies to all kinds of SOA structures, takes into account the longitudinal dependence of the carrier density caused by saturation, accommodates arbitrary functional dependencies of the material gain and carrier recombination rate on the local value of the carrier density, and is computationally more efficient by orders of magnitude compared with the standard full model based on space-time equations. We apply the coupled-mode model to a recently demonstrated phase-sensitive amplifier based on an integrated SOA and show its results to be consistent with the experimental data. The accuracy of the proposed model is certified by means of a meticulous comparison with the results obtained by integrating the space-time equations.
The Lieb-Liniger model at the critical point as toy model for Black Holes ; In a previous series of papers it was proposed that black holes can be understood as Bose-Einstein condensates at the critical point of a quantum phase transition. Therefore other bosonic systems with quantum criticalities, such as the Lieb-Liniger model with attractive interactions, could possibly be used as toy models for black holes. Even such simple models are hard to analyse, as mean field theory usually breaks down at the critical point, and very few analytic results are known. In this paper we present a method for studying such systems at quantum critical points analytically. We are able to find explicit expressions for the low energy spectrum of the Lieb-Liniger model and thereby confirm the expected black-hole-like properties of such systems. This opens up an exciting possibility of constructing and studying black-hole-like systems in the laboratory.
Cross-sectional Markov model for trend analysis of observed discrete distributions of population characteristics ; We present a stochastic model of population dynamics exploiting cross-sectional data in trend analysis and forecasts for groups and cohorts of a population. While sharing the convenient features of classic Markov models, it alleviates the practical problems experienced in longitudinal studies. Based on statistical and information-theoretic analysis, we adopt maximum likelihood estimation to determine model parameters, facilitating the use of a range of model selection methods. Their application to several synthetic and empirical datasets shows that the proposed approach is robust, stable and superior to a regression-based one. We extend the basic framework to simulate ageing cohorts and processes with finite memory, distinguishing their short- and long-term trends, introduce regularisation to avoid the ecological fallacy, and generalise the framework to mixtures of cross-sectional and possibly incomplete longitudinal data. The presented model illustrations yield new and interesting results, such as an implied common driving factor in obesity for all generations of the English population and yo-yo dieting in the U.S. data.
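A minimal sketch of the maximum likelihood step, assuming the simplest reading of the setup: cross-sectional histograms counts[t] are multinomial draws from p_t = p_0 P^t, and the transition matrix P is found by L-BFGS over softmax-parametrized rows (the paper's regularisation and model selection machinery is omitted):

    import numpy as np
    from scipy.optimize import minimize

    def fit_cross_sectional_markov(counts, n_states):
        counts = np.asarray(counts, float)
        p0 = counts[0] / counts[0].sum()

        def unpack(z):
            # Softmax on each row keeps P a stochastic matrix.
            logits = z.reshape(n_states, n_states)
            P = np.exp(logits - logits.max(axis=1, keepdims=True))
            return P / P.sum(axis=1, keepdims=True)

        def neg_loglik(z):
            P, p, nll = unpack(z), p0.copy(), 0.0
            for t in range(1, len(counts)):
                p = p @ P                     # propagate the cross-section
                nll -= counts[t] @ np.log(p + 1e-12)
            return nll

        res = minimize(neg_loglik, np.zeros(n_states * n_states),
                       method="L-BFGS-B")
        return unpack(res.x)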
Learning Continuous Control Policies by Stochastic Value Gradients ; We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains.
Geometrical Models and Hadronic Radii ; Using the electromagnetic form factors predicted by the Generalized Chou-Yang model (GCYM), we compute the root mean square (rms) radii of several hadrons with varying strangeness content (number of strange quarks/antiquarks), such as the pion, proton, $\phi^0$, $\Lambda^0$, the $\Sigma$ states and the $\Omega$. The computed radii are quite consistent with the experimental results, and with those from other models, for the pion and proton. For hadrons other than the pion and proton, experimental results are not available, and the GCYM results and those of other models are not consistent with each other. The computed rms radii from the GCYM and other models indicate that the rms radii decrease with increasing strangeness content, separately for mesons and baryons. Experimental results for hadrons other than the pion and proton would throw more light on the suitability of the GCYM and other models.
PAC Learning-Based Verification and Model Synthesis ; We introduce a novel technique for verification and model synthesis of sequential programs. Our technique is based on learning a regular model of the set of feasible paths in a program, and testing whether this model contains an incorrect behavior. Exact learning algorithms require checking equivalence between the model and the program, which is a difficult problem, in general undecidable. Our learning procedure is therefore based on the framework of probably approximately correct (PAC) learning, which uses sampling instead and provides correctness guarantees expressed in terms of error probability and confidence. Besides the verification result, our procedure also outputs the model with the said correctness guarantees. Preliminary experiments show encouraging results, in some cases even outperforming mature software verifiers.
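The sampling arithmetic behind a one-sided PAC guarantee is simple. The sketch below shows only the standard bound n >= (1/eps) * ln(1/delta); sample_path and is_bad are hypothetical callables standing in for the learned regular model's path sampler and the incorrect-behavior test, not the paper's implementation:

    import math
    import random

    def pac_verify(sample_path, is_bad, eps=0.01, delta=1e-3, seed=0):
        # If no bad path is found among n i.i.d. samples, then with
        # confidence 1 - delta the probability mass of bad paths is < eps.
        rng = random.Random(seed)
        n = math.ceil(math.log(1.0 / delta) / eps)
        for _ in range(n):
            path = sample_path(rng)
            if is_bad(path):
                return False, path       # concrete counterexample found
        return True, None                # probably approximately correct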
Formal Specification and Verification of Fully Asynchronous Implementations of the Data Encryption Standard ; This paper presents two formal models of the Data Encryption Standard (DES), a first using the international standard LOTOS, and a second using the more recent process calculus LNT. Both models encode the DES in the style of asynchronous circuits, i.e., the dataflow blocks of the DES algorithm are represented by processes communicating via rendezvous. To ensure correctness of the models, several techniques have been applied, including model checking, equivalence checking, and comparing the results produced by a prototype automatically generated from the formal model with those of existing implementations of the DES. The complete code of the models is provided as appendices and is also available on the website of the CADP verification toolbox.
Spectral Model of Non-Stationary, Inhomogeneous Turbulence ; We compare results from a spectral model for non-stationary, inhomogeneous turbulence (Besnard et al., Theor. Comput. Fluid Dyn., vol. 8, pp. 1-35, 1996) with Direct Numerical Simulation (DNS) data of a shear-free mixing layer (SFML) (Tordella et al., Phys. Rev. E, vol. 77, 016309, 2008). The SFML is used as a test case in which the efficacy of the model closure for the physical-space transport of the fluid velocity field can be tested in a flow with inhomogeneity, without the additional complexity of mean-flow coupling. The model is able to capture certain features of the SFML quite well at intermediate to long times, including the evolution of the mixing-layer width and turbulent kinetic energy. At short times, and for more sensitive statistics such as the generation of velocity-field anisotropy, the model is less accurate. We present arguments, supported by the DNS data, that a significant cause of the discrepancies is the local approximation to the intrinsically non-local pressure transport in physical space that was made in the model, the effects of which would be particularly strong at short times when the inhomogeneity of the SFML is strongest.
Non-thermal production of dark radiation and dark matter ; Dark matter may be coupled to dark radiation: light degrees of freedom that mediate forces between dark sector particles. Cosmological constraints favor dark radiation that is colder than Standard Model radiation. In models with fixed couplings between dark matter and the Standard Model, these constraints can be difficult to satisfy if thermal equilibrium is assumed in the early universe. We construct a model of asymmetric reheating of the visible and dark sectors from late decays of a long-lived particle, for instance a modulus. We show, as a proof of principle, that such a model can populate a sufficiently cold dark sector while also generating baryon and dark matter asymmetries through the out-of-equilibrium decay. We frame much of our discussion in terms of the scenario of dissipative dark matter, as in the Double-Disk Dark Matter scenario. However, our results may also be of interest for other scenarios, like the Twin Higgs model, that are in danger of overproducing dark radiation due to non-negligible dark-visible couplings.
Generalized breakup and coalescence models for population balance modelling of liquid-liquid flows ; The population balance framework is a useful tool to describe the size distribution of droplets in a liquid-liquid dispersion. Breakup and coalescence models provide closures for the mathematical formulation of the population balance equation (PBE) and are crucial for accurate predictions of the mean droplet size in the flow. A number of closures for both breakup and coalescence can be identified in the literature, and most of them require an estimation of model parameters that can differ by several orders of magnitude on a case-by-case basis. In this paper we review the fundamental assumptions and derivation of breakup and coalescence kernels. Subsequently, we rigorously apply two-stage optimization over several independent sets of experiments in order to identify model parameters. Two-stage identification allows us to establish new parametric dependencies valid for experiments that vary over large ranges of important non-dimensional groups. This approach can be adopted for the optimization of parameters in breakup and coalescence models over multiple cases, and we propose a correlation based on non-dimensional numbers that is applicable to a number of different flows over a wide range of Reynolds numbers.
Financial market models in discrete time beyond the concave case ; In this article we propose a study of market models starting from a set of axioms, as one does in the case of risk measures. We define a market model simply as a mapping from the set of adapted strategies to the set of random variables describing the outcome of trading. We do not make any concavity assumptions. The first result is that, under sequential upper semicontinuity, the market model can be represented as a normal integrand. We then extend the concept of no-arbitrage to this setup and study its consequences, such as the superhedging theorem and utility maximization. Finally, we show how to extend the concepts and results to the case of vector-valued market models, an example of which is the Kabanov model of currency markets.
Continuity properties of the semigroup and its integral kernel in non-relativistic QED ; Employing recent results on stochastic differential equations associated with the standard model of non-relativistic quantum electrodynamics by B. Güneysu, J.S. Møller, and the present author, we study the continuity of the corresponding semigroup between weighted vector-valued $L^p$-spaces, continuity properties of elements in the range of the semigroup, and the pointwise continuity of an operator-valued semigroup kernel. We further discuss the continuous dependence of the semigroup and its integral kernel on model parameters. All these results are obtained for Kato decomposable electrostatic potentials, and the actual assumptions on the model are general enough to cover the Nelson model as well. As a corollary we obtain some new pointwise exponential decay and continuity results on elements of low-energetic spectral subspaces of atoms or molecules that also take spin into account. In a simpler situation where spin is neglected, we explain how to verify the joint continuity of positive ground state eigenvectors with respect to spatial coordinates and model parameters. There are no smallness assumptions imposed on any model parameter.
Exact mass-coupling relation for the homogeneous sine-Gordon model ; We derive the exact mass-coupling relation of the simplest multi-scale quantum integrable model, i.e., the homogeneous sine-Gordon model with two mass scales. The relation is obtained by comparing the perturbed conformal field theory description of the model, valid at short distances, to the large-distance bootstrap description based on the model's integrability. In particular, we find a differential equation for the relation by constructing conserved tensor currents which satisfy a generalization of the $\Theta$ sum rule Ward identity. The mass-coupling relation is written in terms of hypergeometric functions.
Manipulating P- and S-elastic waves in dielectric elastomers via external electric stimuli ; We investigate elastic wave propagation in finitely deformed dielectric elastomers in the presence of an electrostatic field. To analyze the propagation of both longitudinal (P-) and transverse (S-) waves, we utilize compressible material models. We derive explicit expressions for the generalized acoustic tensor and phase velocities of elastic waves for the ideal and enriched dielectric elastomer models. We analyze the slowness curves of the elastic wave propagation, and find a P-S mode disentangling phenomenon: P- and S-waves are separated by the application of an electric field. The divergence angle between P- and S-waves strongly depends on the applied electrostatic excitation, and the influence of the electric field is sensitive to the material model. Thus, for the ideal dielectric model the in-plane shear wave velocity increases with an increase in electric field, while for the enriched model the velocity may decrease depending on the material constants. Similarly, for the ideal model the divergence angle gradually increases with an increase in electric field, while for the enriched model the angle may be bounded. Material compressibility affects the P-wave velocity and, for relatively compressible materials, the slowness curves evolve from circular to elliptical shapes, manifesting in an increase of the reflection angle of P-waves. As a result, the divergence angle decreases with an increase in material compressibility.
A SUSY Inspired Simplified Model for the 750 GeV Diphoton Excess ; The evidence for a new singlet scalar particle from the 750 GeV diphoton excess, and the absence of any other signal of new physics at the LHC so far, suggest the existence of new coloured scalars. To study this possibility, we propose a supersymmetry-inspired simplified model, extending the Standard Model with a singlet scalar and with heavy scalar fields carrying both colour and electric charges, the 'squarks'. To allow the latter to decay, and to generate the dark matter of the Universe, we also add a neutral fermion to the particle content. We show that this model provides a two-parameter fit to the observed diphoton excess consistently with cosmology, while the allowed parameter space is bounded by the consistency of the model. In the context of our simplified model this implies the existence of other supersymmetric particles accessible at the LHC, rendering this scenario falsifiable. If this excess persists, it will imply a paradigm shift in assessing supersymmetry breaking and the role of scalars in low scale physics.
On weak model sets of extremal density ; The theory of regular model sets is highly developed, but does not cover examples such as the visible lattice points, the k-th power-free integers, or related systems. These belong to the class of weak model sets, where the window may have a boundary of positive measure, or even consist of boundary only. The latter phenomena are related to the topological entropy of the corresponding dynamical system and to various other unusual properties. Under a rather natural extremality assumption on the density of the weak model set, we establish its pure point diffraction nature. We derive an explicit formula that can be seen as the generalisation of the case of regular model sets. Furthermore, the corresponding natural patch frequency measure is shown to be ergodic. Since weak model sets of extremal density are generic for this measure, one obtains that the dynamical spectrum of the hull is pure point as well.
Extended Chameleons ; We extend the chameleon models by considering Scalar-Fluid theories where the coupling between matter and the scalar field can be represented by a quadratic effective potential with density-dependent minimum and mass. In this context, we study the effects of the scalar field on Solar System tests of gravity and show that models passing these stringent constraints can still induce large modifications of Newton's law on galactic scales. On these scales we analyse models which could lead to a percent deviation of Newton's law outside the virial radius. We then model the dark matter halo as a Navarro-Frenk-White profile and explicitly find that the fifth force can give large contributions around the galactic core in a particular model where the scalar field mass is constant and the minimum of its potential varies linearly with the matter density. At cosmological distances, we find that this model does not alter the growth of large scale structures and therefore would be best tested on galactic scales, where interesting signatures might arise in galaxy rotation curves.
T-optimal discriminating designs for Fourier regression models ; In this paper we consider the problem of constructing T-optimal discriminating designs for Fourier regression models. We provide explicit solutions of the optimal design problem for discriminating between two Fourier regression models which differ by at most three trigonometric functions. In general, the T-optimal discriminating design depends in a complicated way on the parameters of the larger model, and for special configurations of the parameters T-optimal discriminating designs can be found analytically. Moreover, we also study this dependence in the remaining cases by calculating the optimal designs numerically. In particular, it is demonstrated that D- and $D_s$-optimal designs have rather low efficiencies with respect to the T-optimality criterion.
Inflation model constraints from data released in 2015 ; We provide the latest constraints on the power spectra of both scalar and tensor perturbations from CMB data, including the Planck 2015 and BICEP2/Keck Array experiments, and the new BAO scales from the SDSS-III BOSS observation. We find that the inflation model with a convex potential is not favored, and that both the inflation model with a monomial potential and the natural inflation model are marginally disfavored at around the 95% confidence level. However, both the brane inflation model and the Starobinsky inflation model fit the data quite well.
Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials ; This paper deals with the empirical validation of a building thermal model using a phase change material (PCM) in a complex roof. A mathematical model dedicated to phase change materials, based on the apparent heat capacity method, was implemented in a multizone building simulation code, the aim being to increase understanding of the thermal behavior of the whole building with PCM technologies. To empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of a generic optimization program called GenOpt, coupled to the building simulation code, enabled us to determine the set of adequate parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the PCM thermal model with measurements are found to be acceptable and are presented.
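For concreteness, a sketch of the apparent heat capacity method: the latent heat is smeared over an assumed melting range so the phase change enters the heat equation as a temperature-dependent capacity. All numerical values below are illustrative placeholders, not the validated roof parameters:

    import numpy as np

    def apparent_heat_capacity(T, c_solid=2000.0, c_liquid=2200.0,
                               latent=180e3, T_melt=26.0, dT=1.0):
        # Effective capacity in J/(kg*K): sensible part plus a Gaussian
        # bump of area `latent` centered on the melting temperature, so
        # integrating c(T) across the melting range recovers the latent heat.
        T = np.asarray(T, float)
        sensible = np.where(T < T_melt, c_solid, c_liquid)
        bump = latent / (dT * np.sqrt(2.0 * np.pi)) \
            * np.exp(-0.5 * ((T - T_melt) / dT) ** 2)
        return sensible + bump

This c(T) can then be dropped into any conduction solver in place of a constant capacity, which is what makes the method attractive inside a multizone building simulation code.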
Nonparametric mixture of Gaussian graphical models ; Graphical models have been widely used to investigate the complex dependence structure of high-dimensional data, and it is common to assume that observed data follow a homogeneous graphical model. However, in real-world applications observations usually come from different sources and have heterogeneous hidden commonality. Thus, it is of great importance to estimate heterogeneous dependencies and discover subpopulations with certain commonality across the whole population. In this work, we introduce a novel regularized estimation scheme for learning a nonparametric mixture of Gaussian graphical models, which extends the methodology and applicability of Gaussian graphical models and mixture models. We propose a unified penalized likelihood approach to effectively estimate nonparametric functional parameters and heterogeneous graphical parameters. We further design an efficient generalized effective EM algorithm to address three significant challenges: high-dimensionality, non-convexity, and label switching. Theoretically, we study both the algorithmic convergence of our proposed algorithm and the asymptotic properties of our proposed estimators. Numerically, we demonstrate the performance of our method in simulation studies and a real application to estimate human brain functional connectivity from ADHD imaging data, where two heterogeneous conditional dependencies are explained through profiling demographic variables and supported by existing scientific findings.
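The EM structure can be sketched as follows, with two loud simplifications relative to the paper: the nonparametric mixing proportions are replaced by constants, and sklearn's graphical lasso stands in for the penalized M-step:

    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.covariance import graphical_lasso

    def em_mixture_ggm(X, K=2, alpha=0.05, n_iter=25, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        means = [X[i] for i in rng.choice(n, K, replace=False)]
        covs = [np.cov(X.T) + 0.1 * np.eye(d) for _ in range(K)]
        weights = np.full(K, 1.0 / K)
        for _ in range(n_iter):
            # E-step: soft cluster responsibilities.
            log_r = np.column_stack([
                np.log(weights[k]) + multivariate_normal.logpdf(X, means[k], covs[k])
                for k in range(K)])
            log_r -= log_r.max(axis=1, keepdims=True)
            resp = np.exp(log_r)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: weighted means, then a sparse precision per component.
            for k in range(K):
                w = resp[:, k] / resp[:, k].sum()
                means[k] = w @ X
                Xc = X - means[k]
                S = (Xc.T * w) @ Xc + 1e-6 * np.eye(d)  # weighted covariance
                covs[k] = graphical_lasso(S, alpha=alpha)[0]
            weights = resp.mean(axis=0)
        return means, covs, resp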
The Bianchi type-V Dark Energy Cosmology in Self-Interacting Brans-Dicke Theory of Gravity ; This paper deals with a spatially homogeneous and totally anisotropic Bianchi type-V cosmological model within the framework of the self-interacting Brans-Dicke theory of gravity, in the background of anisotropic dark energy (DE) with a variable equation of state (EoS) parameter and a constant deceleration parameter. The constant deceleration parameter leads to two models of the universe, i.e. a power law model and an exponential model. The EoS parameter $\omega$ and its existence range for the models are in good agreement with the most recent observational data. We notice that $\omega$ given by Eq. (37), i.e. $\omega(t) = \log(k_1 t)$, is more suitable for explaining the evolution of the universe. The physical behaviors of the solutions are also discussed using some physical quantities. Finally, we observe that despite having several prominent features, both of the DE models discussed fail in the details.
Characterizing climate predictability and model response variability from multiple initial condition and multi-model ensembles ; Climate models are thought to solve boundary value problems, unlike numerical weather prediction, which is an initial value problem. However, climate internal variability (CIV) is thought to be relatively important at near-term (0-30 year) prediction horizons, especially at higher resolutions. The recent availability of significant numbers of multi-model (MME) and multi-initial condition (MICE) ensembles allows for the first time a direct sensitivity analysis of CIV versus model response variability (MRV). Understanding the relative agreement and variability of MME and MICE ensembles for multiple regions, resolutions, and projection horizons is critical for focusing model improvements, diagnostics, and prognosis, as well as impacts, adaptation, and vulnerability studies. Here we find that CIV (MICE) agreement is lower, and variability higher, than that of MRV (MME) across all spatial resolutions and projection time horizons for both temperature and precipitation. However, CIV dominates MRV over higher latitudes generally and in specific regions. Furthermore, CIV is considerably larger than MRV for precipitation compared to temperature across all horizontal and projection scales and seasons. Precipitation exhibits larger uncertainties, sharper decay of MICE agreement compared to MME, and relatively greater dominance of CIV over MRV at higher latitudes. The findings are crucial for climate predictability and adaptation strategies at stakeholder-relevant scales.
Nonlinear waves on circle networks with excitable nodes ; Nonlinear wave formation and propagation on a complex network with excitable node dynamics are of fundamental interest in diverse fields of science and engineering. Here, we propose a new model of the Kuramoto type to study nonlinear wave generation and propagation on circular subgraphs of a complex network. On circle networks, in the continuum limit, this model is equivalent to the overdamped Frenkel-Kontorova model. The new model is shown to keep the essential features of well-known models such as the diffusively coupled Bär-Eiswirth model, but with a much simplified expression, so that analytic treatment becomes possible. We classify traveling wave solutions on circle networks and show the universality of their features with perturbation analysis and numerical computation.
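A numerical sketch in the spirit of the model; an active-rotator form is assumed here for the excitable node dynamics (the paper's exact equations may differ), and all parameter values are illustrative:

    import numpy as np

    def ring_wave(n=100, a=1.05, c=1.0, dt=0.01, steps=20_000):
        # Phase units on a ring:
        #   dtheta_i/dt = 1 - a*sin(theta_i)
        #                 + c*[sin(theta_{i+1}-theta_i) + sin(theta_{i-1}-theta_i)]
        # a > 1 makes each isolated node excitable rather than oscillatory;
        # in the continuum limit the coupling term tends to theta_xx, giving
        # the overdamped Frenkel-Kontorova equation.
        theta = np.zeros(n)
        theta[:5] = np.pi          # local kick that launches a travelling pulse
        for _ in range(steps):
            coupling = c * (np.sin(np.roll(theta, -1) - theta) +
                            np.sin(np.roll(theta, 1) - theta))
            theta += dt * (1.0 - a * np.sin(theta) + coupling)
        return theta

    print(ring_wave()[:10])        # snapshot of the propagating wave profile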
Mathematical modeling and numerical simulation of a bioreactor landfill using Feel++ ; In this paper, we propose a mathematical model to describe the functioning of a bioreactor landfill, that is, a waste management facility in which biodegradable waste is used to generate methane. The simulation of a bioreactor landfill is a very complex multiphysics problem in which bacteria catalyze a chemical reaction that, starting from organic carbon, leads to the production of methane, carbon dioxide and water. The resulting model features a heat equation coupled with a nonlinear reaction equation describing the chemical phenomena under analysis, and several advection and advection-diffusion equations modeling multiphase flows inside a porous environment representing the biodegradable waste. A framework for the approximation of the model is implemented using Feel++, a C++ open-source library to solve partial differential equations. Some heuristic considerations on the quantitative values of the parameters in the model are discussed, and preliminary numerical simulations are presented.
Automatic 3D modelling of craniofacial form ; Three-dimensional models of craniofacial variation over the general population are useful for assessing pre- and post-operative head shape when treating various craniofacial conditions, such as craniosynostosis. We present a new method of automatically building both sagittal profile models and full 3D surface models of the human head using a range of techniques in 3D surface image analysis: in particular, automatic facial landmarking using supervised machine learning, global and local symmetry plane detection using a variant of trimmed iterative closest points, locally-affine template warping for the full 3D models, and a novel pose normalisation using robust iterative ellipse fitting. The PCA-based models built using the new pose normalisation are more compact than those built using Generalised Procrustes Analysis, and we demonstrate their utility in a clinical case study.
Higher spin six vertex model and symmetric rational functions ; We consider a fully inhomogeneous stochastic higher spin six vertex model in a quadrant. For this model we derive concise integral representations for multi-point q-moments of the height function and for the q-correlation functions. At least in the case of the step initial condition, our formulas degenerate in appropriate limits to many known formulas of this type for integrable probabilistic systems in the (1+1)d KPZ universality class, including the stochastic six vertex model, ASEP, various q-TASEPs, and associated zero range processes. Our arguments are largely based on properties of a family of symmetric rational functions which can be defined as partition functions of the inhomogeneous higher spin six vertex model for suitable domains. In the homogeneous case, such functions were previously studied in http://arxiv.org/abs/1410.0976; they also generalize classical Hall-Littlewood and Schur polynomials. A key role is played by Cauchy-like summation identities for these functions, which are obtained as a direct corollary of the Yang-Baxter equation for the higher spin six vertex model.
Atmospheric ionization induced by precipitating electrons: Comparison of CRAC:EPII model with parametrization model ; A new model, CRAC:EPII (Cosmic Ray Atmospheric Cascade: Electron Precipitation Induced Ionization), is presented. CRAC:EPII is based on a Monte Carlo simulation of the propagation of precipitating electrons and their interaction with matter in the Earth's atmosphere. It explicitly considers energy deposit (ionization), pair production, Compton scattering, generation of Bremsstrahlung high energy photons, photo-ionization and annihilation of positrons, and multiple scattering as physical processes. The propagation of precipitating electrons and their interactions with atmospheric molecules is carried out with the GEANT4 simulation tool PLANETOCOSMICS code, using the NRLMSISE-00 atmospheric model. The ionization yield is compared with an analytical parametrization for various energies of incident precipitating electrons, using a flux of mono-energetic particles. A good agreement between the two models is achieved. Subsequently, on the basis of balloon-borne measured spectra of precipitating electrons on 30.10.2002 and 07.01.2004, the ion production rate in the middle and upper atmosphere is estimated using the CRAC:EPII model.
Adiabatic Floquet model for the optical response in femtosecond filaments ; The standard model of femtosecond filamentation is based on phenomenological assumptions according to which the ionization-induced carriers can be treated as free, following the Drude model, while the nonlinear response of the bound carriers follows the all-optical Kerr effect. Here, we demonstrate that the additional plasma generated at a multiphoton resonance dominates the saturation of the nonlinear refractive index. Since resonances are not captured by the standard model, we propose a modification of the latter in which ionization enhancements can be accounted for by an ionization rate obtained from non-Hermitian Floquet theory. In the adiabatic regime of long pulse envelopes, this augmented standard model is in excellent agreement with direct quantum mechanical simulations. Since our proposal maintains the structure of the standard model, it can easily be incorporated into existing filament simulation codes.
ANOVA model for network meta-analysis of diagnostic test accuracy data ; Network meta-analysis (NMA) allows the combination of efficacy information from multiple comparisons from trials assessing different therapeutic interventions for a given disease, and the estimation of unobserved comparisons from a network of observed comparisons. Applying NMA to diagnostic accuracy studies is a statistical challenge, given the inherent correlation of sensitivity and specificity. We propose a conceptually simple and novel hierarchical arm-based (AB) model which expresses the logit-transformed sensitivity and specificity as a sum of fixed effects for the test, correlated study effects, and a random error associated with the various tests evaluated in a given study. We apply the model to previously published meta-analyses assessing the accuracy of diverse cytological and molecular tests used to triage women with minor cervical lesions to detect cervical precancer, and compare the results with those from the contrast-based (CB) model, which expresses the linear predictor as a contrast to a comparator test. The proposed AB model is more appealing than the CB model in that it yields marginal means, which are easily interpreted, makes use of all available data, and easily accommodates more general variance-covariance matrix structures.
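A generative sketch of the AB formulation described above; the hyperparameter values are illustrative assumptions, and the actual model is fitted to the triage-test data rather than simulated:

    import numpy as np

    def simulate_ab_model(n_studies=30, n_tests=4, rho=0.6, tau=0.4, seed=6):
        # logit(Se) and logit(Sp) = per-test fixed effect
        #                           + correlated study random effect
        #                           + test-within-study error.
        rng = np.random.default_rng(seed)
        mu_se = rng.normal(1.5, 0.3, n_tests)     # per-test logit sensitivity
        mu_sp = rng.normal(2.0, 0.3, n_tests)     # per-test logit specificity
        cov = tau ** 2 * np.array([[1.0, rho], [rho, 1.0]])
        se = np.empty((n_studies, n_tests))
        sp = np.empty((n_studies, n_tests))
        for s in range(n_studies):
            b = rng.multivariate_normal([0.0, 0.0], cov)  # shared study effect
            eps = rng.normal(0.0, 0.2, (n_tests, 2))
            se[s] = 1.0 / (1.0 + np.exp(-(mu_se + b[0] + eps[:, 0])))
            sp[s] = 1.0 / (1.0 + np.exp(-(mu_sp + b[1] + eps[:, 1])))
        return se, sp

The marginal means mentioned in the abstract correspond here to averaging se and sp over studies for each test, which is what makes the AB parametrization directly interpretable.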