CloudScan: A configuration-free invoice analysis system using recurrent neural networks ; We present CloudScan, an invoice analysis system that requires zero configuration or upfront annotation. In contrast to previous work, CloudScan does not rely on templates of invoice layout; instead it learns a single global model of invoices that naturally generalizes to unseen invoice layouts. The model is trained using data automatically extracted from end-user-provided feedback. This automatic training data extraction removes the requirement for users to annotate the data precisely. We describe a recurrent neural network model that can capture long-range context and compare it to a baseline logistic regression model corresponding to the current CloudScan production system. We train and evaluate the system on 8 important fields using a dataset of 326,471 invoices. The recurrent neural network and baseline models achieve 0.891 and 0.887 average F1 scores respectively on seen invoice layouts. For the harder task of unseen invoice layouts, the recurrent neural network model outperforms the baseline with 0.840 average F1 compared to 0.788.
Sesame-Style Decomposition of KS-DFT Molecular Dynamics for Direct Interrogation of Nuclear Models ; A common paradigm used in the construction of equations of state is to decompose the thermodynamics into a superposition of three terms: a static-lattice cold curve, a contribution from the thermal motion of the nuclei, and a contribution from the thermal excitation of the electrons. While statistical mechanical models for crystals provide a tractable framework for the nuclear contribution in the solid phase, much less is understood about the nuclear contribution above the melt temperature ($C_v^{\text{nuc}} \approx 3R$) and how it should transition to the high-temperature limit $C_v^{\text{nuc}} \sim \frac{3}{2}R$. In this work, we describe an algorithm for extracting both the thermal nuclear and thermal electronic contributions from quantum molecular dynamics (QMD). We then use the VASP QMD package to probe the thermal nuclear behavior of liquid aluminum at normal density and compare the results to semi-empirical models: the Johnson generic model, the Chisolm high-temperature liquid model, and the CRIS model.
Model updating using sum of squares (SOS) optimization to minimize modal dynamic residuals ; This research studies finite element (FE) model updating through sum of squares (SOS) optimization to minimize modal dynamic residuals. In the past few decades, many FE model updating algorithms have been studied to improve the similitude between a numerical model and the as-built structure. FE model updating usually requires solving nonconvex optimization problems, while most off-the-shelf optimization solvers can only find local optima. To improve the model updating performance, this paper proposes the SOS global optimization method for minimizing modal dynamic residuals of the generalized eigenvalue equations in structural dynamics. The proposed method is validated through both numerical simulation and experimental study of a four-story shear frame structure.
Conceptual Modeling for Control of a Physical Engineering Plant: A Case Study ; We examine the problem of weaknesses in frameworks of conceptual modeling for handling certain aspects of the system being modeled. We propose the use of a flow-based modeling methodology at the conceptual level. Specifically, and without loss of generality, we develop a conceptual description that can be used for controlling the maintenance of a physical system, and demonstrate it by applying it to an existing electrical power plant system. Recent studies reveal difficulties in finding comprehensive answers for monitoring operations and identifying risks, as well as the fact that incomplete information can easily lead to incorrect maintenance. A unified framework for integrated conceptualization is therefore needed. The conceptual modeling approach integrates maintenance operations into a total system comprising humans, physical objects, and information. The proposed model is constructed of abstract machines of things connected by flows, forming an integrated whole. It represents a man-made, intentionally constructed system and includes technical and human things observable in the real world, exemplified by the case study described in this paper. A specification is constructed from a maximum of five basic operations: creation, processing, releasing, transferring, and receiving.
Individualized Multi-directional Variable Selection ; In this paper we propose a heterogeneous modeling framework which achieves individual-wise feature selection and individualized covariates' effects subgrouping simultaneously. In contrast to conventional model selection approaches, the new approach constructs a separation penalty with multi-directional shrinkages, which facilitates individualized modeling to distinguish strong signals from noisy ones and selects different relevant variables for different individuals. Meanwhile, the proposed model identifies subgroups among which individuals share similar covariates' effects, and thus improves individualized estimation efficiency and feature selection accuracy. Moreover, the proposed model also incorporates within-individual correlation for longitudinal data to gain extra efficiency. We provide a general theoretical foundation under a double-divergence modeling framework where the number of individuals and the number of individual-wise measurements can both diverge, which enables inference on both an individual level and a population level. In particular, we establish the strong oracle property for the individualized estimator to ensure its optimal large sample property under various conditions. An efficient ADMM algorithm is developed for computational scalability. Simulation studies and applications to post-trauma mental disorder analysis with genetic variation and an HIV longitudinal treatment study illustrate a comparison of the new approach with existing methods.
Bifurcations in valveless pumping techniques from a coupled fluid-structure-electrophysiology model in heart development ; We explore an embryonic heart model that couples electrophysiology and muscle-force generation to induced flow using a 2D fluid-structure interaction framework based on the immersed boundary method. The propagation of action potentials is coupled to muscular contraction and hence the overall pumping dynamics. In comparison to previous models, the electrodynamical model does not use prescribed motion to initiate the pumping motion; rather, the pumping dynamics are fully coupled to an underlying electrophysiology model governed by the FitzHugh-Nagumo equations. Perturbing the diffusion parameter in the FitzHugh-Nagumo model leads to a bifurcation in the dynamics of action potential propagation. This bifurcation is able to capture a spectrum of different pumping regimes, with dynamic suction pumping and peristaltic-like pumping at the extremes. We find that more bulk flow is produced within the realm of peristaltic-like pumping.
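The role of the diffusion parameter in FitzHugh-Nagumo dynamics, as described in the abstract above, can be illustrated with a minimal 1D reaction-diffusion simulation. This is only a hedged sketch: the parameter values, grid, and stimulus below are illustrative choices, not the paper's fully coupled fluid-structure model.

```python
import numpy as np

def simulate_fhn(D, N=100, dx=1.0, dt=0.1, steps=2000,
                 a=0.1, eps=0.005, gamma=2.5):
    """1D FitzHugh-Nagumo reaction-diffusion solved with explicit Euler.

    v is the activator (action-potential proxy), w the recovery variable.
    Returns the maximum activator value observed at a probe cell far from
    the initial stimulus, as a crude indicator of wave propagation.
    """
    v = np.zeros(N)
    w = np.zeros(N)
    v[:5] = 1.0                     # stimulate the left end
    probe, v_max = 60, 0.0
    for _ in range(steps):
        lap = np.empty(N)
        lap[1:-1] = v[:-2] - 2.0 * v[1:-1] + v[2:]
        lap[0], lap[-1] = v[1] - v[0], v[-2] - v[-1]   # zero-flux ends
        dv = D * lap / dx**2 + v * (v - a) * (1.0 - v) - w
        dw = eps * (v - gamma * w)
        v += dt * dv
        w += dt * dw
        v_max = max(v_max, v[probe])
    return v_max

v_fast = simulate_fhn(1.0)    # strong diffusion: the excitation propagates
v_slow = simulate_fhn(0.01)   # weak diffusion: the excitation stays local
```

Varying `D` in this toy setting already changes whether an excitation reaches distant tissue, echoing how perturbing the diffusion parameter switches the model between pumping regimes.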
Projective, Sparse, and Learnable Latent Position Network Models ; When modeling network data using a latent position model, it is typical to assume that the nodes' positions are independently and identically distributed. However, this assumption implies the average node degree grows linearly with the number of nodes, which is inappropriate when the graph is thought to be sparse. We propose an alternative assumption that the latent positions are generated according to a Poisson point process and show that it is compatible with various levels of sparsity. Unlike other notions of sparse latent position models in the literature, our framework also defines a projective sequence of probability models, thus ensuring consistency of statistical inference across networks of different sizes. We establish conditions for consistent estimation of the latent positions, and compare our results to existing frameworks for modeling sparse networks.
What we learn from learning: Understanding capabilities and limitations of machine learning in botnet attacks ; With the growing prevalence of botnet attacks, computer networks are constantly under threat from attacks that cripple cyber-infrastructure. Detecting these attacks in real time proves to be a difficult and resource-intensive task. One of the pertinent methods to detect such attacks is signature-based detection using machine learning models. This paper explores the efficacy of these models at detecting botnet attacks, using data captured from large-scale network attacks. Our study provides a comprehensive overview of the performance characteristics of two machine learning models, Random Forest and Multi-Layer Perceptron (Deep Learning), in such attack scenarios. Using Big Data analytics, the study explores the advantages, limitations, model-feature parameters, and overall performance of using machine learning to detect botnet attack communication. With insights gained from the analysis, this work recommends algorithms/models for specific botnet attacks instead of a generalized model.
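The two model families compared in the abstract above can be set up in a few lines with scikit-learn. This is a minimal sketch on synthetic data: the feature generator and all hyperparameters are placeholder assumptions, not the paper's capture data or configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for network-flow features labelled benign/botnet.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Random Forest baseline.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# Multi-Layer Perceptron ("Deep Learning" in the abstract's terminology).
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

rf_acc = rf.score(X_te, y_te)
mlp_acc = mlp.score(X_te, y_te)
```

On real botnet traffic one would compare per-attack-type metrics, which is what motivates the abstract's recommendation of specialized models per attack.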
Learning Gene Regulatory Networks with High-Dimensional Heterogeneous Data ; The Gaussian graphical model is a widely used tool for learning gene regulatory networks with high-dimensional gene expression data. Most existing methods for Gaussian graphical models assume that the data are homogeneous, i.e., all samples are drawn from a single Gaussian distribution. However, for many real problems, the data are heterogeneous: they may contain subgroups or come from different sources. This paper proposes to model the heterogeneous data using a mixture Gaussian graphical model, and to apply the imputation-consistency algorithm, combined with the psi-learning algorithm, to estimate the parameters of the mixture model and cluster the samples into subgroups. An integrated Gaussian graphical network is learned across the subgroups along with the iterations of the imputation-consistency algorithm. The proposed method is compared with an existing method for learning mixture Gaussian graphical models as well as a few other methods developed for homogeneous data, such as graphical Lasso, node-wise regression and psi-learning. The numerical results indicate superiority of the proposed method in all aspects of parameter estimation, cluster identification and network construction. The numerical results also indicate the generality of the proposed method: it can be applied to homogeneous data without significant harm.
Cross Domain Regularization for Neural Ranking Models Using Adversarial Learning ; Unlike traditional learning-to-rank models that depend on hand-crafted features, neural representation learning models learn higher-level features for the ranking task by training on large datasets. Their ability to learn new features directly from the data, however, may come at a price. Without any special supervision, these models learn relationships that may hold only in the domain from which the training data is sampled, and generalize poorly to domains not observed during training. We study the effectiveness of adversarial learning as a cross-domain regularizer in the context of the ranking task. We use an adversarial discriminator and train our neural ranking model on a small set of domains. The discriminator provides a negative feedback signal to discourage the model from learning domain-specific representations. Our experiments show consistently better performance on held-out domains in the presence of the adversarial discriminator, sometimes up to 30% on precision@1.
Quadratic operator relations and Bethe equations for spin-1/2 Richardson-Gaudin models ; In this work we demonstrate how one can, in a generic approach, derive a set of N simple quadratic Bethe equations for integrable Richardson-Gaudin (RG) models built out of N spins-1/2. These equations depend only on the N eigenvalues of the various conserved charges, so that any solution of these equations defines, indirectly through the corresponding set of eigenvalues, one particular eigenstate. The proposed construction covers the full class of integrable RG models of the XYZ type (including the subclasses of XXZ and XXX models) realised in terms of spins-1/2, coupled with one another through $\sigma_i^x \sigma_j^x$, $\sigma_i^y \sigma_j^y$, $\sigma_i^z \sigma_j^z$ terms, including, as well, magnetic-field-like terms linear in the Pauli matrices. The approach exclusively requires integrability, defined here only by the requirement that there exist N conserved charges $R_i$, with $i = 1, 2, \dots, N$, such that $[R_i, R_j] = 0$ for all $i, j$. The result is therefore valid, and equally simple, for models with or without U(1) symmetry, with or without a properly defined pseudovacuum, as well as for models with non-skew-symmetric couplings.
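The structure described in the abstract above can be sketched schematically as follows. This is an illustrative generic form only, not the paper's exact equations: the couplings $J^x_{ij}, J^y_{ij}, J^z_{ij}$, fields $\vec{B}_i$, and coefficients $\alpha_i, \beta_{ij}$ are placeholders whose model-dependent values the paper derives.

```latex
% Generic form of the N commuting conserved charges for a spin-1/2 RG model:
R_i = \vec{B}_i \cdot \vec{\sigma}_i
      + \sum_{j \neq i} \left( J^x_{ij}\, \sigma_i^x \sigma_j^x
      + J^y_{ij}\, \sigma_i^y \sigma_j^y
      + J^z_{ij}\, \sigma_i^z \sigma_j^z \right),
\qquad [R_i, R_j] = 0 \quad \forall\, i, j.

% Schematic shape of the quadratic Bethe equations: they close on the
% eigenvalues r_i of the charges R_i alone, one equation per charge,
% with coefficients fixed by the couplings of the model:
r_i^2 = \alpha_i + \sum_{j \neq i} \beta_{ij}\, r_j, \qquad i = 1, \dots, N.
```

Each solution $(r_1, \dots, r_N)$ of such a quadratic system then labels one eigenstate through its set of conserved-charge eigenvalues, which is the key simplification the abstract emphasises.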
Bayesian forecasting of many count-valued time series ; This paper develops forecasting methodology and application of new classes of dynamic models for time series of non-negative counts. Novel univariate models synthesise dynamic generalized linear models for binary and conditionally Poisson time series, with dynamic random effects for over-dispersion. These models allow use of dynamic covariates in both binary and non-zero count components. Sequential Bayesian analysis allows fast, parallel analysis of sets of decoupled time series. New multivariate models then enable information sharing in contexts when data at a more highly aggregated level provide more incisive inferences on shared patterns such as trends and seasonality. A novel multi-scale approach, one new example of the concept of decouple/recouple in time series, enables information sharing across series. This incorporates cross-series linkages while insulating parallel estimation of univariate models, hence enabling scalability in the number of series. The major motivating context is supermarket sales forecasting. Detailed examples drawn from a case study in multi-step forecasting of sales of a number of related items showcase forecasting of multiple series, with discussion of forecast accuracy metrics and broader questions of probabilistic forecast accuracy assessment.
Beyond Structural Causal Models: Causal Constraints Models ; Structural Causal Models (SCMs) provide a popular causal modeling framework. In this work, we show that SCMs are not flexible enough to give a complete causal representation of dynamical systems at equilibrium. Instead, we propose a generalization of the notion of an SCM, which we call a Causal Constraints Model (CCM), and prove that CCMs do capture the causal semantics of such systems. We show how CCMs can be constructed from differential equations and initial conditions, and we illustrate our ideas further on a simple but ubiquitous biochemical reaction. Our framework also allows one to model functional laws, such as the ideal gas law, in a sensible and intuitive way.
Tracking State Changes in Procedural Text: A Challenge Dataset and Models for Process Paragraph Comprehension ; We present a new dataset and models for comprehending paragraphs about processes (e.g., photosynthesis), an important genre of text describing a dynamic world. The new dataset, ProPara, is the first to contain natural (rather than machine-generated) text about a changing world along with a full annotation of entity states (location and existence) during those changes (81k datapoints). The end-task, tracking the location and existence of entities through the text, is challenging because the causal effects of actions are often implicit and need to be inferred. We find that previous models that have worked well on synthetic data achieve only mediocre performance on ProPara, and introduce two new neural models that exploit alternative mechanisms for state prediction, in particular using LSTM input encoding and span prediction. The new models improve accuracy by up to 19%. The dataset and models are available to the community at http://data.allenai.org/propara.
Variational Inference for Data-Efficient Model Learning in POMDPs ; Partially observable Markov decision processes (POMDPs) are a powerful abstraction for tasks that require decision making under uncertainty, and capture a wide range of real-world tasks. Today, effective planning approaches exist that generate effective strategies given black-box models of a POMDP task. Yet, an open question is how to acquire accurate models for complex domains. In this paper we propose DELIP, an approach to model learning for POMDPs that utilizes amortized structured variational inference. We empirically show that our model leads to effective control strategies when coupled with state-of-the-art planners. Intuitively, model-based approaches should be particularly beneficial in environments with changing reward structures, or where rewards are initially unknown. Our experiments confirm that DELIP is particularly effective in this setting.
Equilibrium Restrictions and Approximate Models, with an application to Pricing Macroeconomic Risk ; We propose a method that reconciles two popular approaches to structural estimation and inference: using a complete yet approximate model versus imposing a set of credible behavioral conditions. This is done by distorting the approximate model to satisfy these conditions. We provide the asymptotic theory and Monte Carlo evidence, and illustrate that counterfactual experiments are possible. We apply the methodology to the model of long-run risks in aggregate consumption (Bansal and Yaron, 2004), where the complete model is generated using the Campbell and Shiller (1988) approximation. Using US data, we investigate the empirical importance of the neglected nonlinearity. We find that distorting the model to satisfy the nonlinear equilibrium condition is strongly preferred by the data, while the quality of the approximation is yet another reason for the downward bias in estimates of the intertemporal elasticity of substitution and the upward bias in risk aversion.
Retraining-Based Iterative Weight Quantization for Deep Neural Networks ; Model compression has gained a lot of attention due to its ability to reduce hardware resource requirements significantly while maintaining accuracy of DNNs. Model compression is especially useful for memory-intensive recurrent neural networks because a smaller memory footprint is crucial not only for reducing storage requirements but also for fast inference operations. Quantization is known to be an effective model compression method, and researchers are interested in minimizing the number of bits needed to represent parameters. In this work, we introduce an iterative technique to apply quantization, presenting a high compression ratio without any modifications to the training algorithm. In the proposed technique, weight quantization is followed by retraining the model with full-precision weights. We show that iterative retraining generates new sets of weights which can be quantized with decreasing quantization loss at each iteration. We also show that quantization is efficiently able to leverage pruning, another effective model compression method. Implementation issues in combining the two methods are also addressed. Our experimental results demonstrate that an LSTM model using 1-bit quantized weights is sufficient for the PTB dataset without any accuracy degradation, while previous methods demand at least 2-4 bits for quantized weights.
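The quantize step at the core of the scheme described above can be sketched in a few lines of NumPy. This is a hedged illustration: the abstract does not specify the quantizer, so a plain uniform quantizer is used here, and the retraining step (which in the paper is ordinary full-precision training) is shown only as a comment.

```python
import numpy as np

def quantize(w, bits):
    """Uniform quantization of a weight array to 2**bits levels."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    codes = np.round((w - lo) / step)      # integer codes in [0, levels-1]
    return lo + codes * step

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                  # stand-in for a weight tensor

w_q = quantize(w, bits=4)
err = np.abs(w - w_q).max()                # bounded by step/2

# The iterative scheme from the abstract (retrain step is hypothetical here):
# for it in range(num_iterations):
#     w_q = quantize(w, bits)
#     w = retrain_at_full_precision(init=w_q)   # ordinary training, full precision
```

The paper's observation is that each retraining pass from a quantized initialization yields weights that incur a smaller quantization loss on the next pass, eventually allowing very low bit widths.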
Poisson-Lie analogues of spin Sutherland models ; We present generalizations of the well-known trigonometric spin Sutherland models, which were derived by Hamiltonian reduction of 'free motion' on cotangent bundles of compact simple Lie groups based on the conjugation action. Our models result from reducing the corresponding Heisenberg doubles with the aid of a Poisson-Lie analogue of the conjugation action. We describe the reduced symplectic structure and show that the reduced 'main Hamiltonians' reproduce the spin Sutherland model by keeping only their leading terms. The solutions of the equations of motion emerge from geodesics on the compact Lie group via the standard projection method and possess many first integrals. Similar hyperbolic spin Ruijsenaars-Schneider type models were obtained previously by L.C. Li using a different method, based on coboundary dynamical Poisson groupoids, but their relation with spin Sutherland models was not discussed.
Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency ; Noise Contrastive Estimation (NCE) is a powerful parameter estimation method for log-linear models, which avoids calculation of the partition function or its derivatives at each training step, a computationally demanding step in many cases. It is closely related to negative sampling methods, now widely used in NLP. This paper considers NCE-based estimation of conditional models. Conditional models are frequently encountered in practice; however, there has not been a rigorous theoretical analysis of NCE in this setting, and we will argue there are subtle but important questions when generalizing NCE to the conditional case. In particular, we analyze two variants of NCE for conditional models: one based on a classification objective, the other based on a ranking objective. We show that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method; we analyze the statistical efficiency of the ranking-based and classification-based variants of NCE; finally we describe experiments on synthetic data and language modeling showing the effectiveness and tradeoffs of both methods.
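The two NCE variants contrasted in the abstract above can be written down concretely for a single training example. This is a hedged sketch of common formulations (score corrections by the noise log-density vary across papers), not necessarily the exact objectives analyzed in the paper.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def ranking_nce_loss(s_pos, s_neg, logq_pos, logq_neg):
    """Ranking variant: the observed y must outrank k noise samples
    in a softmax over noise-corrected scores."""
    z = np.concatenate(([s_pos - logq_pos], s_neg - logq_neg))
    return -log_softmax(z)[0]

def classification_nce_loss(s_pos, s_neg, logq_pos, logq_neg, k):
    """Classification variant: each candidate is labelled data vs noise
    by a logistic discriminator on the corrected score."""
    def logsig(t):
        return -np.log1p(np.exp(-t))
    t_pos = s_pos - (np.log(k) + logq_pos)
    t_neg = s_neg - (np.log(k) + logq_neg)
    return -(logsig(t_pos) + logsig(-t_neg).sum())

# Toy numbers: model score 2.0 for the true y, 0.0 for two noise samples,
# uniform noise distribution (log q = 0 after normalization constants).
s_neg, logq_neg = np.array([0.0, 0.0]), np.zeros(2)
rank_hi = ranking_nce_loss(2.0, s_neg, 0.0, logq_neg)
rank_lo = ranking_nce_loss(0.0, s_neg, 0.0, logq_neg)
cls_hi = classification_nce_loss(2.0, s_neg, 0.0, logq_neg, k=2)
```

Both objectives reward raising the true score relative to noise; the paper's point is that their consistency and efficiency guarantees differ in the conditional setting.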
Spectral gap critical exponent for Glauber dynamics of hierarchical spin models ; We develop a renormalisation group approach to deriving the asymptotics of the spectral gap of the generator of Glauber-type dynamics of spin systems with strong correlations at and near a critical point. In our approach, we derive a spectral gap inequality for the measure recursively in terms of spectral gap inequalities for a sequence of renormalised measures. We apply our method to hierarchical versions of the 4-dimensional $n$-component $\varphi^4$ model at the critical point and its approach from the high temperature side, and of the 2-dimensional Sine-Gordon and the Discrete Gaussian models in the rough phase (Kosterlitz-Thouless phase). For these models, we show that the spectral gap decays polynomially like the spectral gap of the dynamics of a free field (with a logarithmic correction for the $\varphi^4$ model), the scaling limit of these models in equilibrium.
Fairness Through Causal Awareness: Learning Latent-Variable Models for Biased Data ; How do we learn from biased data? Historical datasets often reflect historical prejudices; sensitive or protected attributes may affect the observed treatments and outcomes. Classification algorithms tasked with predicting outcomes accurately from these datasets tend to replicate these biases. We advocate a causal modeling approach to learning from biased data, exploring the relationship between fair classification and intervention. We propose a causal model in which the sensitive attribute confounds both the treatment and the outcome. Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders. We show experimentally that fairness-aware causal modeling provides better estimates of the causal effects between the sensitive attribute, the treatment, and the outcome. We further present evidence that estimating these causal effects can help learn policies that are both more accurate and fair, when presented with a historically biased dataset.
Testing a quintessence model with Yukawa interaction from cosmological observations and N-body simulations ; We consider a quintessence model with a Yukawa interaction between dark energy and dark matter and constrain this model by employing recent cosmological data, including the updated cosmic microwave background (CMB) measurements from Planck 2015, the weak gravitational lensing measurements from the Kilo Degree Survey (KiDS) and redshift-space distortions. We find that an interaction in the dark sector is compatible with observations. The updated Planck data can significantly improve the constraints compared with the previous results from Planck 2013, while the KiDS data has less constraining power than Planck. The Yukawa interaction model is found to be moderately favored by Planck and able to alleviate the discordance between weak lensing measurements and CMB measurements previously inferred from the standard $\Lambda$ cold dark matter model. N-body simulations for the Yukawa interaction model are also performed. We find that using the halo density profile could plausibly improve the constraints significantly in the future.
Hyperbolic normal stochastic volatility model ; For option pricing models and heavy-tailed distributions, this study proposes a continuous-time stochastic volatility model based on an arithmetic Brownian motion: a one-parameter extension of the normal stochastic alpha-beta-rho (SABR) model. Using two generalized Bougerol's identities from the literature, the study shows that our model has a closed-form Monte-Carlo simulation scheme and that the transition probability for one special case follows Johnson's SU distribution, a popular heavy-tailed distribution originally proposed without an associated stochastic process. It is argued that the SU distribution serves as an analytically superior alternative to the normal SABR model because the two distributions are empirically similar.
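The dynamics underlying the normal SABR family referenced above can be illustrated with a naive Euler discretisation. To be clear about what this is not: the paper derives an exact closed-form simulation scheme, whereas the sketch below is only a generic discretisation of normal-SABR-type dynamics with placeholder parameter values.

```python
import numpy as np

def normal_sabr_paths(f0=100.0, sigma0=20.0, alpha=0.5, rho=-0.2,
                      T=1.0, n_steps=100, n_paths=20000, seed=0):
    """Euler scheme for normal SABR dynamics:
        dF = sigma dW,   dsigma = alpha sigma dZ,   corr(dW, dZ) = rho.
    Returns terminal forward values F_T across simulated paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    f = np.full(n_paths, f0)
    sig = np.full(n_paths, sigma0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        f += sig * np.sqrt(dt) * z1
        # Exact lognormal step for the volatility (a martingale in sigma).
        sig *= np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha**2 * dt)
    return f

f_T = normal_sabr_paths()
# F is a martingale, so the sample mean should sit near f0, while the
# random volatility fattens the tails relative to a plain normal.
```

The heavy tails produced by the stochastic volatility are exactly what motivates matching the terminal distribution to Johnson's SU family in the paper.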
Locally D-optimal Designs for a Wider Class of Nonlinear Models on the k-dimensional Ball, with applications to logit and probit models ; In this paper we extend the results of Radloff and Schwabe (2018), which could be applied, for example, to Poisson regression, negative binomial regression and proportional hazard models with censoring, to a wider class of nonlinear multiple regression models. This includes the binary response models with logit and probit links, among others. For this class of models we derive locally D-optimal designs when the design region is a k-dimensional ball. For the corresponding construction we make use of the concepts of invariance and equivariance in the context of optimal designs, as in our previous paper. In contrast to the former results, the designs will not necessarily be exact designs in all cases; instead, approximate designs can appear. These results can be generalized to arbitrary ellipsoidal design regions.
Characterization of Biologically Relevant Network Structures from Time-series Data ; High-throughput data acquisition in synthetic biology leads to an abundance of data that need to be processed and aggregated into useful biological models. Building dynamical models based on this wealth of data is of paramount importance to understand and optimize designs of synthetic biology constructs. However, building models manually for each data set is inconvenient and might become infeasible for highly complex synthetic systems. In this paper, we present state-of-the-art system identification techniques and combine them with chemical reaction network theory (CRNT) to generate dynamic models automatically. On the system identification side, Sparse Bayesian Learning offers methods to learn from data the sparsest set of dictionary functions necessary to capture the dynamics of the system in ODE models; on the CRNT side, building on such sparse ODE models, all possible network structures within a given parameter uncertainty region can be computed. Additionally, the system identification process can be complemented with constraints on the parameters to, for example, enforce stability or non-negativity, thus offering relevant physical constraints over the possible network structures. In this way, the wealth of data can be translated into biologically relevant network structures, which then steers the data acquisition, thereby providing a vital step for closed-loop system identification.
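The sparse-dictionary idea described in the abstract above can be demonstrated with a toy ODE. As a hedged stand-in for Sparse Bayesian Learning, the sketch below uses sequentially thresholded least squares (a simpler sparsifier in the same spirit) to recover dx/dt = -2x from noisy derivative data; the system, dictionary, and threshold are all illustrative choices.

```python
import numpy as np

# Toy system: dx/dt = -2*x, observed with small derivative noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=200)
dxdt = -2.0 * x + 0.01 * rng.standard_normal(200)

# Dictionary of candidate terms: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

def stls(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: fit, zero small
    coefficients, refit on the surviving dictionary terms."""
    xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dXdt, rcond=None)[0]
    return xi

xi = stls(Theta, dxdt)   # expect roughly [0, -2, 0, 0]
```

The recovered sparse coefficient vector is exactly the kind of sparse ODE model on which the CRNT step then enumerates compatible network structures.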
Learning and Planning with a Semantic Model ; Building deep reinforcement learning agents that can generalize and adapt to unseen environments remains a fundamental challenge for AI. This paper describes progress on this challenge in the context of man-made environments, which are visually diverse but contain intrinsic semantic regularities. We propose a hybrid model-based and model-free approach, LEArning and Planning with Semantics (LEAPS), consisting of a multi-target sub-policy that acts on visual inputs, and a Bayesian model over semantic structures. When placed in an unseen environment, the agent plans with the semantic model to make high-level decisions, proposes the next sub-target for the sub-policy to execute, and updates the semantic model based on new observations. We perform experiments in visual navigation tasks using House3D, a 3D environment that contains diverse human-designed indoor scenes with real-world objects. LEAPS outperforms strong baselines that do not explicitly plan using the semantic content.
Categorical Semantics for Time Travel ; We introduce a general categorical framework to reason about quantum theory and other process theories living in spacetimes where Closed Timelike Curves (CTCs) are available, allowing resources to travel back in time and provide computational speedups. Our framework is based on a weakening of the definition of traced symmetric monoidal categories, obtained by dropping the yanking axiom and the requirement that the trace be defined on all morphisms. We show that the two leading models for quantum theory with closed timelike curves, namely the P-CTC model of Lloyd et al. and the D-CTC model of Deutsch, are captured by our framework, and in doing so we provide the first compositional description of the D-CTC model. Our description of the D-CTC model results in a process theory which respects the constraints of relativistic causality; this is in direct contrast to the P-CTC model, where CTCs are implemented by a trace and allow postselection to be performed deterministically.
Joint Entity Linking with Deep Reinforcement Learning ; Entity linking is the task of aligning mentions to corresponding entities in a given knowledge base. Previous studies have highlighted the necessity for entity linking systems to capture global coherence. However, there are two common weaknesses in previous global models. First, most of them calculate the pairwise scores between all candidate entities and select the most relevant group of entities as the final result. In this process, both the consistency among wrong entities and that among right ones are involved, which may introduce noisy data and increase the model complexity. Second, the cues from previously disambiguated entities, which could contribute to the disambiguation of subsequent mentions, are usually ignored by previous models. To address these problems, we convert global linking into a sequence decision problem and propose a reinforcement learning model which makes decisions from a global perspective. Our model makes full use of the previously referred entities and explores the long-term influence of the current selection on subsequent decisions. We conduct experiments on different types of datasets; the results show that our model outperforms state-of-the-art systems and has better generalization performance.
A general model for dynamic contact angle over the full speed regime ; The prevailing models for the advancing dynamic contact angle are under intensive debate, and their fitting performance is far from satisfying in practice. The present study proposes a model based on the recent understanding of the multiscale structure and the local friction at the contact line. The model has unprecedented fitting performance for the dynamic contact angle over a wide spreading-speed regime spanning more than five orders of magnitude. The model also applies well to non-monotonous angle variations, which have long been considered abnormal. The model has three fitting parameters, two of which are nearly predictable at this stage: the one denoting the multiscale ratio is nearly constant, and the one representing the primary frictional coefficient has a simple correlation with the liquid bulk viscosity.
Reconstructing Gravity on Cosmological Scales ; We present the datadriven reconstruction of gravitational theories and Dark Energy models on cosmological scales. We showcase the power of present cosmological probes at constraining these models and quantify the knowledge of their properties that can be acquired through state of the art data. This reconstruction exploits the power of the Effective Field Theory approach to Dark Energy and Modified Gravity phenomenology, which compresses the freedom in defining such models into a finite set of functions that can be reconstructed across cosmic times using cosmological data. We consider several model classes described within this framework and thoroughly discuss their phenomenology and data implications. We find that some models can alleviate the present discrepancy in the determination of the Hubble constant as inferred from the cosmic microwave background and as directly measured. This results in a statistically significant preference for the reconstructed theories over the standard cosmological model.
Fooling Neural Network Interpretations via Adversarial Model Manipulation ; We ask whether the neural network interpretation methods can be fooled via adversarial model manipulation, which is defined as a model finetuning step that aims to radically alter the explanations without hurting the accuracy of the original models, e.g., VGG19, ResNet50, and DenseNet121. By incorporating the interpretation results directly in the penalty term of the objective function for finetuning, we show that the stateoftheart saliency map based interpreters, e.g., LRP, GradCAM, and SimpleGrad, can be easily fooled with our model manipulation. We propose two types of fooling, Passive and Active, and demonstrate such foolings generalize well to the entire validation set as well as transfer to other interpretation methods. Our results are validated by both visually showing the fooled explanations and reporting quantitative metrics that measure the deviations from the original explanations. We claim that the stability of neural network interpretation method with respect to our adversarial model manipulation is an important criterion to check for developing robust and reliable neural network interpretation method.
The fluid limit of a random graph model for a shared ledger ; A shared ledger is a record of transactions that can be updated by any member of a group of users. The notion of independent and consistent recordkeeping in a shared ledger is important for blockchain and more generally for distributed ledger technologies. In this paper we analyze the growth of a model for the tangle, which is the shared ledger protocol used as the basis for the IOTA cryptocurrency. The model is a random directed acyclic graph, and its growth is described by a nonMarkovian stochastic process. We derive a delay differential equation for the fluid model which describes the tangle at high arrival rate. We prove convergence in probability of the tangle process to the fluid model, and also prove global stability of the fluid model. The convergence proof relies on martingale techniques.
The MFA ground states for the extended BoseHubbard model with a threebody constraint ; We address the intensively studied extended bosonic Hubbard model EBHM with truncation of the onsite Hilbert space to the three lowest occupation states n0,1,2 within the S1 pseudospin formalism. A similar model was recently proposed to describe the charge degree of freedom in a model highTc cuprate with the onsite Hilbert space reduced to the three effective valence centers, nominally Cu1;2;3. With small corrections the model becomes equivalent to a strongly anisotropic S1 quantum magnet in an external magnetic field. We have applied a generalized meanfield approach and a quantum MonteCarlo technique to the 2D S1 model system with twoparticle transport to find the ground state phases and their evolution under deviation from halffilling.
Vison Crystals in an Extended Kitaev Model on the Honeycomb Lattice ; We introduce an extension of the Kitaev honeycomb model by including fourspin interactions that preserve the local gauge structure and hence the integrability of the original model. The extended model has a rich phase diagram containing five distinct vison crystals, as well as a symmetric piflux spin liquid with a Fermi surface of Majorana fermions and a sequence of Lifshitz transitions. We discuss possible experimental signatures and, in particular, present finitetemperature Monte Carlo calculations of the specific heat and the static vison structure factor. We argue that our extended model emerges naturally from generic perturbations to the Kitaev honeycomb model.
Center of mass velocity from Mass Polariton solution to AbrahamMinkowski controversy ; A onehundredyearold controversy, the AbrahamMinkowski controversy, concerns two widely disparate predictions for the momentum of light in glass. In the Abraham case the photon momentum is inversely proportional to the index of refraction, while the photon momentum predicted by the Minkowski formulation is proportional to the index of refraction. In spite of significant theoretical and experimental effort, neither formulation could convincingly be shown to be correct. A new model, called the Mass Polariton MP model, was recently introduced that resolves the contradictions between the Abraham and Minkowski formulations and experimental results. Both the MP and Minkowski models predict the same momentum change for a photon entering glass from a vacuum, yet the Minkowski model allows violation of the center of mass theorem. This paper shows that the MP model, in spite of predicting the same momentum change as the Minkowski model, obeys the center of mass theorem.
Probabilistic Neuralsymbolic Models for Interpretable Visual Question Answering ; We propose a new class of probabilistic neuralsymbolic models that have symbolic functional programs as a latent, stochastic variable. Instantiated in the context of visual question answering, our probabilistic formulation offers two key conceptual advantages over prior neuralsymbolic models for VQA. Firstly, the programs generated by our model are more understandable while requiring a smaller number of teaching examples. Secondly, we show that one can pose counterfactual scenarios to the model, to probe its beliefs on the programs that could lead to a specified answer given an image. Our results on the CLEVR and SHAPES datasets verify our hypotheses, showing that the model achieves better program and answer prediction accuracy even in the low data regime, and allows one to probe the coherence and consistency of the reasoning performed.
Phasefield modeling of fracture for quasibrittle materials ; This paper addresses the modeling of fracture in quasibrittle materials using a phasefield approach to the description of crack topology. Within the computational mechanics community, several studies have treated the issue of modeling fracture using phase fields. Most of these studies have used an approach that implies the lack of a damage threshold. We herein explore an alternative model that includes a damage threshold and study how it compares with the most popular approach. The formulation is systematically explained within a rigorous variational framework. Subsequently, we present the corresponding threedimensional finite element discretization that leads to a straightforward numerical implementation. Benchmark simulations in two dimensions and three dimensions are then presented. The results show that while an elastic stage and a damage threshold are ensured by the present model, good agreement with the results reported in the literature can be obtained, where such features are generally absent.
BERT for Joint Intent Classification and Slot Filling ; Intent classification and slot filling are two essential tasks for natural language understanding. They often suffer from smallscale humanlabeled training data, resulting in poor generalization capability, especially for rare words. Recently a new language representation model, BERT Bidirectional Encoder Representations from Transformers, facilitates pretraining deep bidirectional representations on largescale unlabeled corpora, and has created stateoftheart models for a wide variety of natural language processing tasks after simple finetuning. However, there has not been much effort on exploring BERT for natural language understanding. In this work, we propose a joint intent classification and slot filling model based on BERT. Experimental results demonstrate that our proposed model achieves significant improvement on intent classification accuracy, slot filling F1, and sentencelevel semantic frame accuracy on several public benchmark datasets, compared to the attentionbased recurrent neural network models and slotgated models.
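A minimal sketch of the joint setup described above: a shared encoder produces token representations, a sentence-level head predicts the intent from the first token's encoding (playing the role of [CLS]), and a token-level head predicts a slot label per token. The random-projection "encoder", dimensions, and head shapes are illustrative assumptions standing in for BERT, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

SEQ_LEN, HIDDEN, N_INTENTS, N_SLOTS = 6, 16, 3, 5

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def joint_heads(hidden_states):
    """hidden_states: (SEQ_LEN, HIDDEN) token encodings from a shared encoder.
    Returns one intent distribution per utterance and one slot distribution per token."""
    W_intent = rng.normal(size=(HIDDEN, N_INTENTS))
    W_slot = rng.normal(size=(HIDDEN, N_SLOTS))
    intent_probs = softmax(hidden_states[0] @ W_intent)    # sentence-level intent
    slot_probs = softmax(hidden_states @ W_slot, axis=-1)  # per-token slot labels
    return intent_probs, slot_probs

h = rng.normal(size=(SEQ_LEN, HIDDEN))  # stand-in for BERT token encodings
intent_probs, slot_probs = joint_heads(h)
```

In the joint model both heads would be trained together, so errors from one task regularize the other, which is the source of the reported gains.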
Composite Fading Models based on Inverse Gamma Shadowing Theory and Validation ; We introduce a general approach to characterize composite fading models based on inverse gamma IG shadowing. We first determine to what extent the IG distribution is an adequate choice for modeling shadow fading, by means of a comprehensive test with field measurements and other distributions conventionally used for this purpose. Then, we prove that the probability density function and cumulative distribution function of any IGbased composite fading model are directly expressed in terms of a Laplacedomain statistic of the underlying fast fading model and, in some relevant cases, as a mixture of wellknown stateoftheart distributions. Also, exact and asymptotic expressions for the outage probability are provided, which are valid for any choice of baseline fading distribution. Finally, we exemplify our approach by presenting several application examples for IGbased composite fading models, for which their statistical characterization is directly obtained in a simple form.
Numerical study of condensation in a Fermilike model of counterflowing particles via Gini coefficient ; The collective motion of selfdriven particles shows interesting novel phenomena such as swarming and the emergence of patterns. We have recently proposed a model for counterflowing particles that captures this idea and exhibits clogging transitions. This model is based on a generalization of the FermiDirac statistics wherein a maximal occupation of each cell is imposed. Here we present a detailed study comparing synchronous and asynchronous stochastic dynamics within this model. We show that an asynchronous updating scheme supports the mobileclogging transition and eliminates some mobility anomalies that are present in synchronous Monte Carlo simulations. Moreover, we show that this transition depends on its initial conditions. Although the Gini coefficient was originally used to measure wealth inequality, we show that it is also efficient for studying the mobileclogging transition. Finally, we compare our stochastic simulation with direct numerical integration of the partial differential equations used to describe this model.
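As a hedged sketch of the order parameter used in the abstract above: the Gini coefficient, originally a wealth-inequality measure, can quantify how unevenly particles occupy cells, so that a clogged state (particles piled into few cells) scores high and a mobile state scores low. The occupancy vectors below are illustrative, not actual simulation output.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a nonnegative 1D array via the mean absolute difference."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mad = np.abs(x[:, None] - x[None, :]).sum() / (n * n)  # mean |x_i - x_j|
    return mad / (2.0 * x.mean())

mobile_phase = np.array([3, 4, 3, 4, 3, 3])    # occupancies spread evenly -> low Gini
clogged_phase = np.array([0, 0, 18, 2, 0, 0])  # occupancies piled into one cell -> high Gini
```

A perfectly uniform occupancy gives a Gini of 0, and concentrating everything in one cell pushes it toward 1, which is what makes it a convenient scalar signal for the mobile-clogging transition.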
Cosmological data favor Galileon ghost condensate over LambdaCDM ; We place observational constraints on the Galileon ghost condensate model, a dark energy proposal in cubicorder Horndeski theories consistent with the gravitationalwave event GW170817. The model extends the covariant Galileon by taking an additional higherorder field derivative X^2 into account. This allows the dark energy equation of state w_DE to access the region -2 < w_DE < -1 without ghosts. Indeed, this peculiar evolution of w_DE is favored over that of the cosmological constant Lambda from the joint data analysis of cosmic microwave background CMB radiation, baryonic acoustic oscillations BAOs, supernovae type Ia SNIa and redshiftspace distortions RSDs. Furthermore, our model exhibits a better compatibility with the CMB data over the Lambdacolddarkmatter LambdaCDM model by suppressing largescale temperature anisotropies. The CMB temperature and polarization data lead to an estimation for today's Hubble parameter H0 consistent with its direct measurements at 2sigma. We perform a model selection analysis by using several methods and find a statistically significant preference of the Galileon ghost condensate model over LambdaCDM.
Singletdoublettriplet dark matter and neutrino masses ; In these proceedings, we present a study of a combined singletdoublet fermion and triplet scalar model for dark matter DM. Together, these models form a simple extension of the Standard Model SM that can account for DM and explain the existence of neutrino masses, which are generated radiatively. However, this also implies the existence of lepton flavour violating LFV processes. In addition, this particular model allows for gauge coupling unification. The new fields are odd under a new mathbbZ2 symmetry to stabilise the DM candidate. We analyse the DM, neutrino mass and LFV aspects, exploring the viable parameter space of the model. This is done using a numerical random scan imposing successively the neutrino mass and mixing, relic density, Higgs mass, direct detection, collider and LFV constraints. We find that DM in this model is fermionic for masses below about 1 TeV and scalar above. We observe a high degree of complementarity between direct detection and LFV experiments, which should soon allow to fully probe the fermionic DM sector and at least partially the scalar DM sector.
EnsembleNet EndtoEnd Optimization of Multiheaded Models ; Ensembling is a universally useful approach to boost the performance of machine learning models. However, individual models in an ensemble were traditionally trained independently in separate stages, without information about the overall ensemble. Many codistillation approaches were proposed in order to treat model ensembling as a firstclass citizen. In this paper, we reveal a deeper connection between ensembling and distillation, and come up with a simpler yet more effective codistillation architecture. On largescale datasets including ImageNet, YouTube8M, and Kinetics, we demonstrate a general procedure that can convert a single deep neural network to a multiheaded model that has not only a smaller size but also better performance. The model can be optimized endtoend with our proposed codistillation loss in a single stage without human intervention.
Fast computation of loudness using a deep neural network ; The present paper introduces a deep neural network DNN for predicting the instantaneous loudness of a sound from its time waveform. The DNN was trained using the output of a more complex model, called the Cambridge loudness model. While a modern PC can perform a few hundred loudness computations per second using the Cambridge loudness model, it can perform more than 100,000 per second using the DNN, allowing realtime calculation of loudness. The rootmeansquare deviation between the predictions of instantaneous loudness level using the two models was less than 0.5 phon for unseen types of sound. We think that the general approach of simulating a complex perceptual model by a much faster DNN can be applied to other perceptual models to make them run in real time.
Bivariate BetaLSTM ; Long ShortTerm Memory LSTM infers the long term dependency through a cell state maintained by the input and the forget gate structures, which models a gate output as a value in 0,1 through a sigmoid function. However, due to the graduality of the sigmoid function, the sigmoid gate is not flexible in representing multimodality or skewness. Besides, the previous models lack modeling on the correlation between the gates, which would be a new method to adopt inductive bias for a relationship between previous and current input. This paper proposes a new gate structure with the bivariate Beta distribution. The proposed gate structure enables probabilistic modeling on the gates within the LSTM cell so that the modelers can customize the cell state flow with priors and distributions. Moreover, we theoretically show the higher upper bound of the gradient compared to the sigmoid function, and we empirically observed that the bivariate Beta distribution gate structure provides higher gradient values in training. We demonstrate the effectiveness of bivariate Beta gate structure on the sentence classification, image classification, polyphonic music modeling, and image caption generation.
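To illustrate the contrast drawn above between a sigmoid gate and a Beta-distributed gate: the construction below builds two correlated Beta gates from a shared Dirichlet sample, so both lie in (0, 1) and are dependent. This Dirichlet construction and all parameter values are our illustrative assumptions, not necessarily the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bivariate_beta_gates(a1, a2, a3, size=1000):
    """Correlated gates g1 ~ Beta(a1, a3) and g2 ~ Beta(a2, a3) derived from a
    shared Dirichlet(a1, a2, a3) sample; the shared third component induces
    dependence between the two gates."""
    u = rng.dirichlet([a1, a2, a3], size=size)
    g1 = u[:, 0] / (u[:, 0] + u[:, 2])
    g2 = u[:, 1] / (u[:, 1] + u[:, 2])
    return g1, g2

# With a1, a2, a3 < 1 each Beta gate concentrates near 0 and 1 (bimodal),
# a shape a single sigmoid of a unimodal pre-activation cannot express.
g1, g2 = bivariate_beta_gates(0.5, 0.5, 0.5)
```

The point of the sketch is the extra flexibility: a sigmoid gate is a deterministic squashing of its pre-activation, while the Beta gate is a random variable whose shape (skewness, bimodality) and cross-gate correlation can be controlled through the concentration parameters.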
Interactive Differentiable Simulation ; Intelligent agents need a physical understanding of the world to predict the impact of their actions in the future. While learningbased models of the environment dynamics have contributed to significant improvements in sample efficiency compared to modelfree reinforcement learning algorithms, they typically fail to generalize to system states beyond the training data, while often grounding their predictions on noninterpretable latent variables. We introduce Interactive Differentiable Simulation IDS, a differentiable physics engine, that allows for efficient, accurate inference of physical properties of rigidbody systems. Integrated into deep learning architectures, our model is able to accomplish system identification using visual input, leading to an interpretable model of the world whose parameters have physical meaning. We present experiments showing automatic taskbased robot design and parameter estimation for nonlinear dynamical systems by automatically calculating gradients in IDS. When integrated into an adaptive modelpredictive control algorithm, our approach exhibits orders of magnitude improvements in sample efficiency over modelfree reinforcement learning algorithms on challenging nonlinear control domains.
Rotationinvariant Mixed Graphical Model Network for 2D Hand Pose Estimation ; In this paper, we propose a new architecture named Rotationinvariant Mixed Graphical Model Network RMGMN to solve the problem of 2D hand pose estimation from a monocular RGB image. By integrating a rotation net, the RMGMN is invariant to rotations of the hand in the image. It also has a pool of graphical models, from which a combination of graphical models could be selected, conditioning on the input image. Belief propagation is performed on each graphical model separately, generating a set of marginal distributions, which are taken as the confidence maps of hand keypoint positions. Final confidence maps are obtained by aggregating these confidence maps together. We evaluate the RMGMN on two public hand pose datasets. Experiment results show our model outperforms the stateoftheart algorithm which is widely used in 2D hand pose estimation by a noticeable margin.
Learning CHARME models with neural networks ; In this paper, we consider a model called CHARME Conditional Heteroscedastic Autoregressive Mixture of Experts, a class of generalized mixture of nonlinear nonparametric ARARCH time series. Under certain Lipschitztype conditions on the autoregressive and volatility functions, we prove that this model is stationary, ergodic and tauweakly dependent. These conditions are much weaker than those presented in the literature that treats this model. Moreover, this result forms the theoretical basis for deriving an asymptotic theory of the underlying nonparametric estimation, which we present for this model. As an application, from the universal approximation property of neural networks NN, we develop a learning theory for the NNbased autoregressive functions of the model, where the strong consistency and asymptotic normality of the considered estimator of the NN weights and biases are guaranteed under weak conditions.
Towards explainable metalearning ; Metalearning is a field that aims at discovering how different machine learning algorithms perform on a wide range of predictive tasks. Such knowledge speeds up hyperparameter tuning and feature engineering. With the use of surrogate models, various aspects of the predictive task, such as metafeatures, landmarker models, etc., are used to predict the expected performance. State of the art approaches focus on searching for the best metamodel but do not explain how these different aspects contribute to its performance. However, to build a new generation of metamodels we need a deeper understanding of the importance and effect of metafeatures on model tunability. In this paper, we propose techniques developed for eXplainable Artificial Intelligence XAI to examine and extract knowledge from blackbox surrogate models. To our knowledge, this is the first paper that shows how posthoc explainability can be used to improve metalearning.
Recreating Bat Behavior on Quadrotor UAVs A Simulation Approach ; We develop an effective computer model to simulate sensing environments that consist of natural trees. The simulated environments are random and contain the full geometry of the tree foliage. While this simulated model can be used as a general platform for studying the sensing mechanisms of different flying species, our ultimate goal is to build batinspired Quadrotor UAVs that can recreate bats' flying behavior e.g., obstacle avoidance, path planning in dense vegetation. To this end, we also introduce a foliage echo simulator that can produce simulated echoes by mimicking a bat's biosonar. In our current model, a few realistic model choices or assumptions are made. First, in order to create naturallooking trees, the branching structures of trees are modeled by Lsystems, whereas the detailed geometry of branches, subbranches and leaves is created by randomizing a reference tree in a CAD object file. Additionally, the foliage echo simulator is simplified so that no shading effect is considered. We demonstrate our developed model by simulating realworld scenarios with multiple trees and computing the corresponding impulse responses along a Quadrotor trajectory.
Estimation of Upsilon1S production in ep process near threshold ; The nearthreshold photoproductions of heavy quarkonia are important ways to test QCDinspired models and to constrain the gluon distribution of the nucleon in the large x region. Investigating the various models, we choose a photongluon fusion model and a pomeron exchange model for JPsi photoproduction near threshold, emphasising the explanation of the recent experimental measurement by GlueX at JLab. We find that these two models are not only valid over a wide range of the centerofmass energy of the gamma and proton, but can also be generalized to describe Upsilon1S photoproduction. Using these two models, we predict the electroproduction crosssections of Upsilon1S at EicC to be 48 fb to 85 fb at a centerofmass energy of 16.75 GeV.
PowerExpectedPosterior Priors as Mixtures of gPriors ; One of the main approaches used to construct prior distributions for objective Bayes methods is the concept of random imaginary observations. Under this setup, the expectedposterior prior EPP offers several advantages, among which a nice and simple interpretation and an effective way to establish compatibility of priors among models. In this paper, we study the powerexpectedposterior prior as a generalization of the EPP in objective Bayesian model selection under normal linear models. We prove that it can be represented as a mixture of gpriors, like a wide range of prior distributions under normal linear models, and thus posterior distributions and Bayes factors are derived in closed form, keeping therefore computational tractability. Comparisons with other mixtures of gpriors are made, and emphasis is given to the posterior distribution of g and its effect on Bayesian model selection and model averaging.
Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows ; Time series forecasting is often fundamental to scientific and engineering problems and enables decision making. With ever increasing data set sizes, a trivial solution to scale up predictions is to assume independence between interacting time series. However, modeling statistical dependencies can improve accuracy and enable analysis of interaction effects. Deep learning methods are well suited for this problem, but multivariate models often assume a simple parametric distribution and do not scale to high dimensions. In this work we model the multivariate temporal dynamics of time series via an autoregressive deep learning model, where the data distribution is represented by a conditioned normalizing flow. This combination retains the power of autoregressive models, such as good performance in extrapolation into the future, with the flexibility of flows as a general purpose highdimensional distribution model, while remaining computationally tractable. We show that it improves over the stateoftheart for standard metrics on many realworld data sets with several thousand interacting timeseries.
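A minimal sketch of the "conditioned normalizing flow" idea above: a single affine bijection whose shift and scale are functions of a conditioning vector (here a toy linear map standing in for the autoregressive RNN state), with the density obtained by the change-of-variables formula. All names, dimensions, and the linear conditioner are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def log_prob(x, cond, w_mu, w_logs):
    """Change-of-variables log-density for the affine flow
    z = (x - mu(c)) / sigma(c):  log p(x|c) = log N(z; 0, I) - sum log sigma(c)."""
    mu = cond @ w_mu                # conditioner output: shift
    log_sigma = cond @ w_logs       # conditioner output: log-scale
    z = (x - mu) / np.exp(log_sigma)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum()  # standard normal base density
    return log_base - log_sigma.sum()                     # minus log |det Jacobian^{-1}|... i.e. -sum log sigma

rng = np.random.default_rng(2)
d, c = 3, 4                         # data and conditioning dimensions (arbitrary)
w_mu = rng.normal(size=(c, d))
w_logs = rng.normal(size=(c, d)) * 0.1
x, cond = rng.normal(size=d), rng.normal(size=c)
ll = log_prob(x, cond, w_mu, w_logs)
```

A real flow would stack many such bijections with nonlinear conditioners; the single affine layer is enough to show how the conditioning vector reshapes the density while the log-likelihood stays exactly computable.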
A Distributionally Robust Area Under Curve Maximization Model ; Area under the ROC curve AUC is a widely used performance measure for classification models. We propose two new distributionally robust AUC maximization models DRAUC that rely on the Kantorovich metric and approximate the AUC with the hinge loss function. We consider two cases with respectively fixed and variable support for the worstcase distribution. We use duality theory to reformulate the DRAUC models and derive tractable convex optimization problems. The numerical experiments show that the proposed DRAUC models, benchmarked against the standard deterministic AUC and support vector machine models, perform better in general and in particular improve the worstcase outofsample performance over the majority of the considered datasets, thereby showing their robustness. The results are particularly encouraging since our numerical experiments are conducted with training sets of small size, which are known to be conducive to low outofsample performance.
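To make the hinge approximation above concrete: the exact AUC counts correctly ordered positive/negative score pairs, and the hinge surrogate replaces that 0-1 pairwise step with a convex upper bound on the misordering. This is a generic sketch of the two quantities, not the paper's full distributionally robust formulation.

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Exact AUC: fraction of (positive, negative) pairs scored in the right order,
    with ties counted as half."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return ((diff > 0) + 0.5 * (diff == 0)).mean()

def hinge_auc_loss(pos_scores, neg_scores, margin=1.0):
    """Convex pairwise surrogate related to 1 - AUC: hinge penalty whenever a
    positive does not beat a negative by at least the margin."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return np.maximum(0.0, margin - diff).mean()

pos = np.array([2.0, 1.5, 3.0])  # illustrative classifier scores for positives
neg = np.array([0.2, 0.4])       # illustrative classifier scores for negatives
```

Minimizing the hinge surrogate over model parameters is what makes AUC maximization tractable; the DRAUC models then take the worst case of this surrogate over an ambiguity set of distributions.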
SelfEnhanced GNN Improving Graph Neural Networks Using Model Outputs ; Graph neural networks GNNs have received much attention recently because of their excellent performance on graphbased tasks. However, existing research on GNNs focuses on designing more effective models without considering much about the quality of the input data. In this paper, we propose selfenhanced GNN SEG, which improves the quality of the input data using the outputs of existing GNN models for better performance on semisupervised node classification. As graph data consist of both topology and node labels, we improve input data quality from both perspectives. For topology, we observe that higher classification accuracy can be achieved when the ratio of interclass edges connecting nodes from different classes is low and propose topology update to remove interclass edges and add intraclass edges. For node labels, we propose training node augmentation, which enlarges the training set using the labels predicted by existing GNN models. SEG is a general framework that can be easily combined with existing GNN models. Experimental results validate that SEG consistently improves the performance of wellknown GNN models such as GCN, GAT and SGC across different datasets.
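A hedged sketch of the topology-update step described above: given node labels (in practice, predictions from an existing GNN; here a plain dict), drop inter-class edges and add intra-class edges. Adding every missing same-class edge is a deliberately aggressive simplification for illustration; the paper's actual update would be more selective.

```python
def topology_update(edges, labels):
    """Remove edges whose endpoints have different labels; add all missing
    edges between same-label nodes. Graph is an undirected edge list."""
    kept = {tuple(sorted(e)) for e in edges if labels[e[0]] == labels[e[1]]}
    nodes = sorted(labels)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if labels[u] == labels[v]:
                kept.add((u, v))      # intra-class edge added
    return sorted(kept)

labels = {0: "a", 1: "a", 2: "b", 3: "b"}      # stand-in for GNN-predicted classes
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]       # (1, 2) and (0, 3) are inter-class
new_edges = topology_update(edges, labels)
```

After the update only same-class connectivity remains, which is exactly the low inter-class-edge-ratio regime the abstract associates with higher classification accuracy.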
Do MultiHop Question Answering Systems Know How to Answer the SingleHop SubQuestions ; Multihop question answering QA requires a model to retrieve and integrate information from different parts of a long text to answer a question. Humans answer this kind of complex questions via a divideandconquer approach. In this paper, we investigate whether topperforming models for multihop questions understand the underlying subquestions like humans. We adopt a neural decomposition model to generate subquestions for a multihop complex question, followed by extracting the corresponding subanswers. We show that multiple stateoftheart multihop QA models fail to correctly answer a large portion of subquestions, although their corresponding multihop questions are correctly answered. This indicates that these models manage to answer the multihop questions using some partial clues, instead of truly understanding the reasoning paths. We also propose a new model which significantly improves the performance on answering the subquestions. Our work takes a step forward towards building a more explainable multihop QA system.
Photoinduced pairing in the Kondo lattice model ; The previous theoretical study has shown that pulse irradiation to the Mott insulating state in the Hubbard model can induce the enhancement of superconducting correlation due to the generation of eta pairs. Here, we show that the same mechanism can be applied to the Kondo lattice model, an effective model for heavy electron systems, by demonstrating that the pulse irradiation indeed enhances the etapairing correlation. As in the case of the Hubbard model, the nonlinear optical process is essential to increase the number of photoinduced eta pairs and thus the enhancement of the superconducting correlation. We also find the diffusive behavior of the spin dynamics after the pulse irradiation, suggesting that the increase of the number of eta pairs leads to the decoupling between the conduction band and the localized spins in the Kondo lattice model, which is inseparably related to the photodoping effect.
Improving forecasting performance using covariatedependent copula models ; Copulas provide an attractive approach for constructing multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of explicitly forecasting the taildependences. Most of the available approaches are only able to estimate taildependences and correlations via nuisance parameters, and can be used neither for interpretation nor for forecasting. Aiming to improve copula forecasting performance, we propose a general Bayesian approach for modeling and forecasting taildependences and correlations as explicit functions of covariates. The proposed covariatedependent copula model also allows for Bayesian variable selection among covariates from the marginal models as well as the copula density. The copulas we study include the JoeClayton copula, Clayton copula, Gumbel copula and Student's t copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the SP 100 and SP 600 stock indices. The forecasting performance of the proposed approach is compared with other modeling strategies based on log predictive scores. ValueatRisk evaluation is also performed for model comparisons.
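One of the copula families listed above, the Clayton copula, makes the link between parameters and tail dependence explicit: its single parameter theta directly controls the lower tail dependence lambda_L = 2^(-1/theta). The log-link mapping covariates to theta below is a hypothetical illustration of the covariate-dependent idea, not the paper's exact specification.

```python
import math

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def lower_tail_dependence(theta):
    """Closed-form lower tail dependence of the Clayton copula."""
    return 2.0 ** (-1.0 / theta)

def theta_from_covariates(x, beta=(0.1, 0.5)):
    # hypothetical log-link keeping theta positive, as a covariate-dependent
    # copula model would require
    return math.exp(beta[0] + beta[1] * x)
```

Because lambda_L is an explicit function of theta, modeling theta as a function of covariates immediately yields forecastable, interpretable tail dependence, which is the abstract's central point.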
Nonlocal Models of Cosmic Acceleration ; I review a class of nonlocally modified gravity models which were proposed to explain the current phase of cosmic acceleration without dark energy. Among the topics considered are deriving causal and conserved field equations, adjusting the model to make it support a given expansion history, why these models do not require an elaborate screening mechanism to evade solar system tests, degrees of freedom and kinetic stability, and the negative verdict of structure formation. Although these simple models are not consistent with data on the growth of cosmic structures, many of their features are likely to carry over to more complicated models which are in better agreement with the data.
Analysis and Control of Beliefs in Social Networks ; In this paper, we investigate the problem of how beliefs diffuse among members of social networks. We propose an information flow model IFM of belief that captures how interactions among members affect the diffusion and eventual convergence of a belief. The IFM includes a generalized Markov Graph GMG model as the social network model, which reveals that the diffusion of beliefs depends heavily on two characteristics of the social network, namely degree centralities and clustering coefficients. We apply the IFM to both converged belief estimation and belief control strategy optimization. The model is compared with an IFM including the BarabasiAlbert model, and is evaluated via experiments with published real social network data.
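The two network statistics singled out above, degree centrality and the local clustering coefficient, can be computed directly from an adjacency structure; the small adjacency-set graph below is illustrative data, not from the paper's experiments.

```python
def degree_centrality(adj):
    """Degree of each node normalized by the maximum possible degree n - 1."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# a triangle (0, 1, 2) plus a pendant node 3 attached to node 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

In the IFM picture, high-centrality nodes spread a belief to many members at once, while high clustering makes neighbourhoods reinforce each other, so both statistics shape the speed and endpoint of diffusion.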
The model of a level crossing with a Coulomb band: exact probabilities of nonadiabatic transitions ; We derive an exact solution of an explicitly time-dependent multichannel model of quantum mechanical nonadiabatic transitions. Our model corresponds to the case of a single linear diabatic energy level interacting with a band of an arbitrary number N of states, for which the diabatic energies decay with time according to the Coulomb law. We show that the time-dependent Schrödinger equation for this system can be solved in terms of Meijer functions whose asymptotics at large time can be written compactly in terms of elementary functions that depend on the roots of an Nth-order characteristic polynomial. Our model can be considered a generalization of the Demkov-Osherov model. In comparison to the latter, our model allows one to explore the role of the curvature of the band levels and of diabatic avoided crossings.
Modeling and Predicting Popularity Dynamics via Reinforced Poisson Processes ; An ability to predict the popularity dynamics of individual items within a complex evolving system has important implications in an array of areas. Here we propose a generative probabilistic framework using a reinforced Poisson process to model explicitly the process through which individual items gain their popularity. This model distinguishes itself from existing models via its capability of modeling the arrival process of popularity and its remarkable power at predicting the popularity of individual items. It possesses the flexibility of applying Bayesian treatment to further improve the predictive power using a conjugate prior. Extensive experiments on a longitudinal citation dataset demonstrate that this model consistently outperforms existing popularity prediction methods.
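The generative mechanism described above can be sketched with a small simulation: the event rate at time t is the product of a fitness constant, a temporal relaxation function, and a reinforcement term that grows with the number of events accumulated so far. The log-normal-shaped relaxation function and all parameter values here are illustrative assumptions; the paper's Bayesian treatment with a conjugate prior is not implemented.

```python
import numpy as np

def simulate_rpp(fitness, m, relax, t_max, dt, rng):
    """Discrete-time approximation of a reinforced Poisson process:
    event rate at time t is fitness * relax(t) * (m + n_t), where n_t is
    the number of events so far (the reinforcement) and m is a prior
    weight that lets new items attract their first events."""
    t, n, times = 0.0, 0, []
    while t < t_max:
        rate = fitness * relax(t) * (m + n)
        if rng.uniform() < rate * dt:   # at most one event per small step
            n += 1
            times.append(t)
        t += dt
    return np.array(times)

rng = np.random.default_rng(1)
# Illustrative log-normal-shaped ageing function (unnormalized)
relax = lambda t: np.exp(-(np.log(t + 1e-9) - 1.0) ** 2 / 2.0) / (t + 1e-9)
events = simulate_rpp(fitness=0.3, m=5, relax=relax, t_max=20.0, dt=0.01, rng=rng)
```

Because the rate is proportional to the current count, early events beget later ones, which is the rich-get-richer effect the model uses to explain popularity dynamics.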
Exit probability in inflow dynamics: nonuniversality induced by range, asymmetry and fluctuation ; Probing deeper into the existing issues regarding the exit probability (EP) in one-dimensional dynamical models, we consider several models where the states are represented by Ising spins and the information flows inwards. At zero temperature, these systems evolve to either of two absorbing states. The exit probability E(x), which is the probability that the system ends up with all spins up starting with a fraction x of up spins, is found to have the general form E(x) = x^alpha / (x^alpha + (1-x)^alpha). The exit probability exponent alpha depends strongly on r, the range of interaction, the symmetry of the model and the induced fluctuation. Even in a nearest-neighbour model, a nonlinear form of the EP can be obtained by controlling the fluctuations, and for the same range, different models give different results for alpha. Nonuniversal behaviour of the exit probability is thus clearly established, and the results are compared to existing studies in models with outflow dynamics to distinguish the two dynamical scenarios.
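The general form of the exit probability quoted above is easy to probe numerically; the following sketch checks its basic properties (symmetry under x -> 1-x, the linear alpha = 1 case, and monotonicity) for a few illustrative exponents.

```python
import numpy as np

def exit_probability(x, alpha):
    """General form reported above: E(x) = x^a / (x^a + (1-x)^a)."""
    xa, ya = x ** alpha, (1.0 - x) ** alpha
    return xa / (xa + ya)

x = np.linspace(0.01, 0.99, 99)
for alpha in (1.0, 2.0, 5.0):
    E = exit_probability(x, alpha)
    # Symmetry E(x) + E(1-x) = 1 holds for every alpha;
    # alpha = 1 reduces to the linear case E(x) = x.
    assert np.allclose(E + exit_probability(1 - x, alpha), 1.0)
```

Larger alpha sharpens E(x) into a step around x = 1/2, which is why the exponent, rather than the functional form, carries the model-dependent (nonuniversal) information.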
Content Modeling Using Latent Permutations ; We present a novel Bayesian topic model for learning discourselevel document structure. Our model leverages insights from discourse theory to constrain latent topic assignments in a way that reflects the underlying organization of document topics. We propose a global model in which both topic selection and ordering are biased to be similar across a collection of related documents. We show that this space of orderings can be effectively represented using a distribution over permutations called the Generalized Mallows Model. We apply our method to three complementary discourselevel tasks crossdocument alignment, document segmentation, and information ordering. Our experiments show that incorporating our permutationbased model in these applications yields substantial improvements in performance over previously proposed methods.
Superconducting proximity effect on a two-dimensional Dirac electron system ; The superconducting proximity effect on two-dimensional massless Dirac electrons is usually analyzed using a simple model consisting of the Dirac Hamiltonian and an energy-independent pair potential. Although this conventional model is plausible, it is questionable whether it can fully describe the proximity effect from a superconductor. Here, we derive a more general proximity model starting from an appropriate microscopic model for the Dirac electron system in planar contact with a superconductor. The resulting model describes the proximity effect in terms of an energy-dependent pair potential and renormalization term. Within this framework, we analyze the density of states, the quasiparticle wave function, and the charge conservation of Dirac electrons. The result reveals several characteristic features of the proximity effect which cannot be captured within the conventional model.
A thermodynamically consistent phase-field model for two-phase flows with thermocapillary effects ; In this paper, we develop a phase-field model for a binary incompressible (quasi-incompressible) fluid with thermocapillary effects, which allows for different properties (densities, viscosities and heat conductivities) of each component while maintaining thermodynamic consistency. The governing equations of the model, including the Navier-Stokes equations with an additional stress term, the Cahn-Hilliard equations and the energy balance equation, are derived within a thermodynamic framework based on entropy generation, which guarantees thermodynamic consistency. A sharp-interface limit analysis is carried out to show that the interfacial conditions of the classical sharp-interface models can be recovered from our phase-field model. Moreover, some numerical examples, including thermocapillary convection in a two-layer fluid system and thermocapillary migration of a drop, are computed using a continuous finite element method. The results are compared to the corresponding analytical solutions and existing numerical results as validations for our model.
A Prediction Model for System Testing Defects using Regression Analysis ; This research describes the initial effort of building a prediction model for defects in system testing carried out by an independent testing team. The motivation for such a defect prediction model is to serve as an early quality indicator of the software entering system testing and to assist the testing team in managing and controlling test execution activities. Metrics collected from the phases prior to system testing are identified and analyzed to determine the potential predictors for building the model. The selected metrics are then put into regression analysis to generate several mathematical equations. The mathematical equation that has a p-value of less than 0.05, with R-squared and adjusted R-squared of more than 90%, is selected as the desired prediction model for system testing defects. This model is verified using new projects to confirm that it is fit for actual implementation.
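The selection criterion above (regression fit with R-squared and adjusted R-squared above the threshold) can be sketched as follows. The predictor metrics and data here are synthetic placeholders; the sketch computes R-squared and adjusted R-squared from an ordinary least squares fit, while the p-value check would additionally require an F-test on the regression.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with R-squared and adjusted R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    n, p = X1.shape
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)     # penalize extra predictors
    return beta, r2, r2_adj

rng = np.random.default_rng(2)
# Hypothetical metrics from phases prior to system testing
# (e.g. review defect counts, code churn) -- synthetic data
X = rng.normal(size=(40, 2))
y = 3.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.1, size=40)
beta, r2, r2_adj = fit_ols(X, y)
accepted = r2 > 0.90 and r2_adj > 0.90   # the >90% acceptance rule described above
```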
Flexible behavioral capture-recapture modelling ; We develop new strategies for building and fitting flexible classes of parametric capture-recapture models for closed populations, which can be used to gain a better understanding of behavioural patterns. We first rely on a conditional probability parameterization and review how a large subset of standard capture-recapture models can be regarded as a suitable partitioning into equivalence classes of the full set of conditional probability parameters. We then propose the use of new suitable quantifications of the conditioning binary partial capture histories as a device for enlarging the scope of flexible behavioural models and also exploring the range of all possible partitions. We show how one can easily find unconditional MLEs of such models within a generalized linear model framework. We illustrate the potential of our approach with the analysis of some known datasets and a simulation study.
A Shannon Approach to Secure Multiparty Computations ; In secure multiparty computations (SMC), parties wish to compute a function on their private data without revealing more information about their data than what the function reveals. In this paper, we investigate two Shannon-type questions on this problem. We first consider the traditional one-shot model for SMC, which does not assume a probabilistic prior on the data. In this model, private communication and randomness are the key enablers to secure computing, and we investigate a notion of randomness cost and capacity. We then move to a probabilistic model for the data, and propose a Shannon model for discrete memoryless SMC. In this model, correlations among data are the key enablers for secure computing, and we investigate a notion of dependency which permits the secure computation of a function. While the models and questions are general, this paper focuses on summation functions, and relies on polar code constructions.
Marginal and simultaneous predictive classification using stratified graphical models ; An inductive probabilistic classification rule must generally obey the principles of Bayesian predictive inference, such that all observed and unobserved stochastic quantities are jointly modeled and the parameter uncertainty is fully acknowledged through the posterior predictive distribution. Several such rules have been recently considered and their asymptotic behavior has been characterized under the assumption that the observed features or variables used for building a classifier are conditionally independent given a simultaneous labeling of both the training samples and those from an unknown origin. Here we extend the theoretical results to predictive classifiers acknowledging feature dependencies, either through graphical models or through sparser alternatives defined as stratified graphical models. We also show, through experimentation with both synthetic and real data, that the predictive classifiers based on stratified graphical models consistently have the best accuracy compared with predictive classifiers based on either conditionally independent features or ordinary graphical models.
Structured Prediction of Sequences and Trees using Infinite Contexts ; Linguistic structures exhibit a rich array of global phenomena; however, commonly used Markov models are unable to adequately describe these phenomena due to their strong locality assumptions. We propose a novel hierarchical model for structured prediction over sequences and trees which exploits global context by conditioning each generation decision on an unbounded context of prior decisions. This builds on the success of Markov models but without imposing a fixed bound, in order to better represent global phenomena. To facilitate learning of this large and unbounded model, we use a hierarchical Pitman-Yor process prior which provides a recursive form of smoothing. We propose prediction algorithms based on A* search and Markov chain Monte Carlo sampling. Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing.
Convolutional Neural Network Architectures for Matching Natural Language Sentences ; Semantic matching is of central importance to many natural language tasks. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge of language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed models and their superiority to competitor models.
Spectral solution of urn models for interacting particle systems ; Using generating function methods for diagonalizing the transition matrix in 2-urn models, we provide a complete classification into solvable and unsolvable subclasses, with a further division of the solvable models into martingale and non-martingale subcategories, and prove that the stationary distribution is a Gaussian function in the latter. We also give a natural condition, related to the symmetry of the random walk, under which the non-martingale urn models lead to an increase in entropy from Gaussian states. The condition also shows that universal symmetry in the macrostate is equivalent to increasing entropy. Certain models of social opinion dynamics, treated as urn models, do not increase in entropy, unlike isolated mechanical systems.
The Knowledge Gradient Policy Using A Sparse Additive Belief Model ; We propose a sequential learning policy for noisy discrete global optimization and ranking and selection (R&S) problems with high-dimensional sparse belief functions, where there are hundreds or even thousands of features, but only a small portion of these features contain explanatory power. We aim to identify the sparsity pattern and select the best alternative before the finite budget is exhausted. We derive a knowledge gradient policy for sparse linear models (KGSpLin) with a group Lasso penalty. This policy is a unique and novel hybrid of Bayesian R&S with frequentist learning. In particular, our method naturally combines B-spline basis expansion and generalizes to the nonparametric additive model (KGSpAM) and functional ANOVA model. Theoretically, we provide estimation error bounds for the posterior mean estimate and the functional estimate. Controlled experiments show that the algorithm efficiently learns the correct set of nonzero parameters even when the model is embedded with hundreds of dummy parameters. It also outperforms the knowledge gradient for a linear model.
Modeling context and situations in pervasive computing environments ; In pervasive computing environments, various entities often have to cooperate and integrate seamlessly in a situation, which can thus be considered an amalgamation of the context of several entities interacting and coordinating with each other, and often performing one or more activities. However, none of the existing context models and ontologies address situation modeling. In this paper, we describe the design, structure and implementation of a generic, flexible and extensible context ontology called the Rover Context Model Ontology (RoCoMO) for context and situation modeling in pervasive computing systems and environments. We highlight several limitations of the existing context models and ontologies, such as the lack of provision for provenance, traceability, quality of context, and multiple representations of contextual information, as well as support for security, privacy and interoperability, and explain how we address these limitations in our approach. We also illustrate the applicability and utility of RoCoMO using a practical and extensive case study.
Stable Feature Selection from Brain sMRI ; Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifestation of the reproducibility of statistical results subject to reasonable perturbations of the data and the model, is an important focus in statistics, especially in the analysis of high-dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer's disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better at exploring the intrinsic structure of data by selecting stable features, compared with other state-of-the-art methods.
Double multiple-relaxation-time lattice Boltzmann model for solid-liquid phase change with natural convection in porous media ; In this paper, a double multiple-relaxation-time lattice Boltzmann model is developed for simulating transient solid-liquid phase change problems in porous media at the representative elementary volume scale. The model uses two different multiple-relaxation-time lattice Boltzmann equations, one for the flow field and the other for the temperature field with a nonlinear latent heat source term. The model is based on the generalized non-Darcy formulation, and the solid-liquid phase change interface is traced through the liquid fraction, which is determined by the enthalpy method. The model is validated by numerical simulations of conduction melting in a semi-infinite space, solidification in a semi-infinite corner, and convection melting in a square cavity filled with porous media. The numerical results demonstrate the efficiency and accuracy of the present model for simulating transient solid-liquid phase change problems in porous media.
Lepton flavor violation beyond the MSSM ; Most extensions of the Standard Model lepton sector predict large lepton flavor violating rates. Given the promising experimental perspectives for lepton flavor violation in the next few years, this generic expectation might offer a powerful indirect probe to look for new physics. In this review we will cover several aspects of lepton flavor violation in supersymmetric models beyond the Minimal Supersymmetric Standard Model. In particular, we will concentrate on three different scenarios: high-scale and low-scale seesaw models as well as models with R-parity violation. We will see that in some cases the LFV phenomenology can have characteristic features for specific scenarios, implying that dedicated studies must be performed in order to correctly understand the phenomenology in nonminimal supersymmetric models.
MORE: Merged Opinions Reputation Model ; Reputation is generally defined as the opinion of a group on an aspect of a thing. This paper presents a reputation model that follows a probabilistic modelling of opinions based on three main concepts: (1) the value of an opinion decays with time, (2) the reputation of the opinion source impacts the reliability of the opinion, and (3) the certainty of the opinion impacts its weight with respect to other opinions. Furthermore, the model is flexible with its opinion sources: it may use explicit opinions or implicit opinions that can be extracted from agent behavior in domains where explicit opinions are sparse. We illustrate the latter with an approach to extract opinions from behavioral information in the sports domain, focusing on football in particular. One of the uses of a reputation model is predicting behavior. We take up the challenge of predicting the behavior of football teams in football matches, which we argue is a very interesting yet difficult approach for evaluating the model.
On the micro-to-macro limit for first-order traffic flow models on networks ; Connections between microscopic follow-the-leader and macroscopic fluid-dynamic traffic flow models are already well understood in the case of vehicles moving on a single road. Analogous connections in the case of road networks are instead lacking. This is probably due to the fact that macroscopic traffic models on networks are in general ill-posed, since conservation of mass alone is not sufficient to characterize a unique solution at junctions. This ambiguity makes it more difficult to find the right limit of the microscopic model, which, in turn, can be defined in different ways near the junctions. In this paper we show that a natural extension of the first-order follow-the-leader model on networks corresponds, as the number of vehicles tends to infinity, to the LWR-based multipath model introduced in Bretti et al., Discrete Contin. Dyn. Syst. Ser. S, 7 (2014) and Briani and Cristiani, Netw. Heterog. Media, 9 (2014).
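A first-order follow-the-leader model of the kind discussed above can be sketched on a single road: each vehicle's speed is a function of the headway to the vehicle ahead. The velocity law v(h) = v_max (1 - L/h) and all parameters are illustrative assumptions, and the network/junction setting of the paper is not modeled here.

```python
import numpy as np

def ftl_step(x, dt, v_max, length):
    """One explicit Euler step of a first-order follow-the-leader model.
    x holds vehicle positions in increasing order; each follower's speed
    depends on the headway h to the car ahead via v(h) = v_max*(1 - length/h),
    while the leader (last car) drives freely at v_max."""
    h = np.diff(x)                        # headways between consecutive cars
    v = np.clip(v_max * (1.0 - length / h), 0.0, v_max)
    x_new = x.copy()
    x_new[:-1] += dt * v                  # followers
    x_new[-1] += dt * v_max               # leader
    return x_new

x = np.linspace(0.0, 50.0, 26)            # 26 cars with initial headway 2
for _ in range(200):
    x = ftl_step(x, dt=0.01, v_max=1.0, length=1.0)
```

The macroscopic (LWR-type) description emerges from models like this as the number of vehicles tends to infinity with suitably rescaled length.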
Dynamical Quantum Phase Transitions in the Kitaev Honeycomb Model ; The notion of a dynamical quantum phase transition (DQPT) was recently introduced in Heyl et al., Phys. Rev. Lett. 110, 135704 (2013) as the nonanalytic behavior of the Loschmidt echo at critical times in the thermodynamic limit. In this work the quench dynamics in the ground state sector of the two-dimensional Kitaev honeycomb model are studied with regard to the occurrence of DQPTs. For general two-dimensional systems of BCS type, it is demonstrated how the zeros of the Loschmidt echo coalesce to areas in the thermodynamic limit, implying that DQPTs occur as discontinuities in the second derivative. In the Kitaev honeycomb model, DQPTs appear after quenches across a phase boundary or within the massless phase. In the 1D limit of the Kitaev honeycomb model it becomes clear that the discontinuity in the higher derivative is intimately related to the higher dimensionality of the nondegenerate model. Moreover, there is a strong connection between the stationary value of the rate function of the Loschmidt echo after long times and the occurrence of DQPTs in this model.
CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research ; Saliency modeling has been an active research area in computer vision for about two decades. Existing state-of-the-art models perform very well in predicting where people look in natural scenes. There is, however, the risk that these models may have overfit to the available small-scale biased datasets, thus trapping the progress in a local minimum. To gain a deeper insight into current issues in saliency modeling and to better gauge progress, we recorded the eye movements of 120 observers while they freely viewed a large number of naturalistic and artificial images. Our stimuli include 4000 images: 200 from each of 20 categories covering different types of scenes such as Cartoons, Art, Objects, Low resolution images, Indoor, Outdoor, Jumbled, Random, and Line drawings. We analyze some basic properties of this dataset and compare some successful models. We believe that our dataset opens new challenges for the next generation of saliency models and helps conduct behavioral studies on bottom-up visual attention.
B_s^0 - anti-B_s^0 mixing within minimal flavor-violating two-Higgs-doublet models ; In the Higgs basis for a generic 2HDM, only one scalar doublet gets a nonzero vacuum expectation value and, under the criterion of minimal flavor violation, the other one is fixed to be either color-singlet or color-octet; the two cases are named the type-III and type-C models, respectively. In this paper, the charged-Higgs effects of these two models on B_s^0 - anti-B_s^0 mixing are studied. First, we perform a complete one-loop computation of the electroweak corrections to the amplitudes of B_s^0 - anti-B_s^0 mixing. Together with up-to-date experimental measurements, a detailed phenomenological analysis is then performed for the cases of both real and complex Yukawa couplings of the charged scalars to quarks. The regions of model parameters allowed by the current experimental data on B_s^0 - anti-B_s^0 mixing are obtained, and the differences between the type-III and type-C models are investigated, which is helpful for distinguishing between these two models.
Mapping the Chevallier-Polarski-Linder parametrization onto Physical Dark Energy Models ; We examine the Chevallier-Polarski-Linder (CPL) parametrization, in the context of quintessence and barotropic dark energy models, to determine the subset of such models to which it can provide a good fit. The CPL parametrization gives the equation of state parameter w for the dark energy as a linear function of the scale factor a, namely w = w0 + wa(1 - a). In the case of quintessence models, we find that over most of the (w0, wa) parameter space the CPL parametrization maps onto a fairly narrow form of behavior for the potential V(phi), while a one-dimensional subset of parameter space, for which wa = kappa(1 + w0) with kappa constant, corresponds to a wide range of functional forms for V(phi). For barotropic models, we show that the functional dependence of the pressure on the density, up to a multiplicative constant, depends only on wi = w0 + wa and not on w0 and wa separately. Our results suggest that the CPL parametrization may not be optimal for testing either type of model.
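The CPL equation of state above is simple enough to write down directly; the sketch below evaluates w(a), its equivalent redshift form w(z) = w0 + wa*z/(1+z) (which follows from a = 1/(1+z)), and the one-parameter subset wa = kappa(1 + w0) with an illustrative value of kappa.

```python
import numpy as np

def w_cpl(a, w0, wa):
    """CPL equation of state: w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def w_cpl_z(z, w0, wa):
    """Same parametrization in redshift, using a = 1/(1+z):
    w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

# w today (a = 1) is w0; at early times (a -> 0) w tends to wi = w0 + wa
assert w_cpl(1.0, -1.0, 0.3) == -1.0

a = np.linspace(0.0, 1.0, 101)
w = w_cpl(a, -0.9, 0.2)

# The one-dimensional quintessence subset discussed above: wa = kappa*(1 + w0)
kappa = 1.5          # illustrative value; kappa is a free constant in the paper
w0 = -0.9
wa_sub = kappa * (1.0 + w0)
```

Note that the barotropic statement in the abstract is visible here too: the early-time limit of w(a) is exactly wi = w0 + wa.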
Rao-Blackwellized particle smoothers for conditionally linear Gaussian models ; Sequential Monte Carlo (SMC) methods, such as the particle filter, are by now one of the standard computational techniques for addressing the filtering problem in general state-space models. However, many applications require post-processing of data offline. In such scenarios the smoothing problem, in which all the available data is used to compute state estimates, is of central interest. We consider the smoothing problem for a class of conditionally linear Gaussian models. We present a forward-backward-type Rao-Blackwellized particle smoother (RBPS) that is able to exploit the tractable substructure present in these models. Akin to the well-known Rao-Blackwellized particle filter, the proposed RBPS marginalizes out a conditionally tractable subset of state variables, effectively making use of SMC only for the intractable part of the model. Compared to existing RBPS, two key features of the proposed method are: (i) it does not require structural approximations of the model, and (ii) the aforementioned marginalization is done both in the forward direction and in the backward direction.
Flavour changing Z' signals in a 6D inspired model ; We consider the phenomenology of new neutral gauge bosons with flavour non-diagonal couplings to fermions, inherent in 6D models that successfully explain the hierarchy of masses as well as the mixing for quarks, charged leptons and neutrinos; this model can in particular be credited with the correct prediction of the neutrino mixing angle theta_13. We present a general relation between the masses of the new gauge bosons and their couplings to fermions. We show that in the current realization of the model the new heavy bosons are unreachable at the LHC, but argue why the constraint could be relaxed in the context of a different realization. In view of a more systematic study, we use an effective model inspired by the above to relate rare meson decays directly to possible LHC observations. In terms of effective Lagrangians, this can be seen as the introduction into the model of only one overall scaling parameter, extending our approach without modifying the 4D gauge structure.
Risks aggregation in multivariate dependent Pareto distributions ; In this paper we obtain closed expressions for the probability distribution function of aggregated risks with multivariate dependent Pareto distributions. We work with the dependent multivariate Pareto type II proposed by Arnold (1983, 2015), which is widely used in insurance and risk analysis. We begin with the individual risk model, where we obtain the probability density function (PDF), which corresponds to a second kind beta distribution. We obtain several risk measures including the VaR, TVaR and other tail measures. Then, we consider the collective risk model based on dependence, where several general properties are studied. We study in detail some relevant collective models with Poisson, negative binomial and logarithmic distributions as primary distributions. In the collective Pareto-Poisson model, the PDF is a function of the Kummer confluent hypergeometric function, and in the Pareto-negative binomial it is a function of the Gauss hypergeometric function. Using a data set based on one-year vehicle insurance policies taken out in 2004-2005 (Jong and Heller, 2008), we conclude that our collective dependent models outperform the classical collective models (Poisson-exponential and geometric-exponential) in terms of the AIC and CAIC statistics.
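A Monte Carlo sketch of a Pareto-Poisson collective model helps fix the objects involved (aggregate loss S, VaR, TVaR). Note one deliberate simplification: the paper's contribution is precisely the *dependent* multivariate Pareto, whereas the claims below are drawn independently as a baseline; the parameter values are illustrative.

```python
import numpy as np

def pareto_ii(n, alpha, sigma, rng):
    """Pareto type II (Lomax) draws via inverse transform:
    F(x) = 1 - (1 + x/sigma)^(-alpha)."""
    u = rng.uniform(size=n)
    return sigma * (u ** (-1.0 / alpha) - 1.0)

def collective_losses(n_sims, lam, alpha, sigma, rng):
    """Aggregate S = X_1 + ... + X_N with N ~ Poisson(lam).
    The X_i are drawn independently here, unlike the dependent
    model of the paper -- this is a simplifying baseline."""
    counts = rng.poisson(lam, size=n_sims)
    return np.array([pareto_ii(n, alpha, sigma, rng).sum() for n in counts])

rng = np.random.default_rng(3)
S = collective_losses(20_000, lam=2.0, alpha=3.0, sigma=1.0, rng=rng)
var_95 = np.quantile(S, 0.95)            # Value-at-Risk at the 95% level
tvar_95 = S[S >= var_95].mean()          # Tail Value-at-Risk (mean beyond VaR)
```

Since E[X] = sigma/(alpha - 1) = 0.5 for a Lomax claim, the aggregate mean here is lam * 0.5 = 1, which gives a quick sanity check on the simulation.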
An objective prior that unifies objective Bayes and information-based inference ; There are three principal paradigms of statistical inference: (i) Bayesian, (ii) information-based and (iii) frequentist inference. We describe an objective prior (the weighting or w-prior) which unifies objective Bayes and information-based inference. The w-prior is chosen to make the marginal probability an unbiased estimator of the predictive performance of the model. This definition has several other natural interpretations. From the perspective of the information content of the prior, the w-prior is both uniformly and maximally uninformative. The w-prior can also be understood to result in a uniform density of distinguishable models in parameter space. Finally, we demonstrate that the w-prior is equivalent to the Akaike Information Criterion (AIC) for regular models in the asymptotic limit. The w-prior appears to be generically applicable to statistical inference and is free of ad hoc regularization. The mechanism for suppressing complexity is analogous to AIC: model complexity reduces model predictivity. We expect this new objective-Bayes approach to inference to be widely applicable to machine-learning problems, including singular models.
Evolution of entropic dark energy and its phantom nature ; Assuming the form of the entropic dark energy that arises from the surface term in the Einstein-Hilbert action, its evolution is analyzed in an expanding flat universe. The model parameters are evaluated by constraining the model using the Union data on Type Ia supernovae. We find that the model predicts an early decelerated phase and a later accelerated phase at the background level. The evolution of the Hubble parameter, dark energy density, equation of state parameter and deceleration parameter is obtained. The model is diagnosed with the Om parameter. The model hardly seems to support the linear perturbation growth required for structure formation. We also find that the entropic dark energy shows a phantom nature for redshifts z < 0.257. During the phantom epoch, the model predicts a big-rip effect, at which both the scale factor of expansion and the dark energy density become infinitely large, and the big-rip time is found to be around 36 gigayears from now.
On the Equivalence of Factorized Information Criterion Regularization and the Chinese Restaurant Process Prior ; The Factorized Information Criterion (FIC) is a recently developed information criterion, based on which a novel model selection methodology, namely Factorized Asymptotic Bayesian (FAB) inference, has been developed and successfully applied to various hierarchical Bayesian models. The Dirichlet process (DP) prior, and one of its well known representations, the Chinese Restaurant Process (CRP), derive another line of model selection methods. FIC can be viewed as a prior distribution over the latent variable configurations. Under this view, we prove that when the parameter dimensionality Dc = 2, FIC is equivalent to CRP. We argue that when Dc > 2, FIC avoids an inherent problem of the DP/CRP, namely that the data likelihood will dominate the impact of the prior, and thus the model selection capability will weaken as Dc increases. However, FIC overestimates the data likelihood. As a result, FIC may be overly biased towards models with fewer components. We propose a natural generalization of FIC which finds a middle ground between CRP and FIC, and may yield more accurate model selection results than FIC.
Slave fermion formalism for the tetrahedral spin chain ; We use the SU(2) slave fermion approach to study a tetrahedral spin-1/2 chain, which is a one-dimensional generalization of the two-dimensional Kitaev honeycomb model. Using mean field theory, coupled with a gauge fixing procedure to implement the single occupancy constraint, we obtain the phase diagram of the model. We then show that it matches the exact results obtained earlier using the Majorana fermion representation. We also compute the spin-spin correlation in the gapless phase and show that it is a spin liquid. Finally, we map the one-dimensional model, in terms of the slave fermions, to a model of a 1D p-wave superconductor with complex parameters, and show that the parameters of our model fall in the topologically trivial regime, so the model does not have edge Majorana modes.
Dynamic Spatial Autoregressive Models with Autoregressive and Heteroskedastic Disturbances ; We propose a new class of models specifically tailored for spatiotemporal data analysis. To this end, we generalize the spatial autoregressive model with autoregressive and heteroskedastic disturbances, i.e. SARAR(1,1), by exploiting recent advancements in Score Driven (SD) models typically used in time series econometrics. In particular, we allow for time-varying spatial autoregressive coefficients as well as time-varying regressor coefficients and cross-sectional standard deviations. We report an extensive Monte Carlo simulation study in order to investigate the finite sample properties of the Maximum Likelihood estimator for the new class of models, as well as its flexibility in explaining several dynamic spatial dependence processes. The newly proposed class of models is found to be economically preferred by rational investors through an application in portfolio optimization.
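Score-driven dynamics of the kind exploited above can be illustrated in their simplest setting: a time-varying Gaussian mean with unit variance, where the scaled score reduces to the prediction error. This is a generic GAS(1,1) recursion with illustrative coefficients omega, a, b, not the paper's SARAR(1,1) specification:

```python
def score_driven_mean(y, omega=0.0, a=0.3, b=0.9):
    """GAS(1,1) filter for a time-varying Gaussian mean with unit variance.

    Recursion: f_{t+1} = omega + a * s_t + b * f_t, where the scaled
    score s_t = y_t - f_t is just the prediction error in this model.
    """
    f = omega / (1.0 - b)  # start at the unconditional level
    path = []
    for yt in y:
        path.append(f)
        score = yt - f     # d log N(y_t | f_t, 1) / d f_t
        f = omega + a * score + b * f
    return path

# the filter adapts to a level shift in the data
path = score_driven_mean([0.0] * 50 + [5.0] * 200)
```

With these coefficients the filtered level mean-reverts to a * y / (1 - b + a) for constant data, so the recursion tracks shifts without matching them one-for-one; the same observation-driven mechanism updates the spatial and regressor coefficients in the full model.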
Evaluation of Protein Structural Models Using Random Forests ; Protein structure prediction has been a grand challenge problem in structural biology over the last few decades. Protein quality assessment plays a very important role in protein structure prediction. In this paper, we propose a new protein quality assessment method which can predict both local and global quality of protein 3D structural models. Our method uses both multi-model and single-model quality assessment approaches for global quality assessment, and uses chemical, physical, and geometrical features, together with the global quality score, for local quality assessment. CASP9 targets are used to generate the features for local quality assessment. We evaluate the performance of our local quality assessment method on CASP10, where it is comparable with two state-of-the-art QA methods in terms of the average absolute difference between real and predicted distances. In addition, we blindly tested our method on CASP11; its good performance shows that combining single- and multi-model quality assessment could be a good way to improve the accuracy of model quality assessment, and that the random forest technique can be used to train a good local quality assessment model.
Stochastic orders and the frog model ; The frog model starts with one active particle at the root of a graph and some number of dormant particles at all nonroot vertices. Active particles follow independent random paths, waking all inactive particles they encounter. We prove that certain frog model statistics are monotone in the initial configuration for two nonstandard stochastic dominance relations the increasing concave and the probability generating function orders. This extends many canonical theorems. We connect recurrence for random initial configurations to recurrence for deterministic configurations. Also, the limiting shape of activated sites on the integer lattice respects both of these orders. Other implications include monotonicity results on transience of the frog model where the number of frogs per vertex decays away from the origin, on survival of the frog model with death, and on the time to visit a given vertex in any frog model.
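The dynamics described above are easy to simulate; a minimal sketch on an integer segment with one sleeping frog per nonroot vertex (an illustrative special case, not the random or decaying initial configurations the theorems cover):

```python
import random

def frog_model_first_visit(n, target, rng, max_steps=100_000):
    """Frog model on {-n, ..., n}: one active frog at the root 0, one
    sleeping frog at every other vertex. Returns the first step at which
    `target` is occupied by an active frog.

    Active frogs perform independent simple random walks (reflected at
    the segment ends) and wake every sleeping frog they land on.
    """
    if target == 0:
        return 0
    sleeping = {v for v in range(-n, n + 1) if v != 0}
    active = [0]
    for step in range(1, max_steps + 1):
        moved = []
        for pos in active:
            pos += rng.choice((-1, 1))
            pos = max(-n, min(n, pos))  # reflect at the boundary
            if pos in sleeping:         # wake the frog sitting here
                sleeping.discard(pos)
                moved.append(pos)       # the woken frog moves from next step
            moved.append(pos)
        active = moved
        if any(p == target for p in active):
            return step
    return max_steps

rng = random.Random(1)
t_hit = frog_model_first_visit(5, 3, rng)
```

Statistics such as this first-visit time are exactly the kind of quantity the stochastic dominance results above show to be monotone in the initial configuration.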
A class of linear viscoelastic models based on Bessel functions ; In this paper we investigate a general class of linear viscoelastic models whose creep and relaxation memory functions are expressed in the Laplace domain by suitable ratios of modified Bessel functions of contiguous orders. In the time domain these functions are shown to be expressed by Dirichlet series, that is, infinite Prony series. It follows that the corresponding creep compliance and relaxation modulus turn out to be characterized by infinite discrete spectra of retardation and relaxation times, respectively. As a matter of fact, we get a class of viscoelastic models depending on a real parameter ν > -1. Such models exhibit rheological properties akin to those of a fractional Maxwell model of order 1/2 for short times and of a standard Maxwell model for long times.
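The Prony-series structure mentioned above can be evaluated directly once a (truncated) retardation spectrum is given; the coefficients below are illustrative placeholders, not the Bessel-derived spectra of the paper:

```python
import math

def prony_creep(t, Jg, terms):
    """Creep compliance from a truncated Prony (Dirichlet) series:

        J(t) = Jg + sum_k a_k * (1 - exp(-t / tau_k))

    `terms` is a list of (a_k, tau_k) pairs; the models above have
    infinitely many such terms, here we keep only a few (toy values).
    """
    return Jg + sum(a * (1.0 - math.exp(-t / tau)) for a, tau in terms)

# hypothetical discrete retardation spectrum spanning three decades
terms = [(0.5, 0.1), (0.3, 1.0), (0.2, 10.0)]
J0 = prony_creep(0.0, 1.0, terms)    # glassy (instantaneous) compliance Jg
Jinf = prony_creep(1e6, 1.0, terms)  # long-time limit Jg + sum of a_k
```

The monotone increase from J0 to Jinf mirrors the retarded elastic response that the infinite discrete spectrum encodes.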
Efficient computation of Sobol' indices for stochastic models ; Stochastic models are necessary for the realistic description of an increasing number of applications. The ability to identify influential parameters and variables is critical to a thorough analysis and understanding of the underlying phenomena. We present a new global sensitivity analysis approach for stochastic models, i.e., models with both uncertain parameters and intrinsic stochasticity. Our method relies on an analysis of variance through a generalization of Sobol' indices and on the use of surrogate models. We show how to efficiently compute the statistical properties of the resulting indices and illustrate the effectiveness of our approach by computing first order Sobol' indices for two stochastic models.
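For a deterministic model, the first-order Sobol' index S_i = Var(E[Y|X_i]) / Var(Y) can be estimated with the standard pick-freeze scheme; a minimal sketch on a toy function (this deliberately omits the paper's contributions, namely intrinsic stochasticity and surrogate models):

```python
import random

def sobol_first_order(f, d, i, n, rng):
    """Pick-freeze estimator of the first-order Sobol' index of input i
    for Y = f(X) with X uniform on [0, 1]^d:

        S_i ~ (mean(y * y') - mean(y)^2) / var(y),

    where y' is computed after re-sampling every coordinate except X_i.
    """
    ya, yab = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        xb = [rng.random() for _ in range(d)]
        xb[i] = x[i]  # freeze coordinate i across the two evaluations
        ya.append(f(x))
        yab.append(f(xb))
    m = sum(ya) / n
    var = sum((y - m) ** 2 for y in ya) / n
    cov = sum(p * q for p, q in zip(ya, yab)) / n - m * m
    return cov / var

# toy model: Y = X0 + 0.1 * X1, so X0 carries almost all the variance
rng = random.Random(0)
s0 = sobol_first_order(lambda x: x[0] + 0.1 * x[1], d=2, i=0, n=20000, rng=rng)
```

For this toy function the true index is 1/1.01, close to 1, reflecting that almost all output variance is attributable to X0 alone.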
Modeling and stabilization results for a charge- or current-actuated active constrained layer (ACL) beam model with the electrostatic assumption ; An infinite-dimensional model for a three-layer active constrained layer (ACL) beam, consisting of a piezoelectric elastic layer at the top and an elastic host layer at the bottom constraining a viscoelastic layer in the middle, is obtained for clamped-free boundary conditions by using a thorough variational approach. The Rao-Nakra thin compliant layer approximation is adopted to model the sandwich structure, and the electrostatic approach (magnetic effects are ignored) is assumed for the piezoelectric layer. Instead of voltage actuation, the piezoelectric layer is proposed to be activated by a charge or current source. We show that the closed-loop system with all-mechanical feedback is uniformly exponentially stable. Our result is the outcome of a compact perturbation argument and a unique continuation result for the spectral problem, which relies on the multipliers method. Finally, the modeling methodology of the paper is generalized to multilayer ACL beams, and the uniform exponential stabilizability result is established analogously.
Modelling of lung cancer survival data for critical illness insurances ; We derive a general multiple state model for critical illness insurances. In contrast to the classical model, we take into account that the probability of death for a dread disease sufferer may depend on the duration of the disease, and that the payment of benefits associated with a severe disease depends not only on the diagnosis but also on the disease stage. We apply the introduced model to the analysis of a critical illness insurance against the risk of lung cancer. Based on real data for the Lower Silesian Voivodship in Poland, we estimate the transition matrix of the related discrete-time Markov model. The obtained probabilistic structure of the model can be directly used to cost not only critical illness insurances and life insurances with an accelerated death benefits option, but also viatical settlement contracts.
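Once the transition matrix of the discrete-time Markov model is estimated, the multi-period probabilities needed for costing follow from matrix powers. A minimal sketch with an illustrative three-state chain (toy numbers, not the Lower Silesian estimates):

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def k_step(P, k):
    """k-step transition probabilities P^k of a discrete-time Markov chain."""
    n = len(P)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = mat_mult(R, P)
    return R

# states: 0 = healthy, 1 = ill (dread disease), 2 = dead (absorbing);
# purely hypothetical one-year transition probabilities
P = [[0.95, 0.03, 0.02],
     [0.00, 0.80, 0.20],
     [0.00, 0.00, 1.00]]
P10 = k_step(P, 10)  # 10-year occupancy probabilities by starting state
```

Entries such as P10[0][1] (probability a healthy insured is ill at year 10) are the building blocks for expected present values of diagnosis and death benefits; duration dependence, as in the model above, is handled by further splitting the ill state by time since diagnosis.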
HNP3: A Hierarchical Nonparametric Point Process for Modeling Content Diffusion over Social Media ; This paper introduces a novel framework for modeling temporal events with complex longitudinal dependency that are generated by dependent sources. This framework takes advantage of multidimensional point processes for modeling the times of events. The intensity function of the proposed process is a mixture of intensities, and its complexity grows with the complexity of the temporal patterns of the data. Moreover, it utilizes a hierarchical dependent nonparametric approach to model the marks of events. These capabilities allow the proposed model to adapt its temporal and topical complexity to the complexity of the data, which makes it a suitable candidate for real-world scenarios. An online inference algorithm is also proposed that makes the framework applicable to a vast range of applications. The framework is applied to a real-world problem, modeling the diffusion of content over networks. Extensive experiments reveal the effectiveness of the proposed framework in comparison with state-of-the-art methods.
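Temporal point processes of the kind modeled above are commonly simulated by Ogata's thinning algorithm; a univariate Hawkes process with exponential kernel is the standard textbook case (illustrative parameters; this is the generic simulation algorithm, not the HNP3 inference procedure):

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata thinning for a Hawkes process on [0, T] with intensity

        lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).

    Requires alpha < beta so the branching ratio alpha/beta is < 1.
    """
    events = []
    t = 0.0
    while t < T:
        # current intensity bounds lambda on (t, next event): it only
        # decays until a new point is accepted
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # accepted point; it excites future intensity
    return events

rng = random.Random(42)
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=100.0, rng=rng)
```

The self-exciting intensity produces the bursty, clustered event times characteristic of content diffusion; a mixture of such intensities, as in the abstract, lets different sources share and reuse temporal patterns.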
Computational linking theory ; A linking theory explains how verbs' semantic arguments are mapped to their syntactic arguments, the inverse of the Semantic Role Labeling task from the shallow semantic parsing literature. In this paper, we develop the Computational Linking Theory framework as a method for implementing and testing linking theories proposed in the theoretical literature. We deploy this framework to assess two cross-cutting types of linking theory: local vs. global models and categorical vs. featural models. To further investigate the behavior of these models, we develop a measurement model in the spirit of previous work in semantic role induction: the Semantic Proto-Role Linking Model. We use this model, which implements a generalization of Dowty's seminal Proto-Role Theory, to induce semantic proto-roles, which we compare to those Dowty proposes.