IMRPhenomXHM: A multimode frequency-domain model for the gravitational wave signal from non-precessing black-hole binaries ; We present the IMRPhenomXHM frequency-domain phenomenological waveform model for the inspiral, merger and ringdown of quasi-circular non-precessing black hole binaries. The model extends the IMRPhenomXAS waveform model, which describes the dominant quadrupole modes $(\ell, |m|) = (2,2)$, to the harmonics $(\ell, |m|) = (2,1), (3,3), (3,2), (4,4)$, and includes mode-mixing effects for the $(3,2)$ spherical harmonic. IMRPhenomXHM is calibrated against hybrid waveforms, which match an inspiral phase described by the effective-one-body model and post-Newtonian amplitudes for the subdominant harmonics to numerical relativity waveforms and numerical solutions to the perturbative Teukolsky equation for large mass ratios up to 1000. A computationally efficient implementation of the model is available as part of the LSC Algorithm Library Suite.
Coarse-grained Models of Aqueous Solutions of Polyelectrolytes: Significance of Explicit Charges ; The structure of a polyelectrolyte is highly sensitive to small changes in the interactions between its monomers. In particular, interactions mediated by counterions play a significant role and are affected by both specific molecular effects and generic concentration effects. Reproducing the structural properties of an atomic model with coarse-grained models is thus a challenging task. Our present study compares the ability of different kinds of coarse-grained models (i) to reproduce the structure of an atomistic model of a polyelectrolyte, sodium polyacrylate, and (ii) to reproduce the variations of this structure with the number of monomers and with the concentration of the different species. We show that the adequate scalings of the gyration radius of the polymer $R_{\rm g}$ with the number of monomers $N$ and with the box size $L_{\rm box}$ are only obtained, first, if the monomer charges and the counterions are explicitly described, and second, if an attractive Lennard-Jones contribution is added to the interaction between distant monomers. We also show that implicit-ion models are relevant only in the high electrostatic screening regime.
Numerical Bifurcation Analysis of Pacemaker Dynamics in a Model of Smooth Muscle Cells ; Evidence from experimental studies shows that oscillations due to electromechanical coupling can be generated spontaneously in smooth muscle cells. Such cellular dynamics are known as pacemaker dynamics. In this article we address pacemaker dynamics associated with the interaction of Ca$^{2+}$ and K$^{+}$ fluxes in the cell membrane of a smooth muscle cell. First we reduce a pacemaker model to a two-dimensional system equivalent to the reduced Morris-Lecar model and then perform a detailed numerical bifurcation analysis of the reduced model. Existing bifurcation analyses of the Morris-Lecar model concentrate on the external applied current, whereas we focus on parameters that model the response of the cell to changes in transmural pressure. We reveal a transition between Type I and Type II excitabilities with no external current required. We also compute a two-parameter bifurcation diagram and show how the transition is explained by the bifurcation structure.
Time-varying volatility in the Bitcoin market and information flow at minute-level frequency ; In this paper, we analyze the time series of minute price returns on the Bitcoin market through the statistical models of the generalized autoregressive conditional heteroskedasticity (GARCH) family. Several mathematical models have been proposed in finance to model the dynamics of price returns, each of them introducing a different perspective on the problem, but none without shortcomings. We combine an approach that uses historical values of returns and their volatilities (the GARCH family of models) with the so-called Mixture of Distribution Hypothesis, which states that the dynamics of price returns are governed by the information flow about the market. Using time series of Bitcoin-related tweets and volume of transactions as external information, we test for improvement in volatility prediction of several GARCH model variants on a minute-level Bitcoin price time series. Statistical tests show that the simplest GARCH(1,1) reacts best to the addition of an external signal to model the volatility process on out-of-sample data.
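As a concrete reference point, a GARCH(1,1) recursion augmented with an exogenous information term takes the form below; the symbol $\gamma$, the lag structure, and the Gaussian innovation are illustrative assumptions rather than the exact specification tested in the paper.

```latex
% GARCH(1,1) with an exogenous information term (illustrative sketch)
r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad z_t \sim \mathcal{N}(0,1),
\qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 + \gamma\, x_{t-1},
```

where $x_{t-1}$ is the external signal (e.g., the number of Bitcoin-related tweets or the traded volume in the previous minute); testing whether including $\gamma\,x_{t-1}$ improves out-of-sample volatility forecasts is the kind of comparison the abstract describes.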
Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation ; Recently, BERT has become an essential ingredient of various NLP deep models due to its effectiveness and universal usability. However, the online deployment of BERT is often blocked by its large-scale parameters and high computational cost. There are plenty of studies showing that knowledge distillation is efficient in transferring the knowledge from BERT into a model with a smaller number of parameters. Nevertheless, current BERT distillation approaches mainly focus on task-specific distillation; such methodologies lead to the loss of the general semantic knowledge of BERT for universal usability. In this paper, we propose a sentence-representation-approximation oriented distillation framework that can distill the pretrained BERT into a simple LSTM-based model without specifying tasks. Consistent with BERT, our distilled model is able to perform transfer learning via fine-tuning to adapt to any sentence-level downstream task. Besides, our model can further cooperate with task-specific distillation procedures. The experimental results on multiple NLP tasks from the GLUE benchmark show that our approach outperforms other task-specific distillation methods or even much larger models, i.e., ELMo, with well-improved efficiency.
Directed Graphical Models and Causal Discovery for Zero-Inflated Data ; Modern RNA sequencing technologies provide gene expression measurements from single cells that promise refined insights on regulatory relationships among genes. Directed graphical models are well-suited to explore such cause-effect relationships. However, statistical analyses of single cell data are complicated by the fact that the data often show zero-inflated expression patterns. To address this challenge, we propose directed graphical models that are based on Hurdle conditional distributions parametrized in terms of polynomials in parent variables and their 0/1 indicators of being zero or nonzero. While directed graphs for Gaussian models are only identifiable up to an equivalence class in general, we show that, under a natural and weak assumption, the exact directed acyclic graph of our zero-inflated models can be identified. We propose methods for graph recovery, apply our model to real single-cell RNA-seq data on T helper cells, and show simulated experiments that validate the identifiability and graph estimation methods in practice.
Knowledge Distillation for Mobile Edge Computation Offloading ; Edge computation offloading allows mobile end devices to place the execution of compute-intensive tasks on edge servers. End devices can decide whether to offload tasks to edge servers or cloud servers, or to execute them locally, according to the current network condition and the device's profile in an online manner. In this article, we propose an edge computation offloading framework based on Deep Imitation Learning (DIL) and Knowledge Distillation (KD), which assists end devices to quickly make fine-grained decisions to optimize the delay of computation tasks online. We formalize the computation offloading problem as a multi-label classification problem. Training samples for our DIL model are generated in an offline manner. After the model is trained, we leverage knowledge distillation to obtain a lightweight DIL model, by which we further reduce the model's inference delay. Numerical experiments show that the offloading decisions made by our model outperform those made by other related policies in terms of latency. Also, our model has the shortest inference delay among all policies.
Interactions in information spread: quantification and interpretation using stochastic block models ; In most real-world applications, it is seldom the case that a given observable evolves independently of its environment. In social networks, users' behavior results from the people they interact with, news in their feed, or trending topics. In natural language, the meaning of phrases emerges from the combination of words. In general medicine, a diagnosis is established on the basis of the interaction of symptoms. Here, we propose a new model, the Interactive Mixed Membership Stochastic Block Model (IMMSBM), which investigates the role of interactions between entities (hashtags, words, memes, etc.) and quantifies their importance within the aforementioned corpora. We find that interactions play an important role in those corpora. In inference tasks, taking them into account leads to average relative changes with respect to non-interactive models of up to 150% in the probability of an outcome. Furthermore, their role greatly improves the predictive power of the model. Our findings suggest that neglecting interactions when modeling real-world phenomena might lead to incorrect conclusions being drawn.
Rule 184 fuzzy cellular automaton as a mathematical model for traffic flow ; The rule 184 fuzzy cellular automaton is regarded as a mathematical model of traffic flow because it contains the two fundamental traffic flow models, the rule 184 cellular automaton and the Burgers equation, as special cases. We show that the fundamental diagram (flux-density diagram) of this model consists of three parts: a free-flow part, a congestion part and a two-periodic part. The two-periodic part, which may correspond to the synchronized-mode region, is a two-dimensional area in the diagram, the boundary of which consists of the free-flow and the congestion parts. We prove that any state in both the congestion and the two-periodic parts is stable, but is not asymptotically stable, while any state in the free-flow part is unstable. Transient behaviour of the model and bottleneck effects are also examined by numerical simulations. Furthermore, to investigate the low- and high-density limits, we consider the ultradiscrete limit of the model and show that any ultradiscrete state turns into a travelling wave state of velocity one in finitely many time steps for generic initial conditions.
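For orientation (standard background, not quoted from the paper): the fuzzy rule 184 update is obtained from the disjunctive normal form of Boolean rule 184 by replacing (AND, OR, NOT) with (product, sum, one-minus),

```latex
% Fuzzy rule 184 update for cell i at time t (cell values in [0,1])
x_i^{t+1} = x_{i-1}^{t}\,\bigl(1 - x_i^{t}\bigr) + x_i^{t}\, x_{i+1}^{t},
```

which reduces to the Boolean rule 184 cellular automaton on $\{0,1\}$-valued states and is related to a discretization of the Burgers equation in the continuum limit, consistent with the two special cases mentioned above.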
Physics-Incorporated Convolutional Recurrent Neural Networks for Source Identification and Forecasting of Dynamical Systems ; Spatio-temporal dynamics of physical processes are generally modeled using partial differential equations (PDEs). Though the core dynamics follows some principles of physics, real-world physical processes are often driven by unknown external sources. In such cases, developing a purely analytical model becomes very difficult and data-driven modeling can be of assistance. In this paper, we present a hybrid framework combining physics-based numerical models with deep learning for source identification and forecasting of spatio-temporal dynamical systems with unobservable time-varying external sources. We formulate our model PhICNet as a convolutional recurrent neural network (RNN) which is end-to-end trainable for spatio-temporal evolution prediction of dynamical systems and learns the source behavior as an internal state of the RNN. Experimental results show that the proposed model can forecast the dynamics for a relatively long time and identify the sources as well.
Joint Bayesian Variable and DAG Selection Consistency for High-dimensional Regression Models with Network-structured Covariates ; We consider the joint sparse estimation of regression coefficients and the covariance matrix for covariates in a high-dimensional regression model, where the predictors are both relevant to a response variable of interest and functionally related to one another via a Gaussian directed acyclic graph (DAG) model. Gaussian DAG models introduce sparsity in the Cholesky factor of the inverse covariance matrix, and the sparsity pattern in turn corresponds to specific conditional independence assumptions on the underlying predictors. A variety of methods have been developed in recent years for Bayesian inference in identifying such network-structured predictors in the regression setting, yet crucial sparsity selection properties for these models have not been thoroughly investigated. In this paper, we consider a hierarchical model with spike-and-slab priors on the regression coefficients and a flexible and general class of DAG-Wishart distributions with multiple shape parameters on the Cholesky factors of the inverse covariance matrix. Under mild regularity assumptions, we establish joint selection consistency for both the variables and the underlying DAG of the covariates when the dimension of predictors is allowed to grow much larger than the sample size. We demonstrate that our method outperforms existing methods in selecting network-structured predictors in several simulation settings.
Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation ; We present an easy and efficient method to extend existing sentence embedding models to new languages. This makes it possible to create multilingual versions of previously monolingual models. The training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. We use the original monolingual model to generate sentence embeddings for the source language and then train a new system on translated sentences to mimic the original model. Compared to other methods for training multilingual sentence embeddings, this approach has several advantages: it is easy to extend existing models with relatively few samples to new languages, it is easier to ensure desired properties for the vector space, and the hardware requirements for training are lower. We demonstrate the effectiveness of our approach for 50 languages from various language families. Code to extend sentence embedding models to more than 400 languages is publicly available.
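A minimal sketch of the distillation objective described above, assuming `teacher` is the frozen monolingual encoder and `student` is the multilingual model being trained (both mapping a list of sentences to a `(batch, dim)` tensor); the function names and training-loop details are illustrative, not the released implementation.

```python
# Knowledge-distillation step for multilingual sentence embeddings (sketch).
# The student is pushed to embed both the source sentence and its translation
# at the teacher's embedding of the source sentence.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, src_sentences, tgt_sentences):
    with torch.no_grad():
        target = teacher(src_sentences)                 # frozen teacher embeddings

    loss = F.mse_loss(student(src_sentences), target) \
         + F.mse_loss(student(tgt_sentences), target)   # translation mapped to same point

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```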
ARMA Models for Zero Inflated Count Time Series ; Zero inflation is a common nuisance while monitoring disease progression over time. This article proposes a new observation-driven model for zero-inflated and overdispersed count time series. The counts, given the past history of the process and available information on covariates, are assumed to be distributed as a mixture of a Poisson distribution and a distribution degenerate at zero, with a time-dependent mixing probability $\pi_t$. Since count data usually suffer from overdispersion, a Gamma distribution is used to model the excess variation, resulting in a zero-inflated Negative Binomial (NB) regression model with mean parameter $\lambda_t$. Linear predictors with autoregressive and moving average (ARMA) type terms, covariates, seasonality and trend are fitted to $\lambda_t$ and $\pi_t$ through canonical-link generalized linear models. Estimation is done using maximum likelihood aided by iterative algorithms, such as Newton-Raphson (NR) and Expectation-Maximization (EM). Theoretical results on the consistency and asymptotic normality of the estimators are given. The proposed model is illustrated using in-depth simulation studies and a dengue data set.
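In generic notation (the authors' exact parametrization may differ), the observation model sketched above is a zero-inflated negative binomial with dynamic link functions,

```latex
% Zero-inflated NB observation model with ARMA-type linear predictors (generic form)
Y_t \mid \mathcal{F}_{t-1} \;\sim\; \pi_t\,\delta_{\{0\}} \;+\; (1-\pi_t)\,\mathrm{NB}(\lambda_t, r),
\qquad
\log \lambda_t = \mathbf{x}_t^{\top}\boldsymbol{\beta}
  + \sum_{i=1}^{p} \phi_i\,\eta_{t-i}
  + \sum_{j=1}^{q} \theta_j\, e_{t-j},
\qquad
\operatorname{logit}(\pi_t) = \mathbf{z}_t^{\top}\boldsymbol{\gamma},
```

where $\delta_{\{0\}}$ is a point mass at zero, $r$ is the NB dispersion parameter, and the ARMA-type terms act on past linear predictors $\eta_{t-i}$ and residuals $e_{t-j}$; covariates, seasonality and trend enter through $\mathbf{x}_t$ and $\mathbf{z}_t$.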
On SICA models for HIV transmission ; We revisit the SICA (Susceptible-Infectious-Chronic-AIDS) mathematical model for transmission dynamics of the human immunodeficiency virus (HIV) with varying population size in a homogeneously mixing population. We consider SICA models given by systems of ordinary differential equations and some generalizations given by systems with fractional and stochastic differential operators. Local and global stability results are proved for deterministic, fractional, and stochastic-type SICA models. Two case studies, in Cape Verde and Morocco, are investigated.
Brane Constant-roll Inflation ; The scenario of constant-roll inflation within the RSII brane gravity model is considered. In this scenario, the smallness assumption on the second slow-roll parameter is relaxed, and the parameter is taken to be a constant that can be of order unity. Applying the Hamilton-Jacobi formalism, the constancy of this parameter gives a differential equation for the Hubble parameter which leads to an exact solution for the model. Reconsidering the perturbation equations, we show that modified terms appear in the amplitude of the scalar perturbations and, in turn, in the scalar spectral index and the tensor-to-scalar ratio. Comparing the theoretical results of the model with observational data, the free parameters of the model are determined. Then, the consistency of the model with the swampland criteria is investigated for the obtained values of the free parameters. As a final step, the attractor behavior of the model is considered.
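For reference, the constant-roll condition that replaces the usual slow-roll hierarchy is commonly written as

```latex
% Constant-roll condition: the second slow-roll parameter is held constant
\frac{\ddot{\phi}}{H\dot{\phi}} = \beta = \mathrm{const},
```

with $\beta$ allowed to be of order unity. In the Hamilton-Jacobi formulation one treats $H = H(\phi)$ and uses the field equation to trade $\dot{\phi}$ for $H'(\phi)$, so the constancy condition becomes a differential equation for $H(\phi)$; in the RSII setup the brane-modified Friedmann equation alters this relation, which is the source of the modified perturbation terms mentioned above.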
Modeling Friendship Networks among Agents with Personality Traits ; Using network analysis, psychologists have already found non-trivial correlations between personality and social network structure. Despite the large amount of empirical studies, theoretical analysis and formal models behind such a relationship are still lacking. To bridge this gap, we propose a generative model for friendship networks based on personality traits. To the best of our knowledge, this is the first work to explicitly introduce the concepts of personality and friendship development into a social network model, with supporting insights from social and personality psychology. We use the model to investigate the effect of two personality traits, extraversion and agreeableness, on network structure. Analytical and simulation results both concur with recent empirical evidence that extraversion and agreeableness are positively correlated with degree. Using this model, we show that the effect of personality on friendship development can account for the effect of personality on friendship network structure.
A Stochastic LQR Model for Child Order Placement in Algorithmic Trading ; Modern Algorithmic Trading (Algo) allows institutional investors and traders to liquidate or establish big security positions in a fully automated or low-touch manner. Most existing academic or industrial Algos focus on how to slice a big parent order into smaller child orders over a given time horizon. Few models rigorously tackle the actual placement of these child orders. Instead, placement is mostly done with a combination of empirical signals and heuristic decision processes. A self-contained, realistic, and fully functional Child Order Placement (COP) model may never exist due to all the inherent complexities, e.g., fragmentation due to multiple venues, dynamics of limit order books, lit vs. dark liquidity, and different trading sessions and rules. In this paper, we propose a reductionist COP model that focuses exclusively on the interplay between placing passive limit orders and sniping using aggressive take-out orders. The dynamic programming model assumes the form of a stochastic linear-quadratic regulator (LQR) and allows closed-form solutions under the backward Bellman equations. Explored in detail are the model assumptions and general settings, the choice of state and control variables and the cost functions, and the derivation of the closed-form solutions.
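As background for the LQR structure invoked here, a generic discrete-time stochastic LQR problem reads as follows; the specific state, control, and cost definitions chosen for child order placement in the paper are not reproduced.

```latex
% Generic discrete-time stochastic LQR (background form)
x_{t+1} = A x_t + B u_t + w_t,
\qquad
\min_{u_0,\dots,u_{T-1}} \; \mathbb{E}\!\left[ x_T^{\top} Q_T x_T
  + \sum_{t=0}^{T-1} \left( x_t^{\top} Q x_t + u_t^{\top} R u_t \right) \right],
```

with i.i.d. zero-mean noise $w_t$. The backward Bellman recursion gives quadratic value functions $V_t(x) = x^{\top} P_t x + c_t$ and linear feedback controls $u_t^{\ast} = -K_t x_t$, with $P_t$ and $K_t$ obtained in closed form from a Riccati recursion; this is the same mechanism that yields the closed-form solutions referred to above.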
Boilerplate Removal using a Neural Sequence Labeling Model ; The extraction of main content from web pages is an important task for numerous applications, ranging from usability aspects, like reader views for news articles in web browsers, to information retrieval or natural language processing. Existing approaches are lacking as they rely on large amounts of hand-crafted features for classification. This results in models that are tailored to a specific distribution of web pages, e.g. from a certain time frame, but lack generalization power. We propose a neural sequence labeling model that does not rely on any hand-crafted features but takes only the HTML tags and words that appear in a web page as input. This allows us to present a browser extension which highlights the content of arbitrary web pages directly within the browser using our model. In addition, we create a new, more current dataset to show that our model is able to adapt to changes in the structure of web pages and outperform the state-of-the-art model.
The Dielectric Skyrme model ; We consider a version of the Skyrme model where both the kinetic term and the Skyrme term are multiplied by field-dependent coupling functions. For suitable choices, this dielectric Skyrme model has static solutions saturating the pertinent topological bound in the sector of baryon number (topological charge) $B = \pm 1$, but not for higher $|B|$. This implies that higher-charge field configurations are unbound, and loosely bound higher skyrmions can be achieved by small deformations of this dielectric Skyrme model. We provide a simple and explicit example of this possibility. Further, we show that the $|B|=1$ BPS sector continues to exist for certain generalizations of the model, for instance, after its coupling to a specific version of the BPS Skyrme model, i.e., the addition of the sextic term and a particular potential.
UnifiedQA: Crossing Format Boundaries With a Single QA System ; Question answering (QA) tasks have been posed using a variety of formats, such as extractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit division in the QA community. We argue that such boundaries are artificial and perhaps unnecessary, given that the reasoning abilities we seek to teach are not governed by the format. As evidence, we use the latest advances in language modeling to build a single pretrained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par with 9 different models that were trained on individual datasets themselves. Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs surprisingly well, showing strong generalization from its out-of-format training data. Finally, simply fine-tuning this pretrained QA model into specialized models results in a new state of the art on 6 datasets, establishing UnifiedQA as a strong starting point for building QA systems.
Joint Multi-Dimensional Model for Global and Time-Series Annotations ; Crowdsourcing is a popular approach to collect annotations for unlabeled data instances. It involves collecting a large number of annotations from several, often naive (untrained), annotators for each data instance, which are then combined to estimate the ground truth. Further, annotations for constructs such as affect are often multi-dimensional, with annotators rating multiple dimensions, such as valence and arousal, for each instance. Most annotation fusion schemes, however, ignore this aspect and model each dimension separately. In this work we address this by proposing a generative model for multi-dimensional annotation fusion, which models the dimensions jointly, leading to more accurate ground truth estimates. The model we propose is applicable to both global and time-series annotation fusion problems and treats the ground truth as a latent variable distorted by the annotators. The model parameters are estimated using the Expectation-Maximization algorithm, and we evaluate its performance using synthetic data and real emotion corpora as well as on an artificial task with human annotations.
Evaluating Ensemble Robustness Against Adversarial Attacks ; Adversarial examples, which are slightly perturbed inputs generated with the aim of fooling a neural network, are known to transfer between models; adversaries which are effective on one model will often fool another. This concept of transferability poses grave security concerns as it leads to the possibility of attacking models in a black-box setting, during which the internal parameters of the target model are unknown. In this paper, we seek to analyze and minimize the transferability of adversaries between models within an ensemble. To this end, we introduce a gradient-based measure of how effectively an ensemble's constituent models collaborate to reduce the space of adversarial examples targeting the ensemble itself. Furthermore, we demonstrate that this measure can be utilized during training to increase an ensemble's robustness to adversarial examples.
Bayesian Fusion for Infrared and Visible Images ; Infrared and visible image fusion has been an active topic in image fusion. In this task, a fused image containing both the gradient and detailed texture information of visible images as well as the thermal radiation and highlighted targets of infrared images is expected to be obtained. In this paper, a novel Bayesian fusion model is established for infrared and visible images. In our model, the image fusion task is cast into a regression problem. To measure the variable uncertainty, we formulate the model in a hierarchical Bayesian manner. Aiming at making the fused image satisfy the human visual system, the model incorporates a total-variation (TV) penalty. Subsequently, the model is efficiently inferred by the expectation-maximization (EM) algorithm. We test our algorithm on the TNO and NIR image fusion datasets against several state-of-the-art approaches. Compared with the previous methods, the novel model can generate better fused images with highlighted targets and rich texture details, which can improve the reliability of automatic target detection and recognition systems.
Modeling Human Dynamics and Lifestyle Using Digital Traces ; Human behavior drives a range of complex social, urban, and economic systems, yet understanding its structure and dynamics at the individual level remains an open question. From credit card transactions to communications data, human behavior appears to exhibit bursts of activity driven by task prioritization and periodicity; however, current research does not offer generative models capturing these mechanisms. We propose a multivariate, periodic Hawkes process (MPHP) model that captures at the individual level the temporal clustering of human activity, the interdependence structure and co-excitation of different activities, and the periodic effects of weekly rhythms. We also propose a scalable parameter estimation technique for this model using maximum-a-posteriori expectation-maximization that additionally provides estimation of latent variables revealing the branching structure of an individual's behavior patterns. We apply the model to a large dataset of credit card transactions and demonstrate that the MPHP outperforms a non-homogeneous Poisson model and LDA in both statistical fit for the distribution of inter-event times and an activity prediction task.
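A generic intensity of this type, with a weekly-periodic baseline and exponential excitation kernels assumed purely for concreteness, is

```latex
% Multivariate periodic Hawkes intensity for activity type m (illustrative form)
\lambda_m(t) = \mu_m\!\left(t \bmod W\right)
  + \sum_{n=1}^{M} \alpha_{mn} \sum_{t_i^{(n)} < t} \beta\, e^{-\beta\,(t - t_i^{(n)})},
```

where $\mu_m(\cdot)$ is a baseline rate with period $W$ (one week), $\alpha_{mn}$ encodes how events of type $n$ excite future events of type $m$, and the $t_i^{(n)}$ are past event times of type $n$; the latent branching structure mentioned above assigns each event either to the baseline or to a specific triggering parent event.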
Toward ultrametric modeling of the epidemic spread ; An ultrametric model of the epidemic spread of infections, based on the classical SIR model, is proposed. An ultrametric on a set of individuals, based on their hierarchical clustering with respect to the average time of infectious contact, is introduced. The general equations of the ultrametric SIR model are written down and a particular implementation using p-adic parameterization is presented. A numerical analysis of the p-adic SIR model and a comparison of its behavior with the classical SIR model are performed. The concept of hierarchical isolation and a scenario for its management in order to reduce the level of epidemic spread are considered.
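The classical SIR system on which the ultrametric construction builds is

```latex
% Classical SIR model, total population N = S + I + R
\frac{dS}{dt} = -\beta\,\frac{S I}{N}, \qquad
\frac{dI}{dt} = \beta\,\frac{S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I,
```

with contact rate $\beta$ and recovery rate $\gamma$; broadly, the ultrametric version replaces the single contact rate by a coupling that decays with the hierarchical (p-adic) distance between groups of individuals, which is what makes hierarchical isolation a controllable quantity.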
Neural Controlled Differential Equations for Irregular Time Series ; Neural ordinary differential equations are an attractive option for modelling temporal dynamics. However, a fundamental issue is that the solution to an ordinary differential equation is determined by its initial condition, and there is no mechanism for adjusting the trajectory based on subsequent observations. Here, we demonstrate how this may be resolved through the well-understood mathematics of controlled differential equations. The resulting neural controlled differential equation model is directly applicable to the general setting of partially observed, irregularly sampled multivariate time series, and unlike previous work on this problem it may utilise memory-efficient adjoint-based backpropagation even across observations. We demonstrate that our model achieves state-of-the-art performance against similar ODE- or RNN-based models in empirical studies on a range of datasets. Finally, we provide theoretical results demonstrating universal approximation, and that our model subsumes alternative ODE models.
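In the notation usually used for this model family, a neural CDE evolves a hidden state $z_t$ driven by a continuous path $X_t$ interpolating the observations,

```latex
% Neural controlled differential equation (integral against the control path X)
z_t = z_{t_0} + \int_{t_0}^{t} f_{\theta}(z_s)\, \mathrm{d}X_s,
```

where $f_{\theta}$ is a neural network returning a matrix, so the integrand is a matrix-vector product against the increments of $X$. Taking $X_s = s$ recovers a neural ODE, which makes explicit why subsequent observations can adjust the trajectory here but not in the ODE case.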
Stochastic modeling of assets and liabilities with mortality risk ; This paper describes a general approach for stochastic modeling of asset returns and liability cash flows of a typical pensions insurer. On the asset side, we model the investment returns on equities and various classes of fixed-income instruments, including short- and long-maturity fixed-rate bonds as well as index-linked and corporate bonds. On the liability side, the risks are driven by future mortality developments as well as price and wage inflation. All the risk factors are modeled as a multivariate stochastic process that captures the dynamics and the dependencies across different risk factors. The model is easy to interpret and to calibrate to both historical data and to forecasts or expert views concerning the future. The simple structure of the model allows for efficient computations. The construction of a million scenarios takes only a few minutes on a personal computer. The approach is illustrated with an asset-liability analysis of a defined benefit pension fund.
Transit cosmological models coupled with a zero-mass scalar field with high redshift in higher derivative theory ; The present study deals with a flat FRW cosmological model filled with a perfect fluid coupled with a zero-mass scalar field in the higher derivative theory of gravity. We have obtained two types of universe models: the first one is the accelerating universe (power-law cosmology) and the second one is the transit-phase model (hyperbolic expansion law). We have obtained various physical and kinematic parameters and discussed them with observationally constrained values of $H_0$. The transit redshift value is obtained as $z_t = 0.414$, where the transit model shows signature flipping and is consistent with recent observations. In our models, the present value of the EoS parameter $\omega_0$ crosses the cosmological constant value $\omega_0 = -1$. Also, the present age of the universe is calculated.
Model comparison for initial density fluctuations in high energy heavy ion collisions ; Four models for the initial conditions of a fluid dynamic description of high energy heavy ion collisions are analysed and compared. We study expectation values and event-by-event fluctuations in the initial transverse energy density profiles from Pb-Pb collisions. Specifically, introducing a Fourier-Bessel mode expansion for fluctuations, we determine expectation values and two-mode correlation functions of the expansion coefficients. The analytically solvable independent point-sources model is compared to an initial state model based on Glauber theory and to two models based on the Color Glass Condensate framework. We find that the large wavelength modes of all investigated models show universal properties for central collisions and also discuss to which extent general properties of initial conditions can be understood analytically.
Strong zero modes from geometric chirality in quasi-one-dimensional Mott insulators ; Strong zero modes provide a paradigm for quantum many-body systems to encode local degrees of freedom that remain coherent far from the ground state. Example systems include $\mathbb{Z}_n$ chiral quantum clock models with strong zero modes related to $\mathbb{Z}_n$ parafermions. Here we show how these models and their zero modes arise from geometric chirality in fermionic Mott insulators, focusing on $n=3$ where the Mott insulators are three-leg ladders. We link such ladders to $\mathbb{Z}_3$ chiral clock models by combining bosonization with general symmetry considerations. We also introduce a concrete lattice model which we show to map to the $\mathbb{Z}_3$ chiral clock model, perturbed by the Uimin-Lai-Sutherland Hamiltonian arising via superexchange. We demonstrate the presence of strong zero modes in this perturbed model by showing that correlators of clock operators at the edge remain close to their initial value for times exponentially long in the system size, even at infinite temperature.
Hyperbolic model of internal solitary waves in a three-layer stratified fluid ; We derive a new hyperbolic model describing the propagation of internal waves in a stratified shallow water with a non-hydrostatic pressure distribution. The construction of the hyperbolic model is based on the use of additional 'instantaneous' variables. This allows one to reduce the dispersive multilayer Green-Naghdi model to a first-order system of evolution equations. The main attention is paid to the study of three-layer flows over uneven bottom in the Boussinesq approximation with the additional assumption of hydrostatic pressure in the intermediate layer. The hyperbolicity conditions of the obtained equations of three-layer flows are formulated and solutions in the class of travelling waves are studied. Based on the proposed hyperbolic and dispersive models, numerical calculations of the generation and propagation of internal solitary waves are carried out and their comparison with experimental data is given. Within the framework of the proposed three-layer hyperbolic model, a numerical study of the propagation and interaction of symmetric and non-symmetric soliton-like waves is performed.
Using Machine Learning to Forecast Future Earnings ; In this essay, we comprehensively evaluate the feasibility and suitability of adopting machine learning models for forecasting corporate fundamentals (i.e., earnings), and we thoroughly compare the prediction results of our method with both analysts' consensus estimates and traditional statistical models. Our model proves capable of serving as a favorable auxiliary tool for analysts to conduct better predictions of company fundamentals. Compared with traditional statistical models widely adopted in the industry, such as logistic regression, our method achieves satisfactory advances in both prediction accuracy and speed. Meanwhile, we are confident that there is still vast potential for this model to evolve, and we hope that in the near future the machine learning model can deliver even better performance than professional analysts.
Joint Stochastic Approximation and Its Application to Learning Discrete Latent Variable Models ; Despite progress in introducing auxiliary amortized inference models, learning discrete latent variable models is still challenging. In this paper, we show that the annoying difficulty of obtaining reliable stochastic gradients for the inference model and the drawback of indirectly optimizing the target log-likelihood can be gracefully addressed in a new method based on stochastic approximation (SA) theory of the Robbins-Monro type. Specifically, we propose to directly maximize the target log-likelihood and simultaneously minimize the inclusive divergence between the posterior and the inference model. The resulting learning algorithm is called joint SA (JSA). To the best of our knowledge, JSA represents the first method that couples an SA version of the expectation-maximization (EM) algorithm (SAEM) with an adaptive MCMC procedure. Experiments on several benchmark generative modeling and structured prediction tasks show that JSA consistently outperforms recent competitive algorithms, with faster convergence, better final likelihoods, and lower variance of gradient estimates.
Towards Question-based Recommender Systems ; Conversational and question-based recommender systems have gained increasing attention in recent years, with users enabled to converse with the system and better control recommendations. Nevertheless, research in the field is still limited compared to traditional recommender systems. In this work, we propose a novel question-based recommendation method, Qrec, to assist users to find items interactively, by answering automatically constructed and algorithmically chosen questions. Previous conversational recommender systems ask users to express their preferences over items or item facets. Our model, instead, asks users to express their preferences over descriptive item features. The model is first trained offline by a novel matrix factorization algorithm, and then iteratively updates the user and item latent factors online by a closed-form solution based on the user answers. Meanwhile, our model infers the underlying user belief and preferences over items to learn an optimal question-asking strategy by using Generalized Binary Search, so as to ask a sequence of questions to the user. Our experimental results demonstrate that our proposed matrix factorization model outperforms the traditional Probabilistic Matrix Factorization model. Further, our proposed Qrec model can greatly improve the performance of state-of-the-art baselines, and it is also effective in the case of cold-start user and item recommendations.
Sig-SDEs model for quantitative finance ; Mathematical models, calibrated to data, have become ubiquitous in key decision processes in modern quantitative finance. In this work, we propose a novel framework for data-driven model selection by integrating a classical quantitative setup with a generative modelling approach. Leveraging the properties of the signature, a well-known path transform from stochastic analysis that recently emerged as a leading machine learning technology for learning time-series data, we develop the Sig-SDE model. Sig-SDE provides a new perspective on neural SDEs and can be calibrated to exotic financial products that depend, in a non-linear way, on the whole trajectory of asset prices. Furthermore, our approach enables consistent calibration under both the pricing measure $\mathbb{Q}$ and the real-world measure $\mathbb{P}$. Finally, we demonstrate the ability of Sig-SDE to simulate future possible market scenarios needed for computing risk profiles or hedging strategies. Importantly, this new model is underpinned by rigorous mathematical analysis that, under appropriate conditions, provides theoretical guarantees for convergence of the presented algorithms.
Inference tools for Markov Random Fields on lattices: The R package mrf2d ; Markov random fields on two-dimensional lattices are behind many image analysis methodologies. mrf2d provides tools for statistical inference on a class of discrete stationary Markov random field models with pairwise interaction, which includes many popular models such as the Potts model and texture image models. The package introduces representations of dependence structures and parameters, visualization functions and efficient C++ based implementations of sampling algorithms, common estimation methods and other key features of the model, providing a useful framework to implement algorithms and to work with the model in general. This paper presents a description and details of the package, as well as some reproducible examples of usage.
Surprisal-Triggered Conditional Computation with Neural Networks ; Autoregressive neural network models have been used successfully for sequence generation, feature extraction, and hypothesis scoring. This paper presents yet another use for these models: allocating more computation to more difficult inputs. In our model, an autoregressive model is used both to extract features and to predict observations in a stream of input observations. The surprisal of the input, measured as the negative log-likelihood of the current observation according to the autoregressive model, is used as a measure of input difficulty. This in turn determines whether a small, fast network or a big, slow network is used. Experiments on two speech recognition tasks show that our model can match the performance of a baseline in which the big network is always used with 15% fewer FLOPs.
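A minimal sketch of the routing rule described above, assuming a hypothetical autoregressive model `ar_model` that returns per-frame features together with the negative log-likelihood of the frame given its history, and two downstream networks of different cost; the names and the simple thresholding rule are illustrative only.

```python
# Surprisal-triggered conditional computation (sketch): route difficult frames,
# i.e. frames with high negative log-likelihood under the autoregressive model,
# to the big network and easy frames to the small one.
import torch

def conditional_forward(ar_model, small_net, big_net, frames, threshold):
    outputs = []
    for t in range(len(frames)):
        features, nll = ar_model(frames[t], history=frames[:t])  # surprisal = -log p(x_t | x_<t)
        net = big_net if nll > threshold else small_net
        outputs.append(net(features))
    return torch.stack(outputs)
```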
Approaching a Bristol model ; The Bristol model is an inner model of $L[c]$, where $c$ is a Cohen real, which is not constructible from a set. The idea was developed in 2011 in a workshop taking place in Bristol, but was only written in detail by the author in [8]. This paper is a guide for those who want to get a broader view of the construction. We try to provide more intuition that might serve as a jumping board for those interested in this construction and in odd models of $\mathsf{ZF}$. We also correct a few minor issues in the original paper, as well as prove new results. For example, that the Boolean Prime Ideal theorem fails in the Bristol model, as some sets cannot be linearly ordered, and that the ground model is always definable in its Bristol extensions. In addition to this we include a discussion on Kinna-Wagner Principles, which we think may play an important role in understanding the generic multiverse in $\mathsf{ZF}$.
A diffusion-based spatio-temporal extension of Gaussian Matérn fields ; Gaussian random fields with Matérn covariance functions are popular models in spatial statistics and machine learning. In this work, we develop a spatio-temporal extension of the Gaussian Matérn fields formulated as solutions to a stochastic partial differential equation. The spatially stationary subset of the models have marginal spatial Matérn covariances, and the model also extends to Whittle-Matérn fields on curved manifolds, and to more general non-stationary fields. In addition to the parameters of the spatial dependence (variance, smoothness, and practical correlation range), it additionally has parameters controlling the practical correlation range in time, the smoothness in time, and the type of non-separability of the spatio-temporal covariance. Through the separability parameter, the model also allows for separable covariance functions. We provide a sparse representation based on a finite element approximation that is well suited for statistical inference and which is implemented in the R-INLA software. The flexibility of the model is illustrated in an application to spatio-temporal modeling of global temperature data.
Recurrent Flow Networks: A Recurrent Latent Variable Model for Density Modelling of Urban Mobility ; Mobility-on-demand (MoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of vehicles. Crucially, the efficiency of an MoD system highly depends on how well supply and demand distributions are aligned in spatio-temporal space (i.e., to satisfy user demand, cars have to be available in the correct place and at the desired time). To do so, we argue that predictive models should aim to explicitly disentangle between temporal and spatial variability in the evolution of urban mobility demand. However, current approaches typically ignore this distinction by either treating both sources of variability jointly, or completely ignoring their presence in the first place. In this paper, we propose recurrent flow networks (RFN), where we explore the inclusion of (i) latent random variables in the hidden state of recurrent neural networks to model temporal variability, and (ii) normalizing flows to model the spatial distribution of mobility demand. We demonstrate how predictive models explicitly disentangling between spatial and temporal variability exhibit several desirable properties, and empirically show how this enables the generation of distributions matching potentially complex urban topologies.
Structure Learning for Cyclic Linear Causal Models ; We consider the problem of structure learning for linear causal models based on observational data. We treat models given by possibly cyclic mixed graphs, which allow for feedback loops and effects of latent confounders. Generalizing related work on bow-free acyclic graphs, we assume that the underlying graph is simple. This entails that any two observed variables can be related through at most one direct causal effect and that confounding-induced correlation between error terms in structural equations occurs only in absence of direct causal effects. We show that, despite new subtleties in the cyclic case, the considered simple cyclic models are of expected dimension and that a previously considered criterion for distributional equivalence of bow-free acyclic graphs has an analogue in the cyclic case. Our result on model dimension justifies in particular score-based methods for structure learning of linear Gaussian mixed graph models, which we implement via greedy search.
A Discrete Event Simulation Model for Coordinating Inventory Management and Material Handling in Hospitals ; For operating rooms (ORs) and hospitals, inventory management of surgical instruments and material handling decisions of perioperative services are critical to hospitals' service levels and costs. However, efficiently integrating these decisions is challenging due to hospitals' interdependence and the uncertainties they face. These challenges motivated this study to answer the following research questions: (R1) How does the inventory level of surgical instruments, including owned, borrowed and consigned, impact the efficiency of ORs? (R2) How do material handling activities impact the efficiency of ORs? (R3) How does integrating decisions about inventory and material handling impact the efficiency of ORs? Three discrete event simulation models are developed here to address these questions. Model 1, Current, assumes no coordination of material handling and inventory decisions. Model 2, Two Batch, assumes partial coordination, and Model 3, Just-In-Time (JIT), assumes full coordination. These models are verified and validated using real-life data from a partnering hospital. A thorough numerical analysis indicates that, in general, coordination of inventory management of surgical instruments and material handling decisions has the potential to improve efficiency and reduce OR costs. More specifically, a JIT delivery of instruments used in short-duration surgeries leads to lower inventory levels without jeopardizing the service level provided.
Quasi-independence models with rational maximum likelihood estimator ; We classify the two-way quasi-independence models (i.e., independence models with structural zeros) that have rational maximum likelihood estimators, or MLEs. We give a necessary and sufficient condition on the bipartite graph associated to the model for the MLE to be rational. In this case, we give an explicit formula for the MLE in terms of combinatorial features of this graph. We also use the Horn uniformization to show that for general log-linear models $\mathcal{M}$ with rational MLE, any model obtained by restricting to a face of the cone of sufficient statistics of $\mathcal{M}$ also has rational MLE.
Multi-Agent Informational Learning Processes ; We introduce a new mathematical model of multi-agent reinforcement learning, the Multi-Agent Informational Learning Processor (MAILP) model. The model is based on the notion that agents have policies for a certain amount of information, and it models how this information iteratively evolves and propagates through many agents. This model is very general, and the only meaningful assumption made is that learning for individual agents progressively slows over time.
Model Predictive Control of a Food Production Unit: A Case Study for Lettuce Production ; Plant factories with artificial light are widely researched for food production in a controlled environment. For such control tasks, models of the energy and resource exchange in the production unit as well as of the plant's growth process may be used. To achieve minimal operation cost, optimal control strategies can be applied to the system, taking into account the availability of resources by control reference specification. A particular advantage of model predictive control (MPC) is the incorporation of constraints that comply with actuator limitations and general plant growth conditions. In this work, a model of a production unit is derived, including a description of the relation between the actuators' electrical signals and the input values to the model. Furthermore, a preliminary model-based state tracking controller is evaluated for a production unit containing lettuce. The controller is observed to be capable of tracking the reference while satisfying the constraints under changing weather conditions and resource availability.
FinBERT: A Pretrained Language Model for Financial Communications ; Contextual pretrained language models, such as BERT (Devlin et al., 2019), have made significant breakthroughs in various NLP tasks by training on large-scale unlabeled text resources. The financial sector also accumulates a large amount of financial communication text. However, there are no pretrained finance-specific language models available. In this work, we address this need by pretraining a financial-domain-specific BERT model, FinBERT, using a large corpus of financial communications. Experiments on three financial sentiment classification tasks confirm the advantage of FinBERT over the generic-domain BERT model. The code and pretrained models are available at https://github.com/yya518/FinBERT. We hope this will be useful for practitioners and researchers working on financial NLP tasks.
Machine Common Sense ; Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI). There is a wide range of strategies that can be employed to make progress on this challenge. This article deals with the aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions. The basic idea is that there are several types of commonsense reasoning: one is manifested at the logical level of physical actions, the other deals with the understanding of the essence of human-human interactions. Existing approaches, based on formal logic and artificial neural networks, allow for modeling only the first type of common sense. To model the second type, it is vital to understand the motives and rules of human behavior. This model is based on real-life heuristics, i.e., the rules of thumb developed through the knowledge and experience of different generations. Such a knowledge base allows for the development of an expert system with inference and explanatory mechanisms (commonsense reasoning algorithms and personal models). Algorithms provide tools for situation analysis, while personal models make it possible to identify personality traits. The system so designed should perform the function of amplified intelligence for interactions, including human-machine interactions.
Towards Understanding the Effect of Leak in Spiking Neural Networks ; Spiking Neural Networks (SNNs) are being explored to emulate the astounding capabilities of the human brain, which can learn and compute functions robustly and efficiently with noisy spiking activities. A variety of spiking neuron models have been proposed to resemble biological neuronal functionalities. With varying levels of bio-fidelity, these models often contain a leak path in their internal states, called membrane potentials. While the leaky models have been argued to be more bio-plausible, a comparative analysis between models with and without leak from a purely computational point of view demands attention. In this paper, we investigate the questions regarding the justification of leak and the pros and cons of using leaky behavior. Our experimental results reveal that the leaky neuron model provides improved robustness and better generalization compared to models with no leak. However, leak decreases the sparsity of computation, contrary to the common notion. Through a frequency-domain analysis, we demonstrate the effect of leak in eliminating the high-frequency components from the input, thus enabling SNNs to be more robust against noisy spike inputs.
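For concreteness, one common discrete-time leaky integrate-and-fire update used in such comparisons is shown below; the specific neuron model and reset rule studied in the paper may differ.

```latex
% Discrete-time leaky integrate-and-fire neuron with soft reset
u_i[t] = \lambda\, u_i[t-1] + \sum_j w_{ij}\, s_j[t] - \vartheta\, s_i[t-1],
\qquad
s_i[t] = \Theta\!\left(u_i[t] - \vartheta\right),
```

where $0 < \lambda < 1$ is the leak factor ($\lambda = 1$ recovers the non-leaky integrate-and-fire neuron), $w_{ij}$ are synaptic weights, $s_j[t]$ are binary input spikes, $\vartheta$ is the firing threshold, and $\Theta$ is the Heaviside step function; the leak acts as a low-pass filter on the membrane potential, which is the frequency-domain effect discussed above.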
Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Stochastic Processes ; We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in stochastic processes using lower-dimensional projections. Our model combines techniques from information geometry to model higher-order interactions on a statistical manifold and from generalized additive models to use lower-dimensional projections to overcome the curse of dimensionality. Our approach solves a convex optimization problem by minimizing the KL divergence from a sample distribution in lower-dimensional projections to the distribution modeled by an intensity function in the stochastic process. Our empirical results show that our model is able to use samples observed in the lower-dimensional space to estimate the higher-order intensity function with extremely sparse observations.
Model Explanations with Differential Privacy ; Black-box machine learning models are used in critical decision-making domains, giving rise to several calls for more algorithmic transparency. The drawback is that model explanations can leak information about the training data and the explanation data used to generate them, thus undermining data privacy. To address this issue, we propose differentially private algorithms to construct feature-based model explanations. We design an adaptive differentially private gradient descent algorithm that finds the minimal privacy budget required to produce accurate explanations. It reduces the overall privacy loss on explanation data by adaptively reusing past differentially private explanations. It also amplifies the privacy guarantees with respect to the training data. We evaluate the implications of differentially private models and our privacy mechanisms on the quality of model explanations.
Logarithmic Voronoi cells ; We study Voronoi cells in the statistical setting by considering preimages of the maximum likelihood estimator that tessellate an open probability simplex. In general, logarithmic Voronoi cells are convex sets. However, for certain algebraic models, namely finite models, models with ML degree 1, linear models, and log-linear (or toric) models, we show that logarithmic Voronoi cells are polytopes. As a corollary, the algebraic moment map has polytopes for both its fibres and its image, when restricted to the simplex. We also compute non-polytopal logarithmic Voronoi cells using numerical algebraic geometry. Finally, we determine logarithmic Voronoi polytopes for the finite model consisting of all empirical distributions of a fixed sample size. These polytopes are dual to the logarithmic root polytopes of Lie type A, and we characterize their faces.
Mentari: A pipeline to model the galaxy SED using semi-analytic models ; We build a theoretical picture of how the light from galaxies evolves across cosmic time. In particular, we predict the evolution of the galaxy spectral energy distribution (SED) by carefully integrating the star formation and metal enrichment histories of semi-analytic model (SAM) galaxies and combining these with stellar population synthesis models, in a pipeline which we call mentari. Our SAM combines prescriptions to model the interplay between gas accretion, star formation, feedback processes, and chemical enrichment in galaxy evolution. From this, the SED of any simulated galaxy at any point in its history can be constructed and compared with telescope data to reverse engineer the various physical processes that may have led to a particular set of observations. The synthetic SEDs of millions of simulated galaxies from mentari can cover wavelengths from the far UV to the infrared, and thus can tell a near complete story of the history of galaxy evolution.
CoSE: Compositional Stroke Embeddings ; We present a generative model for complex free-form structures such as stroke-based drawing tasks. While previous approaches rely on sequence-based models for drawings of basic objects or handwritten text, we propose a model that treats drawings as a collection of strokes that can be composed into complex structures such as diagrams (e.g., flowcharts). At the core of the approach lies a novel autoencoder that projects variable-length strokes into a latent space of fixed dimension. This representation space allows a relational model, operating in latent space, to better capture the relationship between strokes and to predict subsequent strokes. We demonstrate qualitatively and quantitatively that our proposed approach is able to model the appearance of individual strokes, as well as the compositional structure of larger diagram drawings. Our approach is suitable for interactive use cases such as auto-completing diagrams. We make code and models publicly available at https://eth-ait.github.io/cose.
Multidimensional Bayesian IRT Model for Hierarchical Latent Structures ; It is reasonable to consider, in many cases, that individuals' latent traits have a hierarchical structure such that more general traits are a suitable composition of more specific ones. Existing item response models that account for such a hierarchical structure have considerable limitations in terms of modelling and/or inference. Motivated by those limitations and the importance of the theme, this paper aims at proposing an improved methodology, in terms of both modelling and inference, to deal with hierarchically structured latent traits in an item response theory context. From a modelling perspective, the proposed methodology allows for genuinely multidimensional items, and all of the latent traits in the assumed hierarchical structure are on the same scale. Items are allowed to be dichotomous or of graded response. An efficient MCMC algorithm is carefully devised to sample from the joint posterior distribution of all the unknown quantities of the proposed model. In particular, all the latent trait parameters are jointly sampled from their full conditional distribution in a Gibbs sampling algorithm. The proposed methodology is applied to simulated data and a real dataset concerning the Enem exam in Brazil.
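As a point of reference for the hierarchical structure discussed above, a generic two-level formulation for dichotomous items could look as follows; the notation is illustrative and is not the paper's exact parameterization.

```latex
% Item response for person i and item j, with multidimensional specific traits \theta_i:
P(Y_{ij}=1 \mid \theta_i) = \frac{\exp(a_j^{\top}\theta_i - b_j)}{1+\exp(a_j^{\top}\theta_i - b_j)},
\qquad
% specific traits loading on a more general trait \eta_i (the hierarchical layer):
\theta_{ik} = \lambda_k\,\eta_i + \varepsilon_{ik}, \quad \varepsilon_{ik}\sim N(0,\sigma_k^2).
```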
Eigenstate Entanglement Entropy in Random Quadratic Hamiltonians ; The eigenstate entanglement entropy has been recently shown to be a powerful tool to distinguish integrable from generic quantumchaotic models. In integrable models, a unique feature of the average eigenstate entanglement entropy over all Hamiltonian eigenstates is that the volumelaw coefficient depends on the subsystem fraction. Hence, it deviates from the maximal subsystem fraction independent value encountered in quantumchaotic models. Using random matrix theory for quadratic Hamiltonians, we obtain a closedform expression for the average eigenstate entanglement entropy as a function of the subsystem fraction. We test its correctness against numerical results for the quadratic SachdevYeKitaev model. We also show that it describes the average entanglement entropy of eigenstates of the powerlaw random banded matrix model in the delocalized regime, and that it is close but not the same as the result for quadratic models that exhibit localization in quasimomentum space.
Hidden Markov Nonlinear ICA Unsupervised Learning from Nonstationary Time Series ; Recent advances in nonlinear Independent Component Analysis ICA provide a principled framework for unsupervised feature learning and disentanglement. The central idea in such works is that the latent components are assumed to be independent conditional on some observed auxiliary variables, such as the timesegment index. This requires manual segmentation of data into nonstationary segments which is computationally expensive, inaccurate and often impossible. These models are thus not fully unsupervised. We remedy these limitations by combining nonlinear ICA with a Hidden Markov Model, resulting in a model where a latent state acts in place of the observed segmentindex. We prove identifiability of the proposed model for a general mixing nonlinearity, such as a neural network. We also show how maximum likelihood estimation of the model can be done using the expectationmaximization algorithm. Thus, we achieve a new nonlinear ICA framework which is unsupervised, more efficient, as well as able to model underlying temporal dynamics.
notMIWAE Deep Generative Modelling with Missing not at Random Data ; When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account while doing likelihoodbased inference. We present an approach for building and fitting deep latent variable models DLVMs in cases where the missing process is dependent on the missing data. Specifically, a deep neural network enables us to flexibly model the conditional distribution of the missingness pattern given the data. This allows for incorporating prior information about the type of missingness e.g. selfcensoring into the model. Our inference technique, based on importanceweighted variational inference, involves maximising a lower bound of the joint likelihood. Stochastic gradients of the bound are obtained by using the reparameterisation trick both in latent space and data space. We show on various kinds of data sets and missingness patterns that explicitly modelling the missing process can be invaluable.
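To make the inference idea concrete, a schematic form of an importance-weighted lower bound that incorporates a missingness model is shown below; the notation (observed part x_o, missing part x_m, mask s, K importance samples) is assumed for illustration and may differ from the paper's exact formulation.

```latex
\log p(x_o, s) \;\ge\;
\mathbb{E}_{z_{1:K}\sim q(z\mid x_o)}
\left[
  \log \frac{1}{K}\sum_{k=1}^{K}
  \frac{p(x_o \mid z_k)\, p(s \mid x_o, \hat{x}_{m,k})\, p(z_k)}{q(z_k \mid x_o)}
\right],
\qquad \hat{x}_{m,k} \sim p(x_m \mid z_k),
% where p(s | x_o, x_m) is the neural-network model of the missingness pattern.
```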
SpeakerConditional Chain Model for Speech Separation and Extraction ; Speech separation has been extensively explored to tackle the cocktail party problem. However, these studies are still far from having enough generalization capability for real scenarios. In this work, we propose a general strategy named Speaker-Conditional Chain Model to process complex speech recordings. In the proposed method, our model first infers the identities of a variable number of speakers from the observation based on a sequence-to-sequence model. Then, it takes the information of the inferred speakers as conditions to extract their speech sources. With the predicted speaker information from the whole observation, our model helps to solve the problems of conventional speech separation and speaker extraction for multi-round long recordings. Experiments on standard fully-overlapped speech separation benchmarks show results comparable with prior studies, while our proposed model shows better adaptability for multi-round long recordings.
Neural Machine Translation for Multilingual Grapheme-to-Phoneme Conversion ; Grapheme-to-phoneme G2P models are a key component in Automatic Speech Recognition ASR systems, such as the ASR system in Alexa, as they are used to generate pronunciations for out-of-vocabulary words that do not exist in the pronunciation lexicons mappings like e c h o to E k oU. Most G2P systems are monolingual and based on traditional joint-sequence based n-gram models [1,2]. As an alternative, we present a single end-to-end trained neural G2P model that shares the same encoder and decoder across multiple languages. This allows the model to utilize a combination of universal symbol inventories of Latin-like alphabets and cross-linguistically shared feature representations. Such a model is especially useful in the scenarios of low resource languages and code switching/foreign words, where the pronunciations in one language need to be adapted to other locales or accents. We further experiment with a word language distribution vector as an additional training target in order to improve system performance by helping the model decouple pronunciations across a variety of languages in the parameter space. We show 7.2% average improvement in phoneme error rate over low resource languages and no degradation over high resource ones compared to monolingual baselines.
Flavour Symmetry Embedded GLoBES FaSEGLoBES ; Neutrino models based on flavour symmetries provide a natural way to explain the origin of tiny neutrino masses. At the dawn of precision measurements of neutrino mixing parameters, neutrino mass models can be constrained and examined by ongoing and upcoming neutrino experiments. We present a supplemental tool Flavour Symmetry Embedded FaSE for the General Long Baseline Experiment Simulator GLoBES, available at https://github.com/tcwphy/FASEGLoBES. It can translate neutrino mass model parameters to standard neutrino oscillation parameters and offer prior functions in a user-friendly way. We demonstrate the robustness of FaSE-GLoBES with four examples showing how the model parameters can be constrained and even whether the model is excluded by an experiment or not. We hope that this toolkit will facilitate the study of new neutrino mass models in an efficient and effective manner.
Molecule Edit Graph Attention Network Modeling Chemical Reactions as Sequences of Graph Edits ; The central challenge in automated synthesis planning is to be able to generate and predict outcomes of a diverse set of chemical reactions. In particular, in many cases, the most likely synthesis pathway cannot be applied due to additional constraints, which requires proposing alternative chemical reactions. With this in mind, we present Molecule Edit Graph Attention Network MEGAN, an end-to-end encoder-decoder neural model. MEGAN is inspired by models that express a chemical reaction as a sequence of graph edits, akin to the arrow pushing formalism. We extend this model to retrosynthesis prediction, i.e. predicting substrates given the product of a chemical reaction, and scale it up to large datasets. We argue that representing the reaction as a sequence of edits enables MEGAN to efficiently explore the space of plausible chemical reactions, maintaining the flexibility of modeling the reaction in an end-to-end fashion, and achieving state-of-the-art accuracy in standard benchmarks. Code and trained models are made available online at https://github.com/molecule-one/megan.
Valid modelfree spatial prediction ; Predicting the response at an unobserved location is a fundamental problem in spatial statistics. Given the difficulty in modeling spatial dependence, especially in nonstationary cases, modelbased prediction intervals are at risk of misspecification bias that can negatively affect their validity. Here we present a new approach for modelfree nonparametric spatial prediction based on the conformal prediction machinery. Our key observation is that spatial data can be treated as exactly or approximately exchangeable in a wide range of settings. In particular, under an infill asymptotic regime, we prove that the response values are, in a certain sense, locally approximately exchangeable for a broad class of spatial processes, and we develop a local spatial conformal prediction algorithm that yields valid prediction intervals without strong model assumptions like stationarity. Numerical examples with both real and simulated data confirm that the proposed conformal prediction intervals are valid and generally more efficient than existing modelbased procedures for large datasets across a range of nonstationary and nonGaussian settings.
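The following sketch shows the general flavour of a local conformal prediction interval for a new spatial location: rank nonconformity scores among the k nearest observed locations and use their quantile as the interval half-width. It is a simplified recipe under assumed choices (a nearest-neighbour mean as the local predictor), not the paper's exact algorithm.

```python
import numpy as np

def local_conformal_interval(X, y, x0, k=50, alpha=0.1):
    """Minimal local conformal sketch: point prediction from the k nearest
    neighbours of x0 plus a conformal quantile of their absolute residuals."""
    d = np.linalg.norm(X - x0, axis=1)
    nbr = np.argsort(d)[:k]                     # local neighbourhood of x0
    mu = y[nbr].mean()                          # simple local point prediction
    scores = np.abs(y[nbr] - mu)                # nonconformity scores
    level = min(1.0, np.ceil((1 - alpha) * (k + 1)) / k)  # finite-sample correction
    q = np.quantile(scores, level)
    return mu - q, mu + q

# Example with synthetic spatial data
X = np.random.rand(2000, 2)
y = np.sin(4 * X[:, 0]) + 0.1 * np.random.randn(2000)
print(local_conformal_interval(X, y, x0=np.array([0.5, 0.5])))
```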
A multicomponent discrete Boltzmann model for nonequilibrium reactive flows ; We propose a multicomponent discrete Boltzmann model DBM for premixed, nonpremixed, or partially premixed nonequilibrium reactive flows. This model is suitable for both subsonic and supersonic flows with or without chemical reaction andor external force. A twodimensional sixteenvelocity model is constructed for the DBM. In the hydrodynamic limit, the DBM recovers the modified NavierStokes equations for reacting species in a force field. Compared to standard lattice Boltzmann models, the DBM presents not only more accurate hydrodynamic quantities, but also detailed nonequilibrium effects that are essential yet longneglected by traditional fluid dynamics. Apart from nonequilibrium terms viscous stress and heat flux in conventional models, specific hydrodynamic and thermodynamic nonequilibrium quantities high order kinetic moments and their departure from equilibrium are dynamically obtained from the DBM in a straightforward way. Due to its generality, the developed methodology is applicable to a wide range of phenomena across many energy technologies, emissions reduction, environmental protection, mining accident prevention, chemical and process industry.
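For orientation, the display below shows the generic single-relaxation-time (BGK-type) discrete Boltzmann equation that models of this kind build on; the force and reaction terms are indicated only schematically, and the notation is assumed rather than taken from the paper.

```latex
\frac{\partial f_i}{\partial t} + \mathbf{v}_i \cdot \nabla f_i
  \;=\; -\frac{1}{\tau}\left(f_i - f_i^{\mathrm{eq}}\right) \;+\; F_i \;+\; R_i ,
\qquad i = 1,\dots,16,
% f_i: discrete distribution functions along velocities v_i; \tau: relaxation time;
% F_i: external-force term; R_i: chemical-reaction term.
```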
Improved phasefield models of melting and dissolution in multicomponent flows ; We develop and analyse the first secondorder phasefield model to combine melting and dissolution in multicomponent flows. This provides a simple and accurate way to simulate challenging phasechange problems in existing codes. Phasefield models simplify computation by describing separate regions using a smoothed phase field. The phase field eliminates the need for complicated discretisations that track the moving phase boundary. However standard phasefield models are only firstorder accurate. They often incur an error proportional to the thickness of the diffuse interface. We eliminate this dominant error by developing a general framework for asymptotic analysis of diffuseinterface methods in arbitrary geometries. With this framework we can consistently unify previous secondorder phasefield models of melting and dissolution and the volumepenalty method for fluidsolid interaction. We finally validate secondorder convergence of our model in two comprehensive benchmark problems using the opensource spectral code Dedalus.
Natural Backdoor Attack on Text Data ; Recently, advanced NLP models have seen a surge in usage across various applications. This raises security threats for the released models. In addition to the clean models' unintentional weaknesses, i.e., adversarial attacks, poisoned models with malicious intentions are much more dangerous in real life. However, most existing works currently focus on adversarial attacks on NLP models rather than poisoning attacks, also named backdoor attacks. In this paper, we first propose natural backdoor attacks on NLP models. Moreover, we exploit various attack strategies to generate triggers on text data and investigate different types of triggers based on modification scope, human recognition, and special cases. Last, we evaluate the backdoor attacks, and the results show excellent performance, with a 100% backdoor attack success rate and a sacrifice of only 0.83% on the text classification task.
Multisensory Integration in a QuantumLike Robot Perception Model ; Formalisms inspired by Quantum theory have been used in Cognitive Science for decades. Indeed, Quantum-Like QL approaches provide descriptive features that are inherently suitable for perception, cognition, and decision processing. A preliminary study on the feasibility of a QL robot perception model has been carried out for a robot with limited sensing capabilities. In this paper, we generalize such a model for multisensory inputs, creating a multidimensional world representation directly based on sensor readings. Given a three-dimensional case study, we highlight how this model provides a compact and elegant representation, embodying features that are extremely useful for modeling uncertainty and decision making. Moreover, the model makes it possible to naturally define query operators to inspect any world state, whose answers quantify the robot's degree of belief in that state.
SelfSupervised Learning of a BiologicallyInspired Visual Texture Model ; We develop a model for representing visual texture in a lowdimensional feature space, along with a novel selfsupervised learning objective that is used to train it on an unlabeled database of texture images. Inspired by the architecture of primate visual cortex, the model uses a first stage of oriented linear filters corresponding to cortical area V1, consisting of both rectified units simple cells and pooled phaseinvariant units complex cells. These responses are processed by a second stage analogous to cortical area V2 consisting of convolutional filters followed by halfwave rectification and pooling to generate V2 'complex cell' responses. The second stage filters are trained on a set of unlabeled homogeneous texture images, using a novel contrastive objective that maximizes the distance between the distribution of V2 responses to individual images and the distribution of responses across all images. When evaluated on texture classification, the trained model achieves substantially greater dataefficiency than a variety of deep hierarchical model architectures. Moreover, we show that the learned model exhibits stronger representational similarity to texture responses of neural populations recorded in primate V2 than pretrained deep CNNs.
Deep brain state classification of MEG data ; Neuroimaging techniques have shown to be useful when studying the brain's activity. This paper uses Magnetoencephalography MEG data, provided by the Human Connectome Project HCP, in combination with various deep artificial neural network models to perform brain decoding. More specifically, here we investigate to what extent we can infer the task performed by a subject based on its MEG data. Three models, based on a compact convolutional architecture, a combined convolutional and long short-term memory architecture, as well as a model based on multi-view learning that aims at fusing the outputs of the two stream networks, are proposed and examined. These models exploit the spatio-temporal MEG data for learning new representations that are used to decode the relevant tasks across subjects. In order to capture the most relevant features of the input signals, two attention mechanisms, i.e. self and global attention, are incorporated in all the models. The experimental results of cross-subject multi-class classification on the studied MEG dataset show that the inclusion of attention improves the generalization of the models across subjects.
Parameter identifiability for a profile mixture model of protein evolution ; A Profile Mixture Model is a model of protein evolution, describing sequence data in which sites are assumed to follow many related substitution processes on a single evolutionary tree. The processes depend in part on different amino acid distributions, or profiles, varying over sites in aligned sequences. A fundamental question for any stochastic model, which must be answered positively to justify modelbased inference, is whether the parameters are identifiable from the probability distribution they determine. Here we show that a Profile Mixture Model has identifiable parameters under circumstances in which it is likely to be used for empirical analyses. In particular, for a tree relating 9 or more taxa, both the tree topology and all numerical parameters are generically identifiable when the number of profiles is less than 74.
Emergent behaviors of Cucker-Smale flocks on the hyperboloid ; We study emergent behaviors of Cucker-Smale CS flocks on the hyperboloid $\mathbb{H}^d$ in any dimension. In a recent work [HHKKM], a first-order aggregation model on the hyperboloid was proposed and its emergent dynamics was analyzed in terms of initial configuration and system parameters. In this paper, we are interested in the second-order modeling of Cucker-Smale flocks on the hyperboloid. For this, we derive our second-order model from the abstract CS model on complete and smooth Riemannian manifolds by explicitly calculating the geodesics and parallel transport. Velocity alignment is shown by combining general velocity alignment estimates for the abstract CS model on manifolds with a priori estimates on the second derivative of the energy functional. For the two-dimensional case $\mathbb{H}^2$, similar to the recent result in [AHS], asymptotic flocking admits only two types of asymptotic scenarios: either convergence to a rest state or to a state lying on the same plane (a coplanar state). We also provide several numerical simulations to illustrate the aforementioned dichotomy on the asymptotic dynamics of the hyperboloid CS model on $\mathbb{H}^2$.
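For comparison, the classical Cucker-Smale system in flat Euclidean space, which the hyperboloid model generalizes, reads as follows (standard notation; the manifold version replaces the velocity differences by parallel transport along geodesics).

```latex
\dot{x}_i = v_i, \qquad
\dot{v}_i = \frac{\kappa}{N}\sum_{j=1}^{N} \phi\bigl(\|x_j - x_i\|\bigr)\,(v_j - v_i),
\qquad i = 1,\dots,N,
% \kappa > 0: coupling strength; \phi: nonnegative communication weight function.
```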
Did the Indian lockdown avert deaths ; Within the context of SEIR models, we consider a lockdown that is both imposed and lifted at an early stage of an epidemic. We show that, in these models, although such a lockdown may delay deaths, it eventually does not avert a significant number of fatalities. Therefore, in these models, the efficacy of a lockdown cannot be gauged by simply comparing figures for the deaths at the end of the lockdown with the projected figure for deaths by the same date without the lockdown. We provide a simple but robust heuristic argument to explain why this conclusion should generalize to more elaborate compartmental models. We qualitatively discuss some important effects of a lockdown, which go beyond the scope of simple models, but could cause it to increase or decrease an epidemic's final toll. Given the significance of these effects in India, and the limitations of currently available data, we conclude that simple epidemiological models cannot be used to reliably quantify the impact of the Indian lockdown on fatalities caused by the COVID19 pandemic.
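The qualitative point above is easy to reproduce in a toy SEIR simulation: a transmission rate that is reduced for a short early window and then restored delays the epidemic but barely changes the final toll. The sketch below uses purely illustrative parameter values, not values calibrated to India.

```python
import numpy as np

def seir_deaths(beta_fn, days=600, N=1e6, sigma=1/5, gamma=1/7, ifr=0.005, i0=100):
    """Toy discrete-time SEIR run; returns cumulative deaths as ifr * recovered."""
    S, E, I, R = N - i0, 0.0, float(i0), 0.0
    for t in range(days):
        new_e = beta_fn(t) * S * I / N
        S, E, I, R = S - new_e, E + new_e - sigma * E, I + sigma * E - gamma * I, R + gamma * I
    return ifr * R

no_lockdown = seir_deaths(lambda t: 0.3)
# lockdown imposed at day 20 and lifted at day 60: beta reduced, then restored
with_lockdown = seir_deaths(lambda t: 0.1 if 20 <= t < 60 else 0.3)
print(round(no_lockdown), round(with_lockdown))  # similar final tolls; the peak is merely delayed
```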
Phenomenological model of motility by spatiotemporal modulation of active interactions ; Transport at microscopic length scales is essential in biological systems and various technologies, including microfluidics. Recent experiments achieved selforganized transport phenomena in microtubule active matter using light to modulate motorprotein activity in time and space. Here, we introduce a novel phenomenological model to explain such experiments. Our model, based on spatially modulated particle interactions, reveals a possible mechanism for emergent transport phenomena in lightcontrolled active matter, including motility and contraction. In particular, the model's analytic treatment elucidates the conservation of the center of mass of activated particles as a fundamental mechanism of material transport and demonstrates the necessity of memory for sustained motility. Furthermore, we generalize the model to explain other phenomena, like microtubule asteraster interactions induced by more complicated activation geometries. Our results demonstrate that the model provides a possible foundation for the phenomenological understanding of lightcontrolled active matter, and it will enable the design and optimization of transport protocols for active matter devices.
Long-range multiscalar models at three loops ; We compute the three-loop beta functions of long-range multiscalar models with general quartic interactions. The long-range nature of the models is encoded in a kinetic term with a Laplacian to the power $\zeta$, with $0<\zeta<1$, rendering the computation of Feynman diagrams much harder than in the usual short-range case $\zeta=1$. As a consequence, previous results stopped at two loops, while six-loop results are available for short-range models. We push the renormalization group analysis to three loops, in an $\epsilon=4\zeta-d$ expansion at fixed dimension $d<4$, extensively using the Mellin-Barnes representation of Feynman amplitudes in the Schwinger parametrization. We then specialize the beta functions to various models with different symmetry groups: $O(N)$, $(\mathbb{Z}_2)^N \rtimes S_N$, and $O(N)\times O(M)$. For such models, we compute the fixed points and critical exponents.
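Schematically, the long-range action being renormalized has the form below (assumed notation): the fractional Laplacian makes the quartic couplings marginal at d = 4ζ, which is why the expansion parameter is ε = 4ζ − d at fixed d.

```latex
S[\phi] = \int d^d x \left[
  \tfrac{1}{2}\,\phi_a \,(-\partial^2)^{\zeta}\, \phi_a
  + \tfrac{1}{4!}\,\lambda_{abcd}\,\phi_a\phi_b\phi_c\phi_d
\right],
\qquad 0 < \zeta < 1 .
```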
Federated Learning of User Authentication Models ; Machine learning-based User Authentication UA models have been widely deployed in smart devices. UA models are trained to map input data of different users to highly separable embedding vectors, which are then used to accept or reject new inputs at test time. Training UA models requires having direct access to the raw inputs and embedding vectors of users, both of which are privacy-sensitive information. In this paper, we propose Federated User Authentication FedUA, a framework for privacy-preserving training of UA models. FedUA adopts the federated learning framework to enable a group of users to jointly train a model without sharing the raw inputs. It also allows users to generate their embeddings as random binary vectors, so that, unlike the existing approach of constructing the spread-out embeddings by the server, the embedding vectors are kept private as well. We show our method is privacy-preserving, scalable with the number of users, and allows new users to be added to training without changing the output layer. Our experimental results on the VoxCeleb dataset for speaker verification show that our method reliably rejects data of unseen users at very high true positive rates.
varTestnlme an R package for Variance Components Testing in Linear and Nonlinear Mixed-effects Models ; The issue of variance components testing arises naturally when building mixed-effects models, to decide which effects should be modeled as fixed or random. While tests for fixed effects are available in R for models fitted with lme4, tools are missing when it comes to random effects. The varTestnlme package for R aims at filling this gap. It allows testing whether any subset of the variances and covariances, corresponding to any subset of the random effects, is equal to zero, using the asymptotic properties of the likelihood ratio test statistic. It also offers the possibility to test simultaneously for fixed effects and variance components. It can be used for linear, generalized linear or nonlinear mixed-effects models fitted via lme4, nlme or saemix. Theoretical properties of the likelihood ratio test used are recalled, the numerical methods used to implement the test procedure are detailed, and examples based on different real datasets using different mixed models are provided.
Community Network AutoRegression for High-Dimensional Time Series ; Modeling responses on the nodes of a large-scale network is an important task that arises commonly in practice. This paper proposes a community network vector autoregressive CNAR model, which utilizes the network structure to characterize the dependence and intra-community homogeneity of high-dimensional time series. The CNAR model greatly increases the flexibility and generality of the network vector autoregressive NAR model of Zhu et al. (2017) by allowing heterogeneous network effects across different network communities. In addition, non-community-related latent factors are included to account for unknown cross-sectional dependence. The number of network communities can diverge as the network expands, which leads to estimating a diverging number of model parameters. We obtain a set of stationarity conditions and develop an efficient two-step weighted least-squares estimator. The consistency and asymptotic normality properties of the estimators are established. The theoretical results show that the two-step estimator improves on the one-step estimator by an order of magnitude when the error admits a factor structure. The advantages of the CNAR model are further illustrated on a variety of synthetic and real datasets.
Probabilistic cellular automata for interacting fermionic quantum field theories ; A classical local cellular automaton can describe an interacting quantum field theory for fermions. We construct a simple classical automaton for a particular version of the Thirring model with imaginary coupling. This interacting fermionic quantum field theory obeys a unitary time evolution and shows all properties of quantum mechanics. Classical cellular automata with probabilistic initial conditions admit a description in the formalism of quantum mechanics. Our model exhibits interesting features such as spontaneous symmetry breaking or solitons. The same model can be formulated as a generalized Ising model. This Euclidean lattice model can be investigated by standard techniques of statistical physics such as Monte Carlo simulations. Our model is an example of how quantum mechanics emerges from classical statistics.
GinzburgLandau amplitude equation for nonlinear nonlocal models ; Regular spatial structures emerge in a wide range of different dynamics characterized by local andor nonlocal coupling terms. In several research fields this has spurred the study of many models, which can explain pattern formation. The modulations of patterns, occurring on long spatial and temporal scales, can not be captured by linear approximation analysis. Here, we show that, starting from a general model with long range couplings displaying patterns, the spatiotemporal evolution of large scale modulations at the onset of instability is ruled by the wellknown GinzburgLandau equation, independently of the details of the dynamics. Hence, we demonstrate the validity of such equation in the description of the behavior of a wide class of systems. We introduce a novel mathematical framework that is also able to retrieve the analytical expressions of the coefficients appearing in the GinzburgLandau equation as functions of the model parameters. Such framework can include higher order nonlocal interactions and has much larger applicability than the model considered here, possibly including pattern formation in models with very different physical features.
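For concreteness, the generic form of the Ginzburg-Landau amplitude equation referred to above is reproduced below; the coefficient names are illustrative, and the paper's contribution is precisely the derivation of such coefficients from the parameters of the nonlocal model.

```latex
\partial_T A \;=\; \sigma A \;+\; \xi^2\, \partial_X^2 A \;-\; g\,|A|^2 A ,
% A(X,T): slowly varying amplitude of the pattern on long space/time scales;
% \sigma, \xi^2, g: (possibly complex) coefficients fixed by the underlying model.
```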
Consistency of Cubic Galileon Cosmology Model-Independent Bounds from Background Expansion and Perturbative Analyses ; We revisit the cosmological dynamics of the cubic Galileon model in light of the recently proposed model-independent analyses of the Pantheon supernova data. At the background level, it is shown to be compatible with data and preferred over standard quintessence models. Furthermore, the model is shown to be consistent with the trans-Planckian censorship conjecture as well as other Swampland conjectures. It is shown that for the given parametrization, the model fails to satisfy the bounds on the reconstructed growth index derived from the Pantheon data set at the level of linear perturbations.
Unifying Holographic Inflation with Holographic Dark Energy a Covariant Approach ; In the present paper, we use the holographic approach to describe the early-time acceleration and the late-time acceleration eras of our Universe in a unified manner. Such a "holographic unification" is found to have a correspondence with various higher curvature cosmological models with or without matter fields. The corresponding holographic cutoffs are determined in terms of the particle horizon and its derivatives, or the future horizon and its derivatives. As a result, the holographic energy density we propose is able to merge various cosmological epochs of the Universe from a holographic point of view. We find the holographic correspondence of several F(R) gravity models, including axion-F(R) gravity models, of several Gauss-Bonnet F(G) models and finally of F(T) models, and in each case we demonstrate that it is possible to describe in a unified way inflation and late-time acceleration in the context of the same holographic model.
The Effect of Multiple Access Categories on the MAC Layer Performance of IEEE 802.11p ; The enhanced distributed channel access EDCA mechanism enables IEEE 802.11p to accommodate differential quality of service QoS levels in vehicle-to-vehicle V2V communications, through four access categories ACs. This paper presents a multidimensional discrete-time Markov chain DTMC based model to study the effect of the parallel operation of the ACs on the medium access control MAC layer performance of ITS-G5 IEEE 802.11p. The overall model consists of four queue models with their respective traffic generators, which are appropriately linked with the DTMCs modeling the operation of each AC. Closed-form solutions for the steady-state probabilities of the models are obtained, which are then utilized to derive expressions for key performance indicators at the MAC layer. An application to a highway scenario is presented to draw insights on the performance. The results show how the performance measures vary among ACs according to their priority levels, and emphasize the importance of analytical modeling of the parallel operation of all four ACs.
Mono vs Multilingual Transformer-based Models a Comparison across Several Language Tasks ; BERT Bidirectional Encoder Representations from Transformers and ALBERT A Lite BERT are methods for pre-training language models which can later be fine-tuned for a variety of Natural Language Understanding tasks. These methods have been applied to a number of such tasks, mostly in English, achieving results that outperform the state-of-the-art. In this paper, our contribution is twofold. First, we make available our trained BERT and ALBERT models for Portuguese. Second, we compare our monolingual and the standard multilingual models using experiments in semantic textual similarity, recognizing textual entailment, textual category classification, sentiment analysis, offensive comment detection, and fake news detection, to assess the effectiveness of the generated language representations. The results suggest that both monolingual and multilingual models are able to achieve state-of-the-art results and that the advantage of training a single language model, if any, is small.
The Sensitivity of Power System Expansion Models ; Power system expansion models are a widely used tool for planning power systems, especially considering the integration of large shares of renewable resources. The backbone of these models is an optimization problem, which depends on a number of economic and technical parameters. Although these parameters contain significant uncertainties, the sensitivity of power system models to these uncertainties is barely investigated. In this work, we introduce a novel method to quantify the sensitivity of power system models to different model parameters, based on measuring the additional cost arising from misallocating generation capacities. The value of this method is proven by three prominent test cases: the definition of capital cost, different weather periods, and different spatial and temporal resolutions. We find that the model is most sensitive to the temporal resolution. Furthermore, we explain why the spatial resolution is of minor importance and why the underlying weather data should be chosen carefully.
Jacobi sigma models ; We introduce a twodimensional sigma model associated with a Jacobi manifold. The model is a generalisation of a Poisson sigma model providing a topological open string theory. In the Hamiltonian approach first class constraints are derived, which generate gauge invariance of the model under diffeomorphisms. The reduced phase space is finitedimensional. By introducing a metric tensor on the target, a nontopological sigma model is obtained, yielding a Polyakov action with metric and Bfield, whose target space is a Jacobi manifold.
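For orientation, the topological Poisson sigma model that this construction generalizes has the well-known action below (standard notation); the Jacobi case adds further terms tied to the extra vector field of the Jacobi structure, which are not reproduced here.

```latex
S_{\mathrm{PSM}} = \int_{\Sigma} \Bigl( \eta_i \wedge dX^i
  + \tfrac{1}{2}\,\Pi^{ij}(X)\, \eta_i \wedge \eta_j \Bigr),
% X: \Sigma \to M the sigma-model field, \eta a 1-form on \Sigma valued in X^* T^*M,
% \Pi the Poisson bivector on the target M.
```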
Dynamical analysis of cosmological models with nonAbelian gauge vector fields ; In this paper we study some models where non-Abelian gauge vector fields endowed with an SU(2) group representation are the unique source of inflation and dark energy. These models were first introduced under the names of gaugeflation and gaugessence, respectively. Although several realizations of these models have been discussed, not all available parameters and initial conditions are known. In this work, we use a dynamical system approach to find the full parameter space of the massive version of each model. In particular, we find that the inclusion of the mass term increases the length of the inflationary period. Additionally, the mass term implies new behaviors for the equation of state of dark energy, allowing to distinguish this from other prototypical models of accelerated expansion. We show that an axially symmetric gauge field can support an anisotropic accelerated expansion within the observational bounds.
Tuning the topology of p-wave superconductivity in an analytically solvable two-band model ; We introduce and solve a two-band model of spinless fermions with $p_x$-wave pairing on a square lattice. The model reduces to the well-known extended Harper-Hofstadter model with half a flux quantum per plaquette and to weakly coupled Kitaev chains in two respective limits. We show that its phase diagram contains a topologically nontrivial weak pairing phase as well as a trivial strong pairing phase as the ratio of the pairing amplitude and hopping is tuned. Introducing periodic driving to the model, we observe a cascade of Floquet phases with well defined quasienergy gaps and featuring chiral Majorana edge modes at the zero gap or the $\pi$ gap, or both. Dynamical topological invariants are obtained to characterize each phase and to explain the emergence of edge modes in the anomalous phase where all the quasienergy bands have zero Chern number. The analytical solution is achieved by exploiting a generalized mirror symmetry of the model, so that the effective Hamiltonian is decomposed into that of a spin-1/2 in a magnetic field, and the loop unitary operator becomes spin rotations. We further show that the dynamical invariants manifest as Hopf linking numbers.
Intermittent Demand Forecasting with Renewal Processes ; Intermittency is a common and challenging problem in demand forecasting. We introduce a new, unified framework for building intermittent demand forecasting models, which incorporates and allows us to generalize existing methods in several directions. Our framework is based on extensions of well-established model-based methods to discrete-time renewal processes, which can parsimoniously account for patterns such as aging, clustering and quasi-periodicity in demand arrivals. The connection to discrete-time renewal processes allows not only for a principled extension of Croston-type models, but also for a natural inclusion of neural network based models, by replacing exponential smoothing with a recurrent neural network. We also demonstrate that modeling continuous-time demand arrivals, i.e., with a temporal point process, is possible via a trivial extension of our framework. This leads to more flexible modeling in scenarios where data of individual purchase orders are directly available with granular timestamps. Complementing this theoretical advancement, we demonstrate the efficacy of our framework for forecasting practice via an extensive empirical study on standard intermittent demand data sets, in which we report predictive accuracy in a variety of scenarios that compares favorably to the state of the art.
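As a baseline for the Croston-type models mentioned above, here is a minimal sketch of Croston's classical method, which separately smooths the nonzero demand sizes and the intervals between them; the parameter value and function interface are illustrative only.

```python
def croston(demand, alpha=0.1):
    """Croston's method sketch: returns the one-step-ahead demand-rate forecast
    z / p, where z smooths nonzero demand sizes and p smooths inter-demand intervals."""
    z = p = None
    periods_since = 0
    for d in demand:
        periods_since += 1
        if d > 0:
            if z is None:
                z, p = float(d), float(periods_since)   # initialise on first demand
            else:
                z += alpha * (d - z)                     # smooth demand size
                p += alpha * (periods_since - p)         # smooth inter-demand interval
            periods_since = 0
    return 0.0 if z is None else z / p

print(croston([0, 0, 3, 0, 0, 0, 2, 0, 1, 0, 0, 4]))  # sparse, intermittent series
```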
Lifelong Language Knowledge Distillation ; It is challenging to perform lifelong language learning LLL on a stream of different tasks without any performance degradation compared to the multi-task counterparts. To address this issue, we present Lifelong Language Knowledge Distillation L2KD, a simple but efficient method that can be easily applied to existing LLL architectures in order to mitigate the degradation. Specifically, when the LLL model is trained on a new task, we assign a teacher model to first learn the new task, and pass the knowledge to the LLL model via knowledge distillation. Therefore, the LLL model can better adapt to the new task while keeping the previously learned knowledge. Experiments show that the proposed L2KD consistently improves previous state-of-the-art models, and the degradation compared to multi-task models in LLL tasks is well mitigated for both sequence generation and text classification tasks.
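The distillation step can be summarized by the generic knowledge-distillation objective below; λ and the temperature T are illustrative hyperparameters, and the paper's exact word-level or sequence-level formulation may differ.

```latex
\mathcal{L}(\theta_S) \;=\; (1-\lambda)\,\mathrm{CE}\bigl(y,\, p_S\bigr)
  \;+\; \lambda\, T^2\, \mathrm{KL}\!\left(
  \mathrm{softmax}(z_T / T)\,\big\|\,\mathrm{softmax}(z_S / T)\right),
% z_T, z_S: teacher and student logits; p_S: student predictive distribution.
```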
Splitting Gaussian Process Regression for Streaming Data ; Gaussian processes offer a flexible kernel method for regression. While Gaussian processes have many useful theoretical properties and have proven practically useful, they suffer from poor scaling in the number of observations. In particular, the cubic time complexity of updating standard Gaussian process models makes them generally unsuitable for application to streaming data. We propose an algorithm for sequentially partitioning the input space and fitting a localized Gaussian process to each disjoint region. The algorithm is shown to have superior time and space complexity to existing methods, and its sequential nature permits application to streaming data. The algorithm constructs a model for which the time complexity of updating is tightly bounded above by a pre-specified parameter. To the best of our knowledge, the model is the first local Gaussian process regression model to achieve linear memory complexity. Theoretical continuity properties of the model are proven. We demonstrate the efficacy of the resulting model on multidimensional regression tasks for streaming data.
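A minimal sketch of the partition-and-fit idea is given below: recursively split the input space (here by an axis-aligned median split, chosen only for illustration) until each region holds at most a fixed number of points, then fit one local GP per region. The splitting rule, class name, and point cap are assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

class SplitGP:
    """Fit one local GP per disjoint axis-aligned region of the input space."""
    def __init__(self, max_points=200):
        self.max_points, self.regions = max_points, []   # list of (low, high, gp)

    def fit(self, X, y):
        self._split(X, y, X.min(0) - 1e-9, X.max(0) + 1e-9)
        return self

    def _split(self, X, y, low, high):
        j = int(np.argmax(high - low))                   # split the widest dimension
        m = np.median(X[:, j])
        left = X[:, j] <= m
        if len(X) <= self.max_points or left.all() or not left.any():
            self.regions.append((low, high, GaussianProcessRegressor().fit(X, y)))
            return
        high_left, low_right = high.copy(), low.copy()
        high_left[j] = low_right[j] = m
        self._split(X[left], y[left], low, high_left)
        self._split(X[~left], y[~left], low_right, high)

    def predict(self, x):
        for low, high, gp in self.regions:               # locate the region containing x
            if np.all(x >= low) and np.all(x <= high):
                return float(gp.predict(x.reshape(1, -1))[0])

model = SplitGP().fit(np.random.rand(1000, 2), np.random.rand(1000))
print(model.predict(np.array([0.3, 0.7])))
```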
Like hiking? You probably enjoy nature Persona-grounded Dialog with Commonsense Expansions ; Existing persona-grounded dialog models often fail to capture simple implications of given persona descriptions, something which humans are able to do seamlessly. For example, state-of-the-art models cannot infer that interest in hiking might imply love for nature or longing for a break. In this paper, we propose to expand available persona sentences using existing commonsense knowledge bases and paraphrasing resources to imbue dialog models with access to an expanded and richer set of persona descriptions. Additionally, we introduce fine-grained grounding on personas by encouraging the model to make a discrete choice among persona sentences while synthesizing a dialog response. Since such a choice is not observed in the data, we model it using a discrete latent random variable and use variational learning to sample from hundreds of persona expansions. Our model outperforms competitive baselines on the PersonaChat dataset in terms of dialog quality and diversity while achieving persona-consistent and controllable dialog generation.
Further results on the estimation of dynamic panel logit models with fixed effects ; Kitazawa (2013, 2016) showed that the common parameters in the panel logit AR(1) model with strictly exogenous covariates and fixed effects are estimable at the root-n rate using the Generalized Method of Moments. Honoré and Weidner (2020) extended his results in various directions: they found additional moment conditions for the logit AR(1) model and also considered estimation of logit AR(p) models with p>1. In this note we prove a conjecture in their paper and show that, for given values of the initial condition, the covariates and the common parameters, $2^T-2T$ of their moment functions for the logit AR(1) model are linearly independent and span the set of valid moment functions, which is a $(2^T-2T)$-dimensional linear subspace of the $2^T$-dimensional vector space of real-valued functions over the outcomes $y \in \{0,1\}^T$. We also prove that when $p=2$ and $T \in \{3,4,5\}$, there are, respectively, $2^T-4(T-1)$ and $2^T-(3T-2)$ linearly independent moment functions for the panel logit AR(2) models with and without covariates.
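For reference, the dynamic panel logit AR(1) model in question has the standard form below (standard notation, with individual fixed effect α_i and logistic CDF Λ).

```latex
P\bigl(y_{it}=1 \mid y_i^{t-1}, x_i, \alpha_i\bigr)
  = \Lambda\bigl(\gamma\, y_{i,t-1} + x_{it}'\beta + \alpha_i\bigr),
\qquad
\Lambda(u) = \frac{e^{u}}{1+e^{u}}, \qquad t = 1,\dots,T .
```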
Is Plugin Solver Sample-Efficient for Feature-based Reinforcement Learning? ; It is believed that a model-based approach for reinforcement learning RL is the key to reducing sample complexity. However, the understanding of the sample optimality of model-based RL is still largely missing, even for the linear case. This work considers the sample complexity of finding an $\epsilon$-optimal policy in a Markov decision process MDP that admits a linear additive feature representation, given only access to a generative model. We solve this problem via a plug-in solver approach, which builds an empirical model and plans in this empirical model via an arbitrary plug-in solver. We prove that under the anchor-state assumption, which implies implicit non-negativity in the feature space, the minimax sample complexity of finding an $\epsilon$-optimal policy in a $\gamma$-discounted MDP is $O(K/((1-\gamma)^3\epsilon^2))$, which only depends on the dimensionality $K$ of the feature space and has no dependence on the state or action space. We further extend our results to a relaxed setting where anchor states may not exist and show that a plug-in approach can be sample efficient as well, providing a flexible approach to designing model-based algorithms for RL.
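The linear additive feature representation assumed above is usually written as in the display below (notation assumed): the transition kernel is a K-term combination of known features of the state-action pair, so learning the model reduces to learning K unknown measures.

```latex
P(s' \mid s, a) \;=\; \sum_{k=1}^{K} \phi_k(s,a)\, \psi_k(s'),
% \phi_k: known feature functions of (s,a); \psi_k: unknown (signed) measures over states.
% One common formalization of the anchor-state assumption posits state-action pairs whose
% feature vectors are canonical basis vectors, yielding the implicit nonnegativity above.
```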
Controllable Pareto MultiTask Learning ; A multitask learning MTL system aims at solving multiple related tasks at the same time. With a fixed model capacity, the tasks would be conflicted with each other, and the system usually has to make a tradeoff among learning all of them together. For many realworld applications where the tradeoff has to be made online, multiple models with different preferences over tasks have to be trained and stored. This work proposes a novel controllable Pareto multitask learning framework, to enable the system to make realtime tradeoff control among different tasks with a single model. To be specific, we formulate the MTL as a preferenceconditioned multiobjective optimization problem, with a parametric mapping from preferences to the corresponding tradeoff solutions. A single hypernetworkbased multitask neural network is built to learn all tasks with different tradeoff preferences among them, where the hypernetwork generates the model parameters conditioned on the preference. For inference, MTL practitioners can easily control the model performance based on different tradeoff preferences in realtime. Experiments on different applications demonstrate that the proposed model is efficient for solving various MTL problems.
Humaninterpretable model explainability on highdimensional data ; The importance of explainability in machine learning continues to grow, as both neuralnetwork architectures and the data they model become increasingly complex. Unique challenges arise when a model's input features become high dimensional on one hand, principled modelagnostic approaches to explainability become too computationally expensive; on the other, more efficient explainability algorithms lack natural interpretations for general users. In this work, we introduce a framework for humaninterpretable explainability on highdimensional data, consisting of two modules. First, we apply a semantically meaningful latent representation, both to reduce the raw dimensionality of the data, and to ensure its human interpretability. These latent features can be learnt, e.g. explicitly as disentangled representations or implicitly through imagetoimage translation, or they can be based on any computable quantities the user chooses. Second, we adapt the Shapley paradigm for modelagnostic explainability to operate on these latent features. This leads to interpretable model explanations that are both theoretically controlled and computationally tractable. We benchmark our approach on synthetic data and demonstrate its effectiveness on several imageclassification tasks.
On the Importance of Domain Model Configuration for Automated Planning Engines ; The development of domain-independent planners within the AI Planning community is leading to off-the-shelf technology that can be used in a wide range of applications. Moreover, it allows a modular approach in which planners and domain knowledge are modules of larger software applications, which facilitates substitutions or improvements of individual modules without changing the rest of the system. This approach also supports the use of reformulation and configuration techniques, which transform how a model is represented in order to improve the efficiency of plan generation. In this article, we investigate how the performance of domain-independent planners is affected by domain model configuration, i.e., the order in which elements are listed in the model, particularly in the light of planner comparisons. We then introduce techniques for the online and offline configuration of domain models, and we analyse the impact of domain model configuration on other reformulation approaches, such as macros.
Federated Learning in Adversarial Settings ; Federated Learning enables entities to collaboratively learn a shared prediction model while keeping their training data locally. It prevents data collection and aggregation and, therefore, mitigates the associated privacy risks. However, it still remains vulnerable to various security attacks where malicious participants aim at degrading the generated model, inserting backdoors, or inferring other participants' training data. This paper presents a new federated learning scheme that provides different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy. Our scheme uses biased quantization of model updates and hence is bandwidth efficient. It is also robust against state-of-the-art backdoor as well as model degradation attacks even when a large proportion of the participant nodes are malicious. We propose a practical differentially private extension of this scheme which protects the whole dataset of participating entities. We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements, but it is less robust against model degradation and backdoor attacks. This suggests a possible fundamental trade-off between differential privacy and robustness.
Matter and dark matter asymmetry from a composite Higgs model ; We propose a low scale leptogenesis scenario in the framework of composite Higgs models supplemented with singlet heavy neutrinos. One of the neutrinos can also be considered as a dark matter candidate whose stability is guaranteed by a discrete $\mathbb{Z}_2$ symmetry of the model. In the spectrum of the strongly coupled system, bound states heavier than the pseudo Nambu-Goldstone Higgs boson can exist. Due to the decay of these states to heavy right-handed neutrinos, an asymmetry in the visible and dark sectors is simultaneously generated. The resulting asymmetry is transferred to the standard model leptons which interact with visible right-handed neutrinos. We show that the sphaleron-induced baryon asymmetry can be provided at the TeV scale for resonant bound states. Depending on the coupling strength of the dark neutrino interaction, a viable range of the dark matter mass is allowed in the model. Furthermore, taking into account the effective interactions of dark matter, we discuss low-energy processes and experiments.
Statistical Analysis of the LMS Algorithm for Proper and Improper Gaussian Processes ; The LMS algorithm is one of the most widely used techniques in adaptive filtering. Accurate modeling of the algorithm in various circumstances is paramount to achieving an efficient adaptive Wiener filter design process. In the recent decades, concerns have been raised on studying improper signals and providing an accurate model of the LMS algorithm for both proper and improper signals. Other models for the LMS algorithm for improper signals available in the scientific literature either make use of the independence assumptions regarding the desired signal and the input signal vector, or are exclusive to proper signals; it is shown that by not considering these assumptions a more general model can be derived. In the presented simulations it is possible to verify that the model introduced in this paper outperforms the other available models.
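For readers who want the algorithm itself, the standard real-valued LMS recursion is sketched below; the paper's analysis concerns complex proper and improper signals, so this real-valued toy version is only meant to fix the update rule w <- w + mu * e * x.

```python
import numpy as np

def lms(x, d, order=4, mu=0.01):
    """Standard LMS adaptive filter sketch: predict d[n] from the last `order`
    input samples and update the weights with the instantaneous gradient."""
    w = np.zeros(order)
    err = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]   # most recent input samples, newest first
        err[n] = d[n] - w @ u              # a-priori estimation error
        w = w + mu * err[n] * u            # stochastic-gradient (LMS) update
    return w, err

# Example: identify a short FIR channel from noisy observations
x = np.random.randn(5000)
d = np.convolve(x, [0.5, -0.3, 0.2, 0.1])[:5000] + 0.01 * np.random.randn(5000)
w, err = lms(x, d)
print(w)  # should approach the channel taps [0.5, -0.3, 0.2, 0.1]
```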
Linking Entities to Unseen Knowledge Bases with Arbitrary Schemas ; In entity linking, mentions of named entities in raw text are disambiguated against a knowledge base KB. This work focuses on linking to unseen KBs that do not have training data and whose schema is unknown during training. Our approach relies on methods to flexibly convert entities from arbitrary KBs with several attributevalue pairs into flat strings, which we use in conjunction with stateoftheart models for zeroshot linking. To improve the generalization of our model, we use two regularization schemes based on shuffling of entity attributes and handling of unseen attributes. Experiments on English datasets where models are trained on the CoNLL dataset, and tested on the TACKBP 2010 dataset show that our models outperform baseline models by over 12 points of accuracy. Unlike prior work, our approach also allows for seamlessly combining multiple training datasets. We test this ability by adding both a completely different dataset Wikia, as well as increasing amount of training data from the TACKBP 2010 training set. Our models perform favorably across the board.
Concealed Data Poisoning Attacks on NLP Models ; Adversarial attacks alter NLP model predictions by perturbing testtime inputs. However, it is much less understood whether, and how, predictions can be manipulated with small, concealed changes to the training data. In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input. For instance, we insert 50 poison examples into a sentiment model's training set that causes the model to frequently predict Positive whenever the input contains James Bond. Crucially, we craft these poison examples using a gradientbased procedure so that they do not mention the trigger phrase. We also apply our poison attack to language modeling Apple iPhone triggers negative generations and machine translation iced coffee mistranslated as hot coffee. We conclude by proposing three defenses that can mitigate our attack at some cost in prediction accuracy or extra human annotation.