Neural Tensor Factorization ; Neural collaborative filtering (NCF) and recurrent recommender networks (RRN) have been successful in modeling user-item relational data. However, they are limited by their assumption of static or strictly sequential relational data: they account neither for users' preferences evolving over time nor for changes in the underlying factors that drive the user-item relationship over time. We address these limitations by proposing a Neural Tensor Factorization (NTF) model for predictive tasks on dynamic relational data. The NTF model generalizes conventional tensor factorization from two perspectives. First, it leverages the long short-term memory (LSTM) architecture to characterize the multi-dimensional temporal interactions on relational data. Second, it incorporates a multi-layer perceptron structure for learning the non-linearities between different latent factors. Our extensive experiments demonstrate significant improvements in rating prediction and link prediction on dynamic relational data by our NTF model over both neural-network-based factorization models and other traditional methods.
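A minimal PyTorch-style sketch of this idea (hypothetical names and dimensions, not the authors' code): user, item, and temporal factors are embedded, an LSTM summarizes the temporal factors, and an MLP replaces the multilinear product of conventional tensor factorization.

```python
import torch
import torch.nn as nn

class NTFSketch(nn.Module):
    def __init__(self, n_users, n_items, n_steps, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.time_emb = nn.Embedding(n_steps, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)  # temporal dynamics
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))      # non-linear interaction

    def forward(self, users, items, time_windows):
        # time_windows: (batch, window) indices of the preceding time steps
        _, (h, _) = self.lstm(self.time_emb(time_windows))
        z = torch.cat([self.user_emb(users), self.item_emb(items), h[-1]], dim=-1)
        return self.mlp(z).squeeze(-1)  # predicted rating / link score
```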
Constraining the Dynamics of Deep Probabilistic Models ; We introduce a novel generative formulation of deep probabilistic models implementing soft constraints on their function dynamics. In particular, we develop a flexible methodological framework where the modeled functions and derivatives of a given order are subject to inequality or equality constraints. We then characterize the posterior distribution over model and constraint parameters through stochastic variational inference. As a result, the proposed approach allows for accurate and scalable uncertainty quantification on the predictions and on all parameters. We demonstrate the application of equality constraints in the challenging problem of parameter inference in ordinary differential equation models, while we showcase the application of inequality constraints on the problem of monotonic regression of count data. The proposed approach is extensively tested in several experimental settings, leading to highly competitive results in challenging modeling applications, while offering high expressiveness, flexibility and scalability.
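One simple way to picture such a soft constraint (an illustrative penalty form assumed here, not the paper's full variational treatment) is a differentiable penalty that discourages negative derivatives at a set of test points, which can be added to any training objective:

```python
import torch

def monotonicity_penalty(model, x_test, weight=10.0):
    # Soft inequality constraint f'(x) >= 0: penalize negative derivatives
    # at the test points; create_graph lets the penalty be backpropagated.
    x = x_test.clone().requires_grad_(True)
    y = model(x).sum()
    (dy_dx,) = torch.autograd.grad(y, x, create_graph=True)
    return weight * torch.relu(-dy_dx).mean()  # nonzero only where f' < 0
```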
Variational Autoencoders for Collaborative Filtering ; We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of the linear factor models which still largely dominate collaborative filtering research. We introduce a generative model with a multinomial likelihood and use Bayesian inference for parameter estimation. Despite widespread use in language modeling and economics, the multinomial likelihood has received less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.
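The heart of the approach can be sketched as the following training objective (a minimal reconstruction under assumed shapes, not the authors' code): a multinomial log-likelihood over items plus a KL term whose weight beta is annealed from 0 toward its final value.

```python
import torch
import torch.nn.functional as F

def multi_vae_loss(logits, x, mu, logvar, beta):
    # Multinomial likelihood: log-softmax over items, weighted by click counts x.
    log_softmax = F.log_softmax(logits, dim=-1)
    neg_ll = -(log_softmax * x).sum(dim=-1).mean()
    # KL divergence between q(z|x) = N(mu, sigma^2) and the standard normal prior.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return neg_ll + beta * kl  # beta < 1 down-weights the prior (annealing)
```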
On the exact solvability of the anisotropic central spin model: An operator approach ; Using an operator approach based on a commutator scheme that has previously been applied to Richardson's reduced BCS model and the inhomogeneous Dicke model, we obtain general exact-solvability requirements for an anisotropic central spin model with XXZ-type hyperfine coupling between the central spin and the spin bath, without any prior knowledge of the integrability of the model. We outline the basic steps of the operator approach and pedagogically summarize them into two Lemmas and two Constraints. Through a step-by-step construction of the eigenproblem, we show that the condition $g_j'^2 - g_j^2 = c$ naturally arises for the model to be exactly solvable, where $c$ is a constant independent of the bath-spin index $j$, and $g_j$ and $g_j'$ are the longitudinal and transverse hyperfine interactions, respectively. The obtained conditions and the resulting Bethe ansatz equations are consistent with those in the previous literature.
Numerical Investigation on Local Nonequilibrium Flows Using a Diatomic Nonlinear Constitutive Model ; The linear Navier-Stokes-Fourier (NSF) constitutive relations are capable of simulating near-continuum flows, but fail to describe flows far removed from local equilibrium. In this paper, a diatomic nonlinear model named nonlinear coupled constitutive relations (NCCR), derived from Eu's generalized hydrodynamics and proposed by Myong, is presented as an alternative for simulating such hypersonic gas flows, with the goal of recovering the NSF solutions in the continuum regime while being superior in the transition regime. To guarantee stable computation, a reliable and efficient coupled algorithm is proposed for this diatomic nonlinear constitutive model. A constitutive-curve analysis is carried out in detail to compare this coupled algorithm with Myong's previous algorithm. Local flow regions are investigated carefully in hypersonic flows past a cone tip, a hollow cylinder-flare and an HTV-type vehicle. The convergent solutions of the NCCR model are compared with NSF and DSMC calculations and with experiment. It is demonstrated that the NCCR model works as efficiently as the NSF model in the continuum regime, while agreeing more closely with DSMC and experiment than NSF in nonequilibrium flows. The discrepancies in flow field and surface parameters imply a potential for remedying the NSF deficiency in local nonequilibrium regions.
DeePCG: constructing coarse-grained models via deep neural networks ; We introduce a general framework for constructing coarse-grained potential models without ad hoc approximations such as limiting the potential to two- and/or three-body contributions. The scheme, called Deep Coarse-Grained Potential (DeePCG), exploits a carefully crafted neural network to construct a many-body coarse-grained potential. The network is trained with full atomistic data in a way that preserves the natural symmetries of the system. The resulting model is very accurate and can be used to sample the configurations of the coarse-grained variables in a much faster way than with the original atomistic model. As an application we consider liquid water and use the oxygen coordinates as the coarse-grained variables, starting from a full atomistic simulation of this system at the ab initio molecular dynamics level. We found that the two-body, three-body and higher-order oxygen correlation functions produced by the coarse-grained and full atomistic models agree very well with each other, illustrating the effectiveness of the DeePCG model on a rather challenging task.
A Generalized Discrete-Time Altafini Model ; A discrete-time modulus consensus model is considered in which the interaction among a family of networked agents is described by a time-dependent gain graph whose vertices correspond to agents and whose arcs are assigned complex numbers from a cyclic group. The limiting behavior of the model is studied using a graphical approach. It is shown that, under appropriate connectedness, a certain type of clustering will be reached exponentially fast for almost all initial conditions if and only if the sequence of gain graphs is repeatedly jointly structurally balanced corresponding to that type of clustering, where the number of clusters is at most the order of the cyclic group. It is also shown that the model will reach a consensus asymptotically at zero if the sequence of gain graphs is repeatedly jointly strongly connected and structurally unbalanced. In the special case when the cyclic group is of order two, the model simplifies to the so-called Altafini model, whose gain graph is simply a signed graph.
Attractor and Reheating in a Model with Non-Canonical Scalar Fields ; We consider two non-canonical scalar fields (tachyon and DBI) with an E-model type of potential. We study cosmological inflation in these models to find possible $\alpha$-attractors. We show that, similar to the canonical scalar field case, in both the tachyon and DBI models there is a value of the scalar spectral index in the small-$\alpha$ limit which is just a function of the e-folds number. However, the value of $n_s$ in the DBI model is somewhat different from the other ones. We also compare the results with Planck 2015 TT, TE, EE+lowP data. The reheating phase after inflation is studied in these models, which gives some more constraints on the models' parameters.
Probabilistically Safe Robot Planning with Confidence-Based Human Predictions ; In order to safely operate around humans, robots can employ predictive models of human motion. Unfortunately, these models cannot capture the full complexity of human behavior and necessarily introduce simplifying assumptions. As a result, predictions may degrade whenever the observed human behavior departs from the assumed structure, which can have negative implications for safety. In this paper, we observe that how rational human actions appear under a particular model can be viewed as an indicator of that model's ability to describe the human's current motion. By reasoning about this model confidence in a real-time Bayesian framework, we show that the robot can very quickly modulate its predictions to become more uncertain when the model performs poorly. Building on recent work in provably safe trajectory planning, we leverage these confidence-aware human motion predictions to generate assured autonomous robot motion. Our new analysis combines worst-case tracking error guarantees for the physical robot with probabilistic time-varying human predictions, yielding a quantitative, probabilistic safety certificate. We demonstrate our approach with a quadcopter navigating around a human.
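The model-confidence idea admits a compact illustration (hypothetical Boltzmann-rational observation model; variable names are ours, not the paper's): a discrete Bayesian belief over a confidence parameter shifts toward low confidence when observed actions look irrational under the model.

```python
import numpy as np

def update_confidence(belief, betas, action_utilities, taken):
    # P(action | beta) via a Boltzmann policy: exp(beta*U(a)) / sum_a' exp(beta*U(a'))
    likelihoods = np.array([
        np.exp(b * action_utilities[taken]) / np.exp(b * action_utilities).sum()
        for b in betas
    ])
    posterior = belief * likelihoods
    return posterior / posterior.sum()

betas = np.array([0.1, 1.0, 10.0])   # candidate model-confidence levels
belief = np.ones(3) / 3              # uniform prior over confidence
U = np.array([0.0, 1.0])             # model's utility for two candidate actions
# Observing the "irrational" action shifts mass toward low confidence:
belief = update_confidence(belief, betas, U, taken=0)
```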
A first-principles global multiphase equation of state for hydrogen ; We present and discuss a wide-range hydrogen equation of state model based on a consistent set of ab initio simulations including quantum protons and electrons. Both the process of constructing this model and its predictions are discussed in detail. The cornerstones of this work are the specification of simple, physically motivated free energy models, a general multi-parameter/multi-derivative fitting method, and the use of the most accurate simulation methods to date. The resulting equation of state aims for a global range of validity ($T$: 1-$10^9$ K and $V_m$: $10^{-9}$-1 m$^3$/mol), as the models are specifically constructed to reproduce exact thermodynamic and mechanical limits. Our model is for the most part analytic or semi-analytic and is thermodynamically consistent by construction; the problem of interpolating between distinctly different models (often a cause for thermodynamic inconsistencies and spurious discontinuities) is avoided entirely.
Outcome identification in electronic health records using predictions from an enriched Dirichlet process mixture ; We propose a novel semiparametric model for the joint distribution of a continuous longitudinal outcome and the baseline covariates using an enriched Dirichlet process (EDP) prior. This joint model decomposes into a linear mixed model for the outcome given the covariates and marginals for the covariates. The nonparametric EDP prior is placed on the regression and spline coefficients, the error variance, and the parameters governing the predictor space. We predict the outcome at unobserved time points for subjects with data at other time points as well as for new subjects with only baseline covariates. We find improved prediction over mixed models with Dirichlet process (DP) priors when there are a large number of covariates. Our method is demonstrated with electronic health records consisting of initiators of second-generation antipsychotic medications, which are known to increase the risk of diabetes. We use our model to predict laboratory values indicative of diabetes for each individual and assess the incidence of suspected diabetes from the predicted dataset. Our model also serves as a functional clustering algorithm in which subjects are clustered into groups with similar longitudinal trajectories of the outcome over time.
Multilingual Neural Machine Translation with Task-Specific Attention ; Multilingual machine translation addresses the task of translating between multiple source and target languages. We propose task-specific attention models, a simple but effective technique for improving the quality of sequence-to-sequence neural multilingual translation. Our approach seeks to retain as much of the parameter-sharing generalization of NMT models as possible, while still allowing for language-specific specialization of the attention model to a particular language pair or task. Our experiments on four languages of the Europarl corpus show that using a target-specific model of attention provides consistent gains in translation quality for all possible translation directions, compared to a model in which all parameters are shared. We observe improved translation quality even in the extreme low-resource zero-shot translation directions, for which the model never saw explicitly paired parallel data.
Phase transitions in a multi-state majority-vote model on complex networks ; We generalize the original majority-vote (MV) model from two states to arbitrary $p$ states and study the order-disorder phase transitions in such a $p$-state MV model on complex networks. By extensive Monte Carlo simulations and a mean-field theory, we show that for $p \geq 3$ the order of the phase transition is essentially different from the continuous second-order phase transition of the original two-state MV model. Instead, for $p \geq 3$ the model displays a discontinuous first-order phase transition, which is manifested by the appearance of hysteresis near the phase transition. Within the hysteresis loop, the ordered phase and disordered phase coexist, and rare flips between the two phases can be observed due to finite-size fluctuations. Moreover, we investigate the type of phase transition under a slightly modified dynamics [Melo et al., J. Stat. Mech. P11032 (2010)]. We find that the order of the phase transition in the three-state MV model depends on the degree heterogeneity of the network. For $p \geq 4$, both dynamics produce first-order phase transitions.
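A Monte Carlo sweep for such a p-state MV model might look as follows (an illustrative synchronous implementation with uniform tie-breaking; these details are assumptions, not the paper's code): with probability 1-q a node adopts the majority state of its neighbors, and with noise probability q it adopts one of the other states.

```python
import numpy as np

def mv_step(states, neighbors, p, q, rng):
    new = states.copy()  # synchronous sweep for simplicity
    for i, nbrs in enumerate(neighbors):
        counts = np.bincount(states[nbrs], minlength=p)
        majority = rng.choice(np.flatnonzero(counts == counts.max()))  # break ties
        if rng.random() < q:  # with noise probability q, disobey the majority
            new[i] = rng.choice([s for s in range(p) if s != majority])
        else:
            new[i] = majority
    return new

rng = np.random.default_rng(0)
n, p, q = 1000, 3, 0.1
# Random directed 8-neighbor graph (illustrative network)
neighbors = [rng.choice(n, size=8, replace=False) for _ in range(n)]
states = rng.integers(0, p, size=n)
for _ in range(100):
    states = mv_step(states, neighbors, p, q, rng)
```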
Bayesian Inference in Nonparanormal Graphical Models ; Gaussian graphical models have been used to study intrinsic dependence among several variables, but the Gaussianity assumption may be restrictive in many applications. A nonparanormal graphical model is a semiparametric generalization for continuous variables where it is assumed that the variables follow a Gaussian graphical model only after some unknown smooth monotone transformations on each of them. We consider a Bayesian approach to the nonparanormal graphical model by putting priors on the unknown transformations through a random series based on B-splines, where the coefficients are ordered to induce monotonicity. A truncated normal prior leads to partial conjugacy in the model and is useful for posterior simulation using Gibbs sampling. On the underlying precision matrix of the transformed variables, we consider a spike-and-slab prior and use an efficient posterior Gibbs sampling scheme. We use the Bayesian Information Criterion to choose the hyperparameters for the spike-and-slab prior. We present a posterior consistency result on the underlying transformation and the precision matrix. We study the numerical performance of the proposed method through an extensive simulation study and finally apply the proposed method to a real data set.
Online Parallel Portfolio Selection with Heterogeneous Island Model ; We present an online parallel portfolio selection algorithm based on the island model commonly used for the parallelization of evolutionary algorithms. In our case, each of the islands runs a different optimization algorithm. The distributed computation is managed by a central planner which periodically changes the running methods during the execution of the algorithm: less successful methods are removed while new instances of more successful methods are added. We compare different types of planners in the heterogeneous island model among themselves and also to the traditional homogeneous model on a wide set of problems. The tests include experiments with different representations of the individuals and different durations of fitness function evaluations. The results show that heterogeneous models are a more general and universal computational tool compared to homogeneous models.
Advice Complexity of Priority Algorithms ; The priority model of greedy-like algorithms was introduced by Borodin, Nielsen, and Rackoff in 2002. We augment this model by allowing priority algorithms to have access to advice, i.e., side information precomputed by an all-powerful oracle. Obtaining lower bounds in the priority model without advice can be challenging and may involve intricate adversary arguments. Since the priority model with advice is even more powerful, obtaining lower bounds presents additional difficulties. We sidestep these difficulties by developing a general framework of reductions which makes lower bound proofs relatively straightforward and routine. We start by introducing the Pair Matching problem, for which we are able to prove strong lower bounds in the priority model with advice. We develop a template for constructing a reduction from Pair Matching to other problems in the priority model with advice; this part is technically challenging since the reduction needs to define a valid priority function for Pair Matching while respecting the priority function for the other problem. Finally, we apply the template to obtain lower bounds for a number of standard discrete optimization problems.
Beta seasonal autoregressive moving average models ; In this paper we introduce the class of beta seasonal autoregressive moving average ($\beta$SARMA) models for modeling and forecasting time series data that assume values in the standard unit interval. It generalizes the class of beta autoregressive moving average models (Rocha and Cribari-Neto, Test, 2009) by incorporating seasonal dynamics into the model's dynamic structure. Besides introducing the new class of models, we develop parameter estimation, hypothesis testing inference, and diagnostic analysis tools. We also discuss out-of-sample forecasting. In particular, we provide closed-form expressions for the conditional score vector and for the conditional Fisher information matrix. We also evaluate the finite sample performance of the conditional maximum likelihood estimators and white noise tests using Monte Carlo simulations. An empirical application is presented and discussed.
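The conditional likelihood at the core of such models can be sketched as follows (an illustrative parameterization: mean $\mu_t$ via a logit link and a precision parameter $\phi$; the linear predictor eta would collect the AR, MA, and seasonal terms):

```python
import numpy as np
from scipy.special import gammaln, expit

def beta_sarma_loglik(y, eta, phi):
    # y in (0,1); eta: linear predictor from AR/MA/seasonal terms; mu_t = logit^{-1}(eta_t)
    mu = expit(eta)
    a, b = mu * phi, (1 - mu) * phi  # beta(a, b) with a + b = phi
    return np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                  + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))
```

Conditional maximum likelihood estimation would then maximize this quantity over the coefficients entering eta and over phi.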
Identifiability of Gaussian Structural Equation Models with Dependent Errors Having Equal Variances ; In this paper, we prove that some Gaussian structural equation models with dependent errors having equal variances are identifiable from their corresponding Gaussian distributions. Specifically, we prove identifiability for the Gaussian structural equation models that can be represented as Andersson-Madigan-Perlman chain graphs (Andersson et al., 2001). These chain graphs were originally developed to represent independence models. However, they are also suitable for representing causal models with additive noise (Peña, 2016). Our result then implies that these causal models can be identified from observational data alone. Our result generalizes the result by Peters and Bühlmann (2014), who considered independent errors having equal variances. The suitability of the equal error variances assumption should be assessed on a per-domain basis.
Schelling Segregation with Strategic Agents ; Schelling's segregation model is a landmark model in sociology. It shows the counter-intuitive phenomenon that residential segregation between individuals of different groups can emerge even when all involved individuals are tolerant. Although the model is widely studied, no pure game-theoretic version in which rational agents strategically choose their location exists. We close this gap by introducing and analyzing generalized game-theoretic models of Schelling segregation, where the agents can also have individual location preferences. For our models, we investigate the convergence behavior and the efficiency of their equilibria. In particular, we prove guaranteed convergence to an equilibrium in the version which is closest to Schelling's original model. Moreover, we provide tight bounds on the Price of Anarchy.
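A toy version of such best-response dynamics (utilities assumed here to be the fraction of same-type neighbors, ignoring the individual location preferences the paper adds):

```python
import numpy as np

def frac_same(node, t, state, neighbors):
    occ = [state[j] for j in neighbors[node] if state[j] != -1]
    return 1.0 if not occ else sum(s == t for s in occ) / len(occ)

def best_response_pass(state, neighbors, rng):
    changed = False
    for node in rng.permutation(len(state)):
        t = state[node]
        if t == -1:  # empty location
            continue
        empties = [j for j, s in enumerate(state) if s == -1]
        if not empties:
            return changed
        best = max(empties, key=lambda j: frac_same(j, t, state, neighbors))
        if frac_same(best, t, state, neighbors) > frac_same(node, t, state, neighbors):
            state[node], state[best] = -1, t  # agent deviates to a better spot
            changed = True
    return changed

rng = np.random.default_rng(1)
n = 100
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # ring topology
state = rng.choice([-1, 0, 1], size=n, p=[0.2, 0.4, 0.4]).tolist()
for _ in range(1000):  # iterate until no agent wants to move (an equilibrium)
    if not best_response_pass(state, neighbors, rng):
        break
```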
EMPHASIS: An Emotional Phoneme-based Acoustic Model for Speech Synthesis System ; We present EMPHASIS, an emotional phoneme-based acoustic model for a speech synthesis system. EMPHASIS includes a phoneme duration prediction model and an acoustic parameter prediction model. It uses a CBHG-based regression network to model the dependencies between linguistic features and acoustic features. We modify the input and output layer structures of the network to improve performance. For the linguistic features, we apply a feature grouping strategy to enhance emotional and prosodic features. The acoustic parameters are designed to be suitable for the regression task and waveform reconstruction. EMPHASIS can synthesize speech in real-time and generate expressive interrogative and exclamatory speech with high audio quality. EMPHASIS is designed to be a multilingual model and can synthesize Mandarin-English speech for now. In experiments on emotional speech synthesis, it achieves better subjective results than other real-time speech synthesis systems.
Deep CNN Denoiser and Multi-layer Neighbor Component Embedding for Face Hallucination ; Most current face hallucination methods, whether shallow learning-based or deep learning-based, try to learn a relationship model between the Low-Resolution (LR) and High-Resolution (HR) spaces with the help of a training set. They mainly focus on modeling the image prior through either model-based optimization or discriminative inference learning. However, when the input LR face is tiny, the learned prior knowledge is no longer effective and their performance drops sharply. To solve this problem, in this paper we propose a general face hallucination method that can integrate model-based optimization and discriminative inference. In particular, to exploit the model-based prior, a deep Convolutional Neural Network (CNN) denoiser prior is plugged into the super-resolution optimization model with the aid of image-adaptive Laplacian regularization. Additionally, we further develop a high-frequency details compensation method by dividing the face image into facial components and performing face hallucination in a multi-layer neighbor embedding manner. Experiments demonstrate that the proposed method can achieve promising super-resolution results for tiny input LR faces.
Bootstrap Based Inference for Sparse High-Dimensional Time Series Models ; Fitting sparse models to high-dimensional time series is an important area of statistical inference. In this paper we consider sparse vector autoregressive models and develop appropriate bootstrap methods to infer properties of such processes. Our bootstrap methodology generates pseudo time series using a model-based bootstrap procedure which involves an estimated, sparsified version of the underlying vector autoregressive model. Inference is performed using so-called de-sparsified or de-biased estimators of the autoregressive model parameters. We derive the asymptotic distribution of such estimators in the time series context and establish asymptotic validity of the proposed bootstrap procedure for estimation and, appropriately modified, for testing purposes. In particular, we focus on testing that large groups of autoregressive coefficients equal zero. Our theoretical results are complemented by simulations which investigate the finite sample performance of the proposed bootstrap methodology. A real-life data application is also presented.
A consistent kinetic model for a two-component mixture of polyatomic molecules ; We consider a multi-component gas mixture with translational and internal energy degrees of freedom, assuming that the number of particles of each species remains constant. We illustrate the derived model in the case of two species, but the model can easily be generalized to multiple species. The two species are allowed to have different degrees of freedom in internal energy and are modelled by a system of kinetic ES-BGK equations featuring two interaction terms to account for momentum and energy transfer between the species. We prove consistency of our model: conservation properties, positivity of the temperature, the H-theorem, and convergence to a global equilibrium in the form of a global Maxwell distribution. Thus, we are able to derive the usual macroscopic conservation laws. For numerical purposes we apply the Chu reduction to the developed model for polyatomic gases and give an application for a gas consisting of a monatomic and a diatomic species.
Nonparametric learning from Bayesian models with randomized objective functions ; Bayesian learning is built on an assumption that the model space contains a true reflection of the data-generating mechanism. This assumption is problematic, particularly in complex data environments. Here we present a Bayesian nonparametric approach to learning that makes use of statistical models, but does not assume that the model is true. Our approach has provably better properties than using a parametric model and admits a Monte Carlo sampling scheme that can afford massive scalability on modern computer architectures. The model-based aspect of learning is particularly attractive for regularizing nonparametric inference when the sample size is small, and also for correcting approximate approaches such as variational Bayes (VB). We demonstrate the approach on a number of examples including VB classifiers and Bayesian random forests.
Topological models of arithmetic ; Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\langle \mathbb{Q}, \oplus, \otimes \rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\mathbb{Q}$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\mathbb{R}$, the reals in any finite dimension $\mathbb{R}^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
Efficiently Learning Mixtures of Mallows Models ; Mixtures of Mallows models are a popular generative model for ranking data coming from a heterogeneous population. They have a variety of applications including social choice, recommendation systems and natural language processing. Here we give the first polynomial-time algorithm for provably learning the parameters of a mixture of Mallows models with any constant number of components. Prior to our work, only the two-component case had been settled. Our analysis revolves around a determinantal identity of Zagier, which was proven in the context of mathematical physics, and which we use to show polynomial identifiability and ultimately to construct test functions to peel off one component at a time. To complement our upper bounds, we show information-theoretic lower bounds on the sample complexity as well as lower bounds against restricted families of algorithms that make only local queries. Together, these results demonstrate various impediments to improving the dependence on the number of components. They also motivate the study of learning mixtures of Mallows models from the perspective of beyond worst-case analysis. In this direction, we show that when the scaling parameters of the Mallows models have separation, there are much faster learning algorithms.
Statistical modeling for adaptive trait evolution in randomly evolving environment ; In past decades, Gaussian processes have been widely applied to studying trait evolution using phylogenetic comparative analysis. In particular, two members of the Gaussian process family, Brownian motion and the Ornstein-Uhlenbeck process, have been frequently used to describe continuous trait evolution. Under the assumption of adaptive evolution, several models have been created around the Ornstein-Uhlenbeck process where the optimum $\theta_y(t)$ of a single trait $y(t)$ is influenced by a predictor $x(t)$. Since in general the dynamics of the rate of evolution $\tau_y(t)$ of the trait could adopt a pertinent process, in this work we extend models of adaptive evolution by considering the rate of evolution $\tau_y(t)$ to follow the Cox-Ingersoll-Ross (CIR) process. We provide a heuristic Monte Carlo simulation scheme to simulate the trait along the phylogeny as a structure of dependence among species. We add a framework to incorporate multiple regression with interaction between the optimum of the trait and its potential predictors. Since the likelihood functions for our models are intractable, we propose the use of Approximate Bayesian Computation (ABC) for parameter estimation and inference. Simulation and empirical studies using the proposed models are also carried out to validate the models and demonstrate practical applications.
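An Euler-Maruyama sketch of the basic construction (the discretization and parameter names are our assumptions, not the paper's scheme): the CIR process drives the diffusion rate of an OU-type trait along a branch.

```python
import numpy as np

def simulate_branch(T, dt, theta, alpha, a, b, sigma, tau0, y0, rng):
    n = int(T / dt)
    tau, y = tau0, y0
    ys = np.empty(n)
    for i in range(n):
        # CIR rate: mean-reverting toward b, kept nonnegative by the sqrt diffusion
        tau += a * (b - tau) * dt + sigma * np.sqrt(max(tau, 0.0)) * rng.normal(0, np.sqrt(dt))
        tau = max(tau, 0.0)
        # OU-type trait pulled toward the optimum theta, with CIR-driven diffusion
        y += alpha * (theta - y) * dt + np.sqrt(max(tau, 0.0)) * rng.normal(0, np.sqrt(dt))
        ys[i] = y
    return ys

rng = np.random.default_rng(42)
trajectory = simulate_branch(T=10, dt=0.01, theta=0.0, alpha=1.0,
                             a=0.5, b=1.0, sigma=0.3, tau0=1.0, y0=0.0, rng=rng)
```

Repeating this along each branch of a phylogeny, with values inherited at splits, yields the dependence structure among species that the ABC machinery then conditions on.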
Millimeter-wave Extended NYUSIM Channel Model for Spatial Consistency ; Commonly used drop-based channel models cannot satisfy the requirements of spatial consistency for millimeter-wave (mmWave) channel modeling, where transient motion or closely spaced users need to be considered. A channel model having spatial consistency can capture the smooth variations of channels when a user moves, or when multiple users are close to each other in a local area (within, say, 10 m in an outdoor scenario). Spatial consistency is needed to support the testing of beamforming and beam tracking for massive multiple-input multiple-output (MIMO) and multi-user MIMO in fifth-generation (5G) mmWave mobile networks. This paper presents a channel model extension and an associated implementation of spatial consistency in the NYUSIM channel simulation platform. Along with a mathematical model, we use measurements where the user moved along a street and turned at a corner over a path length of 75 m in order to derive realistic values of several key parameters, such as the correlation distance and the rate of cluster birth and death, that are shown to provide spatial consistency for NYUSIM in an urban microcell street canyon scenario.
Predicting Musical Sophistication from Music Listening Behaviors: A Preliminary Study ; Psychological models are increasingly being used to explain online behavioral traces. Aside from the commonly used personality traits as a general user model, more domain-dependent models are gaining attention. The use of domain-dependent psychological models allows for more fine-grained identification of behaviors and provides a deeper understanding of why those behaviors occur. Understanding behaviors based on psychological models can provide an advantage over data-driven approaches. For example, relying on psychological models allows for ways to personalize when data is scarce. In this preliminary work we look at the relation between users' musical sophistication and their online music listening behaviors, and at to what extent we can successfully predict musical sophistication. An analysis of data from a study with 61 participants shows that listening behaviors can successfully be used to infer users' musical sophistication.
Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation ; Most Neural Machine Translation (NMT) models are based on the sequence-to-sequence (Seq2Seq) model with an encoder-decoder framework equipped with the attention mechanism. However, the conventional attention mechanism treats the decoding at each time step equally with the same matrix, which is problematic since the softness of the attention for different types of words (e.g. content words and function words) should differ. Therefore, we propose a new model with a mechanism called Self-Adaptive Control of Temperature (SACT) to control the softness of attention by means of an attention temperature. Experimental results on Chinese-English translation and English-Vietnamese translation demonstrate that our model outperforms the baseline models, and the analysis and case study show that our model can attend to the most relevant elements in the source-side contexts and generate translations of high quality.
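The mechanism can be illustrated with a small sketch (assumed shapes and a hypothetical temperature network; the paper's model predicts the temperature from decoder states): a learned exponent beta in [-1, 1] scales the softmax temperature, concentrating or diverting attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sact_attention(query, keys, values, temp_net, tau_max=2.0):
    scores = keys @ query                      # (src_len,) raw alignment scores
    beta = torch.tanh(temp_net(query))         # in [-1, 1], predicted from the state
    tau = tau_max ** beta                      # temperature in [1/tau_max, tau_max]
    weights = F.softmax(scores / tau, dim=-1)  # low tau concentrates, high tau diverts
    return weights @ values                    # context vector

dim, src_len = 8, 5
temp_net = nn.Linear(dim, 1)
ctx = sact_attention(torch.randn(dim), torch.randn(src_len, dim),
                     torch.randn(src_len, dim), temp_net)
```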
Semantic-Unit-Based Dilated Convolution for Multi-Label Text Classification ; We propose a novel model for multi-label text classification, which is based on sequence-to-sequence learning. The model generates higher-level semantic unit representations with multi-level dilated convolution, as well as a corresponding hybrid attention mechanism that extracts information both at the word level and at the level of the semantic unit. Our designed dilated convolution effectively reduces dimensionality and supports an exponential expansion of receptive fields without loss of local information, and the attention-over-attention mechanism is able to capture more summary-relevant information from the source context. Results of our experiments show that the proposed model has significant advantages over the baseline models on the RCV1-V2 and Ren-CECps datasets, and our analysis demonstrates that our model is competitive with the deterministic hierarchical models and is more robust when classifying low-frequency labels.
Deciphering the dynamics of Epithelial-Mesenchymal Transition and Cancer Stem Cells in tumor progression ; Purpose of review: The Epithelial-Mesenchymal Transition (EMT) and the generation of Cancer Stem Cells (CSCs) are two fundamental aspects contributing to tumor growth, acquisition of resistance to therapy, formation of metastases, and tumor relapse. Recent experimental data identifying the circuits regulating EMT and CSCs have driven the development of computational models capturing the dynamics of these circuits and consequently various aspects of tumor progression. Recent findings: We review the contribution made by these models in (a) recapitulating experimentally observed behavior, (b) making experimentally testable predictions, and (c) driving emerging notions in the field, including the emphasis on the aggressive potential of hybrid epithelial/mesenchymal (E/M) phenotypes. We discuss dynamical and statistical models at the intracellular and population levels relating to the dynamics of EMT and CSCs, and those focusing on the interconnections between these two processes. Summary: These models highlight the insights gained via mathematical modeling approaches, and emphasize that the connections between hybrid E/M phenotypes and stemness can be explained by analyzing the underlying regulatory circuits. Such experimentally curated models have the potential to serve as platforms for better therapeutic design strategies.
Interpretable Intuitive Physics Model ; Humans have a remarkable ability to use physical commonsense and predict the effects of collisions. But do they understand the underlying factors? Can they predict the outcome if the underlying factors are changed? Interestingly, in most cases humans can predict the effects of similar collisions with different conditions, such as changes in mass, friction, etc. It is postulated that this is primarily because we learn to model physics with meaningful latent variables. This does not imply that we can estimate the precise values of these meaningful variables (e.g., the exact values of mass or friction). Inspired by this observation, we propose an interpretable intuitive physics model where specific dimensions in the bottleneck layers correspond to different physical properties. In order to demonstrate that our system models these underlying physical properties, we train our model on collisions of different shapes (cube, cone, cylinder, sphere, etc.) and test on collisions of unseen combinations of shapes. Furthermore, we demonstrate that our model generalizes well even when similar scenes are simulated with different underlying properties.
Induced gravitation in nonlinear field models ; The description of gravitation in the framework of soliton interaction is considered for two nonlinear field models. These models are Born-Infeld nonlinear electrodynamics and the so-called Born-Infeld-type scalar field model. The latter model can also be called the extremal space-time film model because of the specific form of the appropriate variational principle. Gravitational interaction is considered in the context of a unification of all interactions of material particles. It is shown that the long-range interaction of solitons of these models appears both as a force interaction and as a metrical one. The force interaction can be interpreted as electromagnetic, and the metrical interaction as gravitational.
Modelling local phase of images and textures with applications in phase denoising and phase retrieval ; The Fourier magnitude has been studied extensively, but less effort has been devoted to the Fourier phase, despite its well-established importance in image representation. Global phase was shown to be more important for image representation than the magnitude, whereas local phase, exhibited in Gabor filters, has been used for analysis purposes in detecting image contours and edges. Neither global nor local phase has been modelled in closed form, suitable for Bayesian estimation. In this work, we analyze the local phase of textured images and propose a local Markovian model for local phase coefficients. This model is Gaussian-mixture-based, learned from the graph representation of images, based on their complex wavelet decomposition. We demonstrate the applicability of the model in the restoration of images with noisy local phase and in phase retrieval, where we show superior performance to the well-known hybrid input-output (HIO) method. We also provide a framework for application of the model in a general image processing setup.
Zooming Network ; Structural information is important in natural language understanding. Although some current neural-net-based models have a limited ability to exploit local syntactic information, they fail to use the high-level, large-scale structure of documents. This information is valuable for text understanding, since it encodes the author's strategy for expressing information, and helps in building an effective representation and forming appropriate output modes. We propose a neural-net-based model, the Zooming Network (ZN), capable of representing and leveraging the text structure of long documents and developing its own analyzing rhythm for extracting critical information. Generally, ZN consists of an encoding neural net that builds a hierarchical representation of a document, and an interpreting neural model that reads the information at multiple levels and issues labeling actions through a policy-net. Our model is trained with a hybrid paradigm of supervised learning (distinguishing right and wrong decisions) and reinforcement learning (determining the goodness among multiple right paths). We applied the proposed model to long-text sequence labeling tasks, with performance exceeding the bi-LSTM-CRF baseline model by 10 points in F1-measure.
Efficient Representation of Topologically Ordered States with Restricted Boltzmann Machines ; Representation by neural networks, in particular by restricted Boltzmann machines (RBMs), has provided a powerful computational tool to solve quantum many-body problems. An important open question is how to characterize which class of quantum states can be efficiently represented with the RBM. Here, we show that the RBM can efficiently represent a wide class of many-body entangled states with rich exotic topological orders. This includes: (1) ground states of double semion and twisted quantum double models with intrinsic topological orders; (2) states of the AKLT model and the 2D CZX model with symmetry-protected topological order; (3) states of the Haah code model with fracton topological order; (4) generalized stabilizer states and hypergraph states that are important for quantum information protocols. One twisted quantum double model state considered here harbors non-abelian anyon excitations. Our result shows that it is possible to study a variety of quantum models with exotic topological orders and rich physics using the RBM computational toolbox.
Adaptive Fraud Detection System Using Dynamic Risk Features ; eCommerce transaction fraud keeps changing rapidly. This is the major issue that prevents eCommerce merchants from having a robust machine learning model for fraudulent transaction detection. The root cause of this problem is that rapidly changing fraud patterns alter the underlying data-generating system and cause performance deterioration for machine learning models. In statistical modeling this phenomenon is called concept drift. To overcome this issue, we propose an approach which adds dynamic risk features as model inputs. Dynamic risk features are a set of features built on entity profiles with fraud feedback. They are introduced to quantify the fluctuation of the probability distribution of risk features from certain entity profiles caused by concept drift. In this paper, we also illustrate why this strategy can successfully handle the effect of concept drift under a statistical learning framework. We validate our approach on multiple businesses in production and verify that the proposed dynamic model has a superior ROC curve compared to a static model built on the same data and training parameters.
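A hypothetical illustration of a dynamic risk feature (our own minimal example, not the production system): an exponentially weighted fraud rate per entity profile, updated as labeled feedback arrives, so the feature tracks drifting patterns instead of being frozen at training time.

```python
from collections import defaultdict

class DynamicRiskFeature:
    def __init__(self, alpha=0.05, prior=0.01):
        self.alpha = alpha            # feedback learning rate
        self.rate = defaultdict(lambda: prior)  # smoothed fraud rate per entity

    def update(self, entity, is_fraud):
        # Exponentially weighted fraud rate: recent feedback dominates
        self.rate[entity] += self.alpha * (float(is_fraud) - self.rate[entity])

    def feature(self, entity):
        return self.rate[entity]

risk = DynamicRiskFeature()
risk.update("freemail.example", is_fraud=True)   # labeled feedback arrives
x_risk = risk.feature("freemail.example")        # fed to the model as an input
```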
Numerical approximation of elliptic problems with lognormal random coefficients ; In this work, we consider a nonstandard preconditioning strategy for the numerical approximation of the classical elliptic equations with lognormal random coefficients. In previous work [Wan], a Wick-type elliptic model was proposed by modeling the random flux through the Wick product. Due to the lower-triangular structure of the uncertainty propagator, this model can be approximated efficiently using the Wiener chaos expansion in the probability space. Such a Wick-type model provides, in general, a second-order approximation of the classical one in terms of the standard deviation of the underlying Gaussian process. Furthermore, when the correlation length of the underlying Gaussian process goes to infinity, the Wick-type model yields the same solution as the classical one. These observations imply that the Wick-type elliptic equation can provide an effective preconditioner for the classical random elliptic equation under appropriate conditions. We use the Wick-type elliptic model to accelerate the Monte Carlo method and the stochastic Galerkin finite element method. Numerical results are presented and discussed.
The Trajectron: Probabilistic Multi-Agent Trajectory Modeling With Dynamic Spatiotemporal Graphs ; Developing safe human-robot interaction systems is a necessary step towards the widespread integration of autonomous agents in society. A key component of such systems is the ability to reason about the many potential futures (e.g. trajectories) of other agents in the scene. Towards this end, we present the Trajectron, a graph-structured model that predicts many potential future trajectories of multiple agents simultaneously in both highly dynamic and multimodal scenarios (i.e. where the number of agents in the scene is time-varying and there are many possible highly distinct futures for each agent). It combines tools from recurrent sequence modeling and variational deep generative modeling to produce a distribution of future trajectories for each agent in a scene. We demonstrate the performance of our model on several datasets, obtaining state-of-the-art results on standard trajectory prediction metrics as well as introducing a new metric for comparing models that output distributions.
Thermodynamics and Phase Transition in the Shapere-Wilczek $fgh$ Model: Cosmological Time Crystal in Quadratic Gravity ; The Shapere-Wilczek model, the so-called $fgh$ model, enjoys the remarkable features of a Time Crystal (TC): it has a nontrivial time dependence in its lowest energy state, i.e. the classical ground state. We construct a particular form of the $fgh$ model, with specified $f$, $g$, $h$ functions, that is derived from a minisuperspace version of a quadratic $f(R, R_{\mu\nu})$ gravity theory. The main part of the investigation deals with the thermodynamic properties of such systems from a classical statistical mechanics perspective. Our analysis reveals the possibility of a phase transition. Because of the higher-time-derivative nature of the model, computation of the partition function is nontrivial and requires newly discovered techniques. We speculate about a possible connection between our model and the Multiverse scenario.
Computer model calibration based on image warping metrics: an application for sea ice deformation ; Arctic sea ice plays an important role in the global climate. Sea ice models governed by physical equations have been used to simulate the state of the ice, including characteristics such as ice thickness, concentration, and motion. More recent models also attempt to capture features such as fractures or leads in the ice. These simulated features can be partially misaligned or misshapen when compared to observational data, whether due to numerical approximation or incomplete physics. In order to make realistic forecasts and improve understanding of the underlying processes, it is necessary to calibrate the numerical model to field data. Traditional calibration methods based on generalized least-squares metrics are flawed for linear features such as sea ice cracks. We develop a statistical emulation and calibration framework that accounts for feature misalignment and misshapenness, which involves optimally aligning model output with observed features using cutting-edge image registration techniques. This work can also be applied to other physical models which produce coherent structures.
The Peculiarities of the Cosmological Models Based on Nonlinear Classical and Phantom Fields with Minimal Interaction. II. The Cosmological Model Based on the Asymmetrical Scalar Doublet ; A detailed comparative qualitative analysis and numerical simulation of the evolution of cosmological models based on a doublet of classical and phantom scalar fields with self-action is carried out. The 2-dimensional and 3-dimensional projections of the phase portraits of the corresponding dynamic system are built. Just as in the case of single scalar fields, the phase space of such systems becomes multiply connected, with ranges of negative total effective energy, unavailable for motion, appearing in it. The distinctive feature of the asymmetrical scalar doublet is the time dependence of the projections of the prohibited ranges on the phase subspaces of each field, as a result of which the existence of limit cycles with null effective energy depends on the parameters of the field model and the initial conditions. Numerical models in which the dynamic system has limit cycles on hypersurfaces of null energy are built. It is shown that even a quite weak phantom field in such a model takes over control of the dynamic system and can significantly change the course of cosmological evolution.
Common lines modeling for reference-free ab-initio reconstruction in cryo-EM ; We consider the problem of estimating an unbiased and reference-free ab-initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common-line detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference-free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allow one to assess the reliability of the obtained ab-initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab-initio models with resolutions of 20Å or better, even from class averages consisting of as few as three raw images per class.
Regularized Maximum Likelihood Estimation and Feature Selection in Mixtures-of-Experts Models ; Mixtures of Experts (MoE) are successful models for modeling heterogeneous data in many statistical learning problems, including regression, clustering and classification. They are generally fitted by maximum likelihood estimation via the well-known EM algorithm, and their application to high-dimensional problems therefore remains challenging. We consider the problem of fitting and feature selection in MoE models, and propose a regularized maximum likelihood estimation approach that encourages sparse solutions for heterogeneous regression data models with potentially high-dimensional predictors. Unlike state-of-the-art regularized MLE for MoE, the proposed modelings do not require an approximation of the penalty function. We develop two hybrid EM algorithms: an Expectation-Majorization-Maximization (EM/MM) algorithm, and an EM algorithm with a coordinate ascent algorithm. The proposed algorithms allow sparse solutions to be obtained automatically, without thresholding, and avoid matrix inversion by allowing univariate parameter updates. An experimental study shows the good performance of the algorithms in terms of recovering the actual sparse solutions, parameter estimation, and clustering of heterogeneous regression data.
Semi-Online Bipartite Matching ; In this paper we introduce the semi-online model that generalizes the classical online computational model. The semi-online model postulates that the unknown future has a predictable part and an adversarial part; these parts can be arbitrarily interleaved. An algorithm in this model operates as in the standard online model, i.e., it makes an irrevocable decision at each step. We consider bipartite matching in the semi-online model, for both the integral and fractional cases. Our main contributions are competitive algorithms for this problem that are close to or match a hardness bound. The competitive ratio of the algorithms nicely interpolates between the truly offline setting (no adversarial part) and the truly online setting (no predictable part).
A Probabilistic Model of Cardiac Physiology and Electrocardiograms ; An electrocardiogram (EKG) is a common, non-invasive test that measures the electrical activity of a patient's heart. EKGs contain useful diagnostic information about patient health that may be absent from other electronic health record (EHR) data. As multi-dimensional waveforms, they could be modeled using generic machine learning tools, such as a linear factor model or a variational autoencoder. We take a different approach: we specify a model that directly represents the underlying electrophysiology of the heart and the EKG measurement process. We apply our model to two datasets, including a sample of emergency department EKG reports with missing data. We show that our model can more accurately reconstruct missing data (measured by test reconstruction error) than a standard baseline when there is significant missing data. More broadly, this physiological representation of heart function may be useful in a variety of settings, including prediction, causal analysis, and discovery.
Measuring the Stability of EHR- and EKG-based Predictive Models ; Databases of electronic health records (EHRs) are increasingly used to inform clinical decisions. Machine learning methods can find patterns in EHRs that are predictive of future adverse outcomes. However, statistical models may be built upon patterns of health-seeking behavior that vary across patient subpopulations, leading to poor predictive performance when training on one patient population and predicting on another. This note proposes two tests to better measure and understand model generalization. We use these tests to compare models derived from two data sources: (i) historical medical records, and (ii) electrocardiogram (EKG) waveforms. In a predictive task, we show that EKG-based models can be more stable than EHR-based models across different patient populations.
Leveraging Multi-grained Sentiment Lexicon Information for Neural Sequence Models ; Neural sequence models have achieved great success in sentence-level sentiment classification. However, some models are exceptionally complex or based on expensive features. Some other models recognize the value of existing linguistic resources but utilize them insufficiently. This paper proposes a novel and general method to incorporate lexicon information, including sentiment lexicons, negation words and intensifiers. Words are annotated with fine-grained and coarse-grained labels. The proposed method first encodes the fine-grained labels into a sentiment embedding and concatenates it with the word embedding. Second, the coarse-grained labels are utilized to enhance the attention mechanism so that it gives large weights to sentiment-related words. Experimental results show that our method can increase classification accuracy for neural sequence models on both the SST-5 and MR datasets. Specifically, the enhanced Bi-LSTM model can even compare with a Tree-LSTM which uses expensive phrase-level annotations. Further analysis shows that in most cases the lexicon resource can offer the right annotations. Besides, the proposed method is capable of overcoming the effect of inevitably wrong annotations.
Bayesian Analysis of Nonparanormal Graphical Models Using Rank-Likelihood ; Gaussian graphical models, where it is assumed that the variables of interest jointly follow a multivariate normal distribution with a sparse precision matrix, have been used to study intrinsic dependence among variables, but the normality assumption may be restrictive in many settings. A nonparanormal graphical model is a semiparametric generalization of a Gaussian graphical model for continuous variables, where it is assumed that the variables follow a Gaussian graphical model only after some unknown smooth monotone transformation. We consider a Bayesian approach for the nonparanormal graphical model using a rank-likelihood which remains invariant under monotone transformations, thereby avoiding the need to put a prior on the transformation functions. On the underlying precision matrix of the transformed variables, we consider a horseshoe prior on its Cholesky decomposition and use an efficient posterior Gibbs sampling scheme. We present a posterior consistency result for the precision matrix based on the rank-based likelihood. We study the numerical performance of the proposed method through a simulation study and apply it to a real dataset.
Modelling trait-dependent speciation with Approximate Bayesian Computation ; Phylogeny is the field of modelling the temporal discrete dynamics of speciation. Complex models can nowadays be studied using the Approximate Bayesian Computation (ABC) approach, which avoids likelihood calculations. The field's progression is hampered by the lack of robust software to estimate the numerous parameters of the speciation process. In this work we present an R package, pcmabc, based on Approximate Bayesian Computation, that implements three novel phylogenetic algorithms for trait-dependent speciation modelling. Our phylogenetic comparative methodology takes into account both the simulated traits and the phylogeny, attempting to estimate the parameters of the processes generating the phenotype and the trait. The user is not restricted to a predefined set of models and can specify a variety of evolutionary and branching models. We illustrate the software with a simulation-reestimation study focused on the branching Ornstein-Uhlenbeck process, where the branching rate depends non-linearly on the value of the driving Ornstein-Uhlenbeck process. Included in this work is a tutorial on how to use the software.
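The package itself is in R; for intuition, the generic ABC rejection step it builds on can be sketched as follows (illustrative Python with user-supplied prior, simulator, and summary functions): parameter draws are kept only when their simulated summary statistics fall near the observed ones.

```python
import numpy as np

def abc_rejection(observed_stats, prior_sampler, simulator, summarize,
                  n_draws=10000, eps=0.5, rng=None):
    rng = rng or np.random.default_rng()
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)        # draw parameters from the prior
        sim = simulator(theta, rng)       # simulate traits + phylogeny
        if np.linalg.norm(summarize(sim) - observed_stats) < eps:
            accepted.append(theta)        # close enough: keep the draw
    return np.array(accepted)             # approximate posterior sample
```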
Absence of Finite Temperature Phase Transitions in the X-Cube Model and its Z_p Generalization ; We investigate thermal properties of the X-cube model and its Z_p 'clock-type' (p-XCube) extension. In the latter, the elementary spin-1/2 operators of the X-cube model are replaced by elements of the Weyl algebra. We study different boundary condition realizations of these models and analyze their finite temperature dynamics and thermodynamics. We find that (i) no finite temperature phase transitions occur in these systems. In tandem, employing bond-algebraic dualities, we show that for Glauber-type solvable baths, (ii) thermal fluctuations might not enable system-size-dependent time autocorrelations at any positive temperature, i.e., the systems are thermally fragile. Qualitatively, our results demonstrate that, similar to Kitaev's toric code model, the X-cube model and its p-state clock-type descendants may be mapped to simple classical Ising (p-state clock) chains in which neither phase transitions nor anomalously slow glassy dynamics might appear.
Large Field Ranges from Aligned and Misaligned Winding ; We search for effective axions with super-Planckian decay constants in type IIB string models. We argue that such axions can be realised as long winding trajectories in complex-structure moduli space by an appropriate flux choice. Our main findings are: the simplest models with aligned winding in a 2-axion field space fail due to a general no-go theorem. However, equally simple models with misaligned winding, where the effective axion is not close to any of the fundamental axions, appear to work to the best of our present understanding. These models have large decay constants but no large monotonic regions in the potential, making them unsuitable for large-field inflation. We also show that our no-go theorem can be avoided by aligning three or more axions. We argue that, contrary to misaligned models, such models can have both large decay constants and large monotonic regions in the potential. Our results may be used to argue against the refined Swampland Distance Conjecture and strong forms of the axionic Weak Gravity Conjecture. It becomes apparent, however, that realising inflation is by far harder than just producing a light field with large periodicity.
Split Regression Modeling ; Sparse methods are the standard approach to obtaining interpretable models with high prediction accuracy. Alternatively, algorithmic ensemble methods can achieve higher prediction accuracy at the cost of loss of interpretability. However, the use of black-box methods has been heavily criticized for high-stakes decisions, and it has been argued that there does not have to be a trade-off between accuracy and interpretability. To combine high accuracy with interpretability, we generalize best subset selection to best split selection. Best split selection constructs a small number of sparse models learned jointly from the data, which are then combined in an ensemble. Best split selection determines the models by splitting the available predictor variables among the different models when fitting the data. The proposed methodology results in an ensemble of sparse and diverse models that each provide a possible explanation for the relationship between the predictors and the response. The high computational cost of best split selection motivates the need for computationally tractable approximations. We evaluate a method developed by Christidis et al. (2020), which can be seen as a multi-convex relaxation of best split selection.
An Empirical Model of Large-Batch Training ; In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However, the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the trade-off between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.
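The "simple" noise scale here is the ratio of the trace of the per-example gradient covariance to the squared norm of the true gradient. A Python sketch of the two-batch estimator the paper describes, assuming only that one can measure squared gradient norms at two batch sizes (the numbers in the example are made up):

import numpy as np

def noise_scale(g_small_sq, g_big_sq, b_small, b_big):
    # E[|g_B|^2] = |G|^2 + tr(Sigma) / B, so measuring squared gradient
    # norms at two batch sizes lets us solve for both |G|^2 and tr(Sigma).
    g_sq = (b_big * g_big_sq - b_small * g_small_sq) / (b_big - b_small)
    trace_sigma = (g_small_sq - g_big_sq) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g_sq  # B_simple = tr(Sigma) / |G|^2

print(noise_scale(g_small_sq=2.5, g_big_sq=0.9, b_small=64, b_big=1024))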
Face Hallucination Revisited: An Exploratory Study on Dataset Bias ; Contemporary face hallucination (FH) models exhibit considerable ability to reconstruct high-resolution (HR) details from low-resolution (LR) face images. This ability is commonly learned from examples of corresponding HR-LR image pairs, created by artificially downsampling the HR ground truth data. This downsampling (or degradation) procedure not only defines the characteristics of the LR training data, but also determines the type of image degradations the learned FH models are eventually able to handle. If the image characteristics encountered with real-world LR images differ from the ones seen during training, FH models are still expected to perform well, but in practice may not produce the desired results. In this paper we study this problem and explore the bias introduced into FH models by the characteristics of the training data. We systematically analyze the generalization capabilities of several FH models in various scenarios where the image degradation function does not match the training setup, and conduct experiments with synthetically downgraded as well as real-life low-quality images. We make several interesting findings that provide insight into existing problems with FH models and point to future research directions.
Inference and Sampling of K_{3,3}-free Ising Models ; We call an Ising model tractable when it is possible to compute its partition function value (statistical inference) in polynomial time. Tractability also implies an ability to sample configurations of the model in polynomial time. The notion of tractability extends the basic case of planar zero-field Ising models. Our starting point is to describe algorithms for the basic case, computing the partition function and sampling efficiently. To derive the algorithms, we use an equivalent linear transformation to perfect matching counting and sampling on an expanded dual graph. Then, we extend our tractable inference and sampling algorithms to models whose triconnected components are either planar or of O(1) size. In particular, this results in polynomial-time inference and sampling algorithms for zero-field Ising models on K_{3,3}-minor-free topologies (a generalization of planar graphs with a potentially unbounded genus).
A game model for the multimodality phenomena of coauthorship networks ; We provide a game model to simulate the evolution of coauthorship networks, a geometric hypergraph built on a circle. The model expresses kin selection and network reciprocity, two typical cooperation mechanisms, through a cooperation condition called positive benefit-minus-cost. The costs are modelled through spatial distances. The benefits are modelled through geometric zones that depend on node hyperdegree, which gives an expression of the cumulative advantage in the reputations of authors. Our findings indicate that the model gives a reasonable fit to empirical coauthorship networks in their degree distribution, node clustering, and so on. It reveals that two properties of node attraction, namely heterogeneity and fading with growing hyperdegree, suffice to deduce the dichotomy of nodes' clustering behavior and assortativity, as well as the trichotomy of degree and hyperdegree distributions (generalized Poisson, power law, and exponential cutoff).
Lattice and Continuum Models Analysis of the Aggregation-Diffusion Cell Movement ; The process by which one may take a discrete model of a biophysical process and construct a continuous model based on it is of mathematical interest as well as being of practical use. In this paper, we first study the singular limit of a class of reinforced random walks on a lattice for which a complete analysis of the existence and stability of solutions is possible. In the continuous scenario, we obtain regularity estimates for this aggregation-diffusion model. As a byproduct, non-existence of solutions of the continuous model with pure-aggregation initial data is proved. When the initial data lie purely in the diffusion region, the asymptotic behavior of the solution is obtained. In contrast to the continuous model, boundedness of the lattice solution, the asymptotic behavior of the solution in the diffusion region with monotone initial data, and the interface behavior between the aggregation and diffusion regions are obtained. Finally, we discuss the asymptotic behavior of the solution under more general initial data with no-flux conditions when the number of lattice points N <= 4.
Variational Quantum Circuit Model for Knowledge Graph Embedding ; In this work, we propose the first quantum Ansätze for statistical relational learning on knowledge graphs using parametric quantum circuits. We introduce two types of variational quantum circuits for knowledge graph embedding. Inspired by classical representation learning, we first consider latent features of entities as coefficients of quantum states, while predicates are characterized by parametric gates acting on the quantum states. For this first model, however, the quantum advantages disappear when it comes to optimization. Therefore, we introduce a second quantum circuit model where the embeddings of entities are generated from parameterized quantum gates acting on a pure quantum state. The benefit of the second method is that the quantum embeddings can be trained efficiently while preserving the quantum advantages. We show that the proposed methods achieve results comparable to state-of-the-art classical models, e.g., RESCAL and DistMult. Furthermore, after optimizing the models, the complexity of inductive inference on the knowledge graphs might be reduced with respect to the number of entities.
GRP Model for Sensorimotor Learning ; Learning from complex demonstrations is challenging, especially when the demonstration consists of different strategies. A popular approach is to use a deep neural network to perform imitation learning. However, the structure of that deep neural network has to be deep enough to capture all possible scenarios. Beyond the machine learning issue, how humans learn in the physiological sense has rarely been addressed, and relevant works on spinal cord learning are rarer still. In this work, we develop a novel modular learning architecture, the Generator and Responsibility Predictor (GRP) model, which automatically learns sub-task policies from an unsegmented controller demonstration and learns to switch between the policies. We also introduce a more physiologically based neural network architecture. We implement our GRP model and our proposed neural network to form a model that transfers swing-leg control from the brain to the spinal cord. Our results suggest that with the GRP model the brain can successfully transfer the target swing-leg control to the spinal cord, and the resulting model can switch between sub-control policies automatically.
Examples of symmetry-preserving truncations in tensor field theory ; We consider the tensor formulation of the nonlinear O(2) sigma model and its gauged version, the compact Abelian Higgs model, on a D-dimensional cubic lattice, and show that tensorial truncations are compatible with the general identities derived from the symmetries of these models. This means that the universal properties of these models can be reproduced with highly simplified formulations, desirable for implementations with quantum computers or for quantum simulation experiments. We briefly discuss the extensions to non-Abelian symmetries and models with fermions.
Egocentric Bias and Doubt in Cognitive Agents ; Modeling social interactions based on individual behavior has always been an area of interest, but prior literature generally presumes rational behavior. Thus, such models may miss out on capturing the effects of biases humans are susceptible to. This work presents a method to model egocentric bias, the real-life tendency to emphasize one's own opinion heavily when presented with multiple opinions. We use a symmetric distribution centered at an agent's own opinion, as opposed to the Bounded Confidence (BC) model used in prior work. We consider a game of iterated interactions where an agent cooperates based on its opinion about an opponent. Our model also includes the concept of domain-based self-doubt, which varies as interactions succeed or fail. An increase in doubt makes an agent reduce its egocentricity in subsequent interactions, thus enabling the agent to learn reactively. The agent system is modeled with factions not having a single leader, to overcome some of the issues associated with leader-follower factions. We find that agents belonging to factions perform better than individual agents. We observe that an intermediate level of egocentricity helps the agent perform at its best, which concurs with conventional wisdom that neither overconfidence nor low self-esteem brings benefits.
Belga B-trees ; We revisit self-adjusting external memory tree data structures, which combine the optimal and practical worst-case I/O performance of B-trees while adapting to the online distribution of queries. Our approach is analogous to ongoing efforts in the BST model, where Tango trees (Demaine et al. 2007) were shown to be O(log log N)-competitive with the runtime of the best offline binary search tree on every sequence of searches. Here we formalize the B-tree model as a natural generalization of the BST model. We prove lower bounds for the B-tree model, and introduce a B-tree model data structure, the Belga B-tree, that executes any sequence of searches within an O(log log N) factor of the best offline B-tree model algorithm, provided B = log^{O(1)} N. We also show how to transform any static BST into a static B-tree which is faster by a Theta(log B) factor; the transformation is randomized and we show that randomization is necessary to obtain any significant speedup.
β³-IRT: A New Item Response Model and its Applications ; Item Response Theory (IRT) aims to assess latent abilities of respondents based on the correctness of their answers in aptitude test items with different difficulty levels. In this paper, we propose the β³-IRT model, which models continuous responses and can generate a much enriched family of Item Characteristic Curves (ICCs). In experiments we applied the proposed model to data from an online exam platform, and show that our model outperforms a more standard 2PL-ND model on all datasets. Furthermore, we show how to apply β³-IRT to assess the ability of machine learning classifiers. This novel application yields a new metric for evaluating the quality of a classifier's probability estimates, based on the inferred difficulty and discrimination of data instances.
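For reference, the logistic 2PL item characteristic curve that serves as the baseline here is a one-liner; β³-IRT replaces this logistic family with a beta-derived one that admits a much richer set of curve shapes for continuous responses. A sketch with illustrative parameter values:

import numpy as np

def icc_2pl(theta, a, b):
    # Standard 2PL item characteristic curve: probability of a correct
    # response given ability theta, discrimination a, and difficulty b.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

abilities = np.linspace(-3, 3, 7)
print(icc_2pl(abilities, a=1.5, b=0.0))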
From Hotelling to Load Balancing: Approximation and the Principle of Minimum Differentiation ; Competing firms tend to select similar locations for their stores. This phenomenon, called the principle of minimum differentiation, was captured by Hotelling with a landmark model of spatial competition, but is still the object of an ongoing scientific debate. Although consistently observed in practice, many more realistic variants of Hotelling's model fail to support minimum differentiation or do not have pure equilibria at all. In particular, it was recently proven, for a generalized model which incorporates negative network externalities and which contains Hotelling's model and classical selfish load balancing as special cases, that the unique equilibria do not adhere to minimum differentiation. Furthermore, it was shown that for a significant parameter range pure equilibria do not exist. We derive a sharp contrast to these previous results by investigating Hotelling's model with negative network externalities from an entirely new angle: approximate pure subgame perfect equilibria. This approach allows us to prove, analytically and via agent-based simulations, that approximate equilibria with good approximation guarantees that adhere to minimum differentiation exist for the full parameter range of the model. Moreover, we show that the obtained approximate equilibria have high social welfare.
Effects of Compton scattering on the neutron star radius constraints in rotation-powered millisecond pulsars ; The aim of this work is to study the possible effects and biases on the radius constraints for rotation-powered millisecond pulsars when using the Thomson approximation to describe electron scattering in the atmosphere models, instead of using the exact formulation for Compton scattering. We compare the differences between the two models in the energy spectrum and angular distribution of the emitted radiation. We also analyse a self-generated synthetic phase-resolved energy spectrum, based on a Compton atmosphere and the most X-ray luminous rotation-powered millisecond pulsars observed by the Neutron star Interior Composition ExploreR (NICER). We derive constraints for the neutron star parameters using both the Compton and Thomson models. The results show that the method works, reproducing the correct parameters with the Compton model. However, biases are found in the size and temperature of the emitting hot spot when using the Thomson model. The constraints on the radius are still not significantly changed, and therefore the Thomson model seems to be adequate if we are interested only in the radius measurements with NICER.
A combined first and second order model for a junction with ramp buffer ; Second order macroscopic traffic flow models are able to reproduce the so-called capacity drop effect, i.e., the phenomenon that the outflow of a congested region is substantially lower than the maximum achievable flow. Within this work, we propose a first order model for a junction with ramp buffer that is modified solely at the intersection so that the capacity drop is captured. Theoretical investigations motivate the new choice of coupling conditions and illustrate the difference to purely first and second order models. The numerical example considering the optimal control of the on-ramp merging into a main road highlights that the combined model generates results similar to the second order model.
Few-Shot Learning-Based Human Activity Recognition ; Few-shot learning is a technique to learn a model with a very small amount of labeled training data by transferring knowledge from relevant tasks. In this paper, we propose a few-shot learning method for wearable-sensor-based human activity recognition, a technique that seeks high-level human activity knowledge from low-level sensor inputs. Due to the high cost of obtaining human-generated activity data and the ubiquitous similarities between activity modes, it can be more efficient to borrow information from existing activity recognition models than to collect more data to train a new model from scratch when only a few samples are available for model training. The proposed few-shot human activity recognition method leverages a deep learning model for feature extraction and classification, while knowledge transfer is performed in the manner of model parameter transfer. In order to alleviate negative transfer, we propose a metric to measure cross-domain class-wise relevance so that knowledge of higher relevance is assigned larger weights during knowledge transfer. Promising results in extensive experiments show the advantages of the proposed approach.
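The parameter-transfer step admits a compact sketch. Below, relevance scores (however measured) are softmax-normalized into weights, and the target model is initialized as a weighted blend of source-class parameters; the metric itself and all names are hypothetical stand-ins for the paper's construction.

import numpy as np

def transfer_parameters(source_params, relevance):
    # source_params: list of parameter arrays, one per source activity class.
    # relevance: cross-domain class-wise relevance scores (higher = closer).
    w = np.exp(relevance - np.max(relevance))
    w /= w.sum()  # softmax so more relevant classes contribute more
    return sum(wi * p for wi, p in zip(w, source_params))

sources = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
print(transfer_parameters(sources, relevance=np.array([2.0, 0.5, 1.0])))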
Experimental and numerical investigation of a photoacoustic resonator for solid samples ; The photoacoustic signal in a closed T-cell resonator is generated and measured using laser-based photoacoustic spectroscopy. The signal is modelled using the amplitude mode expansion method, which is based on eigenmode expansion and the introduction of losses in the form of loss factors. The measurement reproduced almost all the calculated resonances from the numerical models with fairly good agreement. The causes of the differences between the measured and the simulated resonances are explained. In addition, the amplitude mode expansion simulation model is established as a quicker and computationally less demanding photoacoustic simulation alternative to the viscothermal model. The resonance frequencies obtained from the two models deviate by less than 1.8%. It was noted that the relative height of the amplitudes of the two models depended on the location of the antinodes within the resonator.
Proton decay at 1-loop ; Proton decay is usually discussed in the context of grand unified theories. However, as is well known, in the standard model effective theory proton decay appears in the form of higher dimensional non-renormalizable operators. Here, we study systematically the 1-loop decomposition of the d=6 B+L violating operators. We exhaustively list the possible 1-loop ultraviolet completions of these operators and show that, in general, two distinct classes of models appear. Models in the first class need an additional symmetry in order to avoid tree-level proton decay. These models necessarily contain a neutral particle, which could act as a dark matter candidate. For models in the second class the loop contribution dominates automatically over the tree-level proton decay, without the need for additional symmetries. We also discuss the possible phenomenology of two example models, one from each class, and their possible connections to neutrino masses, LHC searches and dark matter.
An Unsupervised Autoregressive Model for Speech Representation Learning ; This paper proposes a novel unsupervised autoregressive neural model for learning generic speech representations. In contrast to other speech representation learning methods that aim to remove noise or speaker variability, ours is designed to preserve information for a wide range of downstream tasks. In addition, the proposed model does not require any phonetic or word boundary labels, allowing the model to benefit from large quantities of unlabeled data. Speech representations learned by our model significantly improve performance on both phone classification and speaker verification over the surface features and other supervised and unsupervised approaches. Further analysis shows that different levels of speech information are captured by our model at different layers. In particular, the lower layers tend to be more discriminative for speakers, while the upper layers provide more phonetic content.
Publicly Available Clinical BERT Embeddings ; Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have dramatically improved performance for many natural language processing (NLP) tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on three common clinical NLP tasks as compared to non-specific embeddings. We find that these domain-specific models are not as performant on two clinical de-identification tasks, and argue that this is a natural consequence of the differences between de-identified source text and synthetically non-de-identified task text.
Bayesian influence diagnostics using normalizing functional Bregman divergence ; Ideally, any statistical inference should be robust to local influences. Although there are simple ways to check for leverage points in independent and linear problems, more complex models require more sophisticated methods. Kullback-Leibler and Bregman divergences have previously been applied in Bayesian inference to measure the isolated impact of each observation on a model. We extend these ideas to models for dependent data with non-normal probability distributions, such as time series, spatial models and generalized linear models. We also propose a strategy to rescale the functional Bregman divergence to lie in the [0, 1] interval, thus facilitating interpretation and comparison. This is accomplished with minimal computational effort while maintaining all theoretical properties. For computational efficiency, we take advantage of Hamiltonian Monte Carlo methods to draw samples from the posterior distribution of the model parameters. The resulting Markov chains are then directly connected with the Bregman calculus, which results in fast computation. We check the propositions in both simulation and empirical studies.
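One concrete per-observation divergence computable directly from MCMC output: for case deletion, the Kullback-Leibler divergence between the full-data posterior and the posterior without observation i has the importance-weighting form KL = E[log f(y_i | theta)] + log E[1 / f(y_i | theta)], both expectations under the full posterior. The sketch below estimates it from posterior draws; the squashing to [0, 1) in the final comment is a generic choice for illustration, not the paper's Bregman-based rescaling.

import numpy as np

def kl_case_deletion(loglik_i):
    # loglik_i: log f(y_i | theta_s) evaluated at posterior samples theta_s.
    a = -loglik_i
    a_max = a.max()
    log_e_inv = a_max + np.log(np.mean(np.exp(a - a_max)))  # stable log-mean-exp
    return loglik_i.mean() + log_e_inv

# e.g., influence_i = 1.0 - np.exp(-kl_case_deletion(loglik_i)) maps to [0, 1)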
Phenomenology of SUGRA Extensions of the Starobinsky Model ; We analyze the BI model in complete form and compare its predictions with those of the Starobinsky model. Under the parameter constraints from Planck 2018, we find that the dynamics of the whole inflation process described by the BI and Starobinsky models are nearly the same, even though there are some differences in the regions outside of inflation. We also find the scales of the parameters in the BI model and the initial inflaton values required to implement inflation. The changes in the (n_s, r) fingerprints of the BI model and in the evolution of the inflaton field due to variations of the relevant parameters are also investigated.
Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System? ; Data privacy is an important issue for machine learning as a service providers. We focus on the problem of membership inference attacks: given a data sample and black-box access to a model's API, determine whether the sample existed in the model's training data. Our contribution is an investigation of this problem in the context of sequence-to-sequence models, which are important in applications such as machine translation and video captioning. We define the membership inference problem for sequence generation, provide an open dataset based on state-of-the-art machine translation models, and report initial results on whether these models leak private information against several kinds of membership inference attacks.
On Ground States and Phase Transition for the λ-Model with Competing Potts Interactions on Cayley Trees ; In this paper, we consider the λ-model with nearest-neighbor interactions and with competing Potts interactions on the Cayley tree of order two. We notice that if the λ-function is taken as a Potts interaction function, then this model contains the Potts model with competing interactions on a Cayley tree as a particular case. In this paper, we first describe all ground states of the model. We point out that the Potts model with the considered interactions was previously investigated only numerically, without rigorous mathematical proofs. One of the main points of this paper is to propose a measure-theoretical approach for the considered model in a more general setting. Furthermore, we find certain conditions for the existence of Gibbs measures corresponding to the model, which allows us to establish the existence of a phase transition.
Curious iLQR: Resolving Uncertainty in Model-based RL ; Curiosity as a means to explore during reinforcement learning problems has recently become very popular. However, very little progress has been made in utilizing curiosity for learning control. In this work, we propose a model-based reinforcement learning (MBRL) framework that combines Bayesian modeling of the system dynamics with curious iLQR, an iterative LQR approach that considers model uncertainty. During trajectory optimization, curious iLQR attempts to minimize both the task-dependent cost and the uncertainty in the dynamics model. We demonstrate the approach on reaching tasks with 7-DoF manipulators in simulation and on a real robot. Our experiments show that MBRL with curious iLQR reaches desired end-effector targets more reliably and with fewer system rollouts when learning a new task from scratch, and that the learned model generalizes better to new reaching tasks.
Exotic coloured fermions and lepton number violation at the LHC ; Majorana neutrino mass models with a scale of lepton number violation (LNV) of order TeV potentially lead to signals at the LHC. Here, we consider an extension of the standard model with a coloured octet fermion and a scalar leptoquark. This model generates neutrino masses at 2-loop order. We make a detailed Monte Carlo study of the LNV signal at the LHC in this model, including a simulation of standard model backgrounds. Our forecast predicts that the LHC with 300 fb^-1 should be able to probe this model up to colour octet fermion masses in the range of 2.6-2.7 TeV, depending on the lepton flavour of the final state.
Light Curve Parameters of Cepheid and RR Lyrae Variables at Multiple Wavelengths: Models vs. Observations ; We present results from a comparative study of light curves of Cepheid and RR Lyrae stars in the Galaxy and the Magellanic Clouds with their theoretical models generated from stellar pulsation codes. The Fourier decomposition method is used to analyse the theoretical and the observed light curves at multiple wavelengths. In the case of RR Lyrae stars, the amplitude and Fourier parameters from the models are consistent with observations in most period bins, except for low metal abundances (Z = 0.004). In the case of Cepheid variables, we observe a greater offset between models and observations for both the amplitude and Fourier parameters. The theoretical amplitude parameters are typically larger than those from observations, except close to a period of 10 days. We find that these discrepancies between models and observations can be reduced if a higher convective efficiency is adopted in the pulsation codes. Our results suggest that a quantitative comparison of light curve structure is very useful for providing constraints on the input physics of stellar pulsation models.
FLARe: Forecasting by Learning Anticipated Representations ; Computational models that forecast the progression of Alzheimer's disease at the patient level are extremely useful tools for identifying high-risk cohorts for early intervention and treatment planning. The state-of-the-art work in this area proposes models that forecast using latent representations extracted from longitudinal data across multiple modalities, including volumetric information extracted from medical scans and demographic information. These models incorporate the time horizon, which is the amount of time between the last recorded visit and the future visit, by directly concatenating a representation of it to the latent representation of the data. In this paper, we present a model which generates a sequence of latent representations of the patient status across the time horizon, providing more informative modeling of the temporal relationships between the patient's history and future visits. Our proposed model outperforms the baseline in terms of forecasting accuracy and F1 score, with the added benefit of robustly handling missing visits.
Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction ; In recent years there has been a surge of interest in applying distant supervision (DS) to automatically generate training data for relation extraction (RE). In this paper, we study the problem of what limits the performance of DS-trained neural models, conduct thorough analyses, and identify a factor that can influence performance greatly: shifted label distribution. Specifically, we find this problem commonly exists in real-world DS datasets, and without special handling, typical DS-RE models cannot automatically adapt to this shift, thus achieving deteriorated performance. To further validate our intuition, we develop a simple yet effective adaptation method for DS-trained models, bias adjustment, which updates models learned over the source domain (i.e., the DS training set) with a label distribution estimated on the target domain (i.e., the test set). Experiments demonstrate that bias adjustment achieves consistent performance gains on DS-trained models, especially on neural models, with up to a 23% relative F1 improvement, which verifies our assumptions. Our code and data can be found at https://github.com/INK-USC/shifted-label-distribution.
Migration patterns under different scenarios of sea level rise ; We propose a framework to examine future migration patterns of people under different sea level rise scenarios using models of human migration. Specifically, we couple a sea level rise model with a data-driven model of human migration, creating a generalized joint model of climate-driven migration that can be used to simulate population distributions under potential future sea level rise scenarios. We show how this joint model relaxes assumptions in existing efforts to model climate-driven human migration, and use it to simulate how migration, driven by sea level rise, differs from baseline migration patterns. Our results show that the effects of sea level rise are pervasive, expanding beyond coastal areas via increased migration, and disproportionately affecting some areas of the United States. The code for reproducing this study is available at https://github.com/calebrob6/migration-slr.
New-Generation Design-Technology Co-Optimization (DTCO): Machine-Learning Assisted Modeling Framework ; In this paper, we propose a machine-learning assisted modeling framework for the design-technology co-optimization (DTCO) flow. A neural network (NN) based surrogate model is used as an alternative to compact models of new devices, predicting device and circuit electrical characteristics without prior knowledge of device physics. This modeling framework is demonstrated and verified on the FinFET with high prediction accuracy at the device and circuit level. Details about the data handling and prediction results are discussed. Moreover, the same framework is applied to a new-mechanism device, the tunnel FET (TFET), to predict device and circuit characteristics. This work provides a new modeling method for the DTCO flow.
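A minimal sketch of the surrogate idea, with synthetic data standing in for TCAD or measurement samples: fit a small network mapping bias voltages to a device output, then query it like a compact model. All shapes, ranges, and the toy I-V surface below are invented for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0], [1.0, 1.0], size=(2000, 2))   # (V_G, V_D) samples
y = np.log1p(np.exp(8.0 * (X[:, 0] - 0.4))) * X[:, 1]     # toy drain-current surface

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=2000, random_state=0))
surrogate.fit(X, y)
print(surrogate.predict([[0.7, 0.5]]))  # predicted current (arbitrary units)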
A model-independent parametrization of the late-time cosmic acceleration: constraints on the parameters from recent observations ; In this work, we consider a model-independent approach to study the nature of the late-time cosmic acceleration. We use the Padé approximation to parametrize the comoving distance. Consequently, from this comoving distance, we derive a parametrization for the Hubble parameter. Our parametrization is completely analytic and valid for the late-time and matter-dominated eras only. The parametrization possesses sub-percentage accuracy compared to any arbitrary cosmological model or parametrization up to the matter-dominated era. Using this parametrization, we put constraints on the parameters from recent low-redshift cosmological observations including the Planck 2018 distance priors. Our results show that the ΛCDM model is 1σ to 2σ away at lower redshifts. We find that phantom crossing is allowed by all the combinations of datasets considered. We also find that dynamical dark energy models are preferred at lower redshifts. Our study also shows that at lower redshifts (z < 0.5), phantom models are allowed at almost the 1σ confidence level.
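Schematically, the construction can be summarized as follows; the (2,2) Padé order and coefficient names here are illustrative rather than the paper's exact choice. The comoving distance is parametrized with D_C(0) = 0,

D_C(z) = \frac{c}{H_0}\,\frac{P_1 z + P_2 z^2}{1 + Q_1 z + Q_2 z^2},

and since D_C(z) = c \int_0^z \frac{dz'}{H(z')}, the Hubble parameter follows by differentiation as H(z) = \frac{c}{D_C'(z)}.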
Modelling the radiation pattern of a dual circular polarization system ; We present the electromagnetic model of a dual circular polarization antenna-feed system, consisting of a corrugated feedhorn, a polarizer and an orthomode transducer. This model was developed for the passive front-end implemented in the Q-band receivers of the STRIP instrument of the Large Scale Polarization Explorer experiment. Its applicability, however, is completely general. The model has been implemented by superposing the responses of two linearly polarized feedhorns with a π/2 phase difference, thus accounting for the effect of the polarizer, which behaves differently for the two polarizations of the incoming electric field. The model has been verified by means of radiation pattern measurements performed in the anechoic chamber at the Physics Department of the University of Milan. We measured both arms of the orthomode transducer, in order to check that the diagram at one port is the 90° rotation of the diagram at the other port. Simulations and measurements show agreement at the level of a fraction of a dB up to the first sidelobes, thus confirming the model.
Modeling and Simulation of a Practical Quantum Secure Communication Network ; As the Quantum Key Distribution (QKD) technology supporting point-to-point application matures, the need to build a Quantum Secure Communication Network (QSCN) to guarantee the security of a large number of nodes becomes urgent. Considering project time and expense control, it is the first choice to build the QSCN based on an existing classical network. Suitable modeling and simulation are very important to construct a QSCN successfully and efficiently. In this paper, a practical QSCN model, which can reflect the network state well, is proposed. The model considers the volatile traffic demand of the classical network and the real key generation capability of the QKD devices, which can enhance the accuracy of simulation to a great extent. In addition, two unique QSCN performance indicators, ITS (information-theoretically secure) communication capability and ITS communication efficiency, are proposed in the model; these are necessary supplements for the evaluation of a QSCN beyond the traditional performance indicators of classical networks. Finally, the accuracy of the proposed QSCN model and the necessity of the proposed performance indicators are verified by extensive simulation results.
A pressure field model for fast, robust approximation of net contact force and moment between nominally rigid objects ; We introduce an approximate model for predicting the net contact wrench between nominally rigid objects for use in simulation, control, and state estimation. The model combines and generalizes two ideas: a bed of springs (an elastic foundation) and hydrostatic pressure. In this model, continuous pressure fields are computed offline for the interior of each nominally rigid object. Unlike hydrostatics or elastic foundations, the pressure fields need not satisfy mechanical equilibrium conditions. When two objects nominally overlap, a contact surface is defined where the two pressure fields are equal. This static pressure is supplemented with a dissipative rate-dependent pressure and friction to determine tractions on the contact surface. The contact wrench between pairs of objects is an integral of traction contributions over this surface. The model evaluates much faster than elasticity-theory models, while showing the essential trends of force, moment, and stiffness increase with contact load. It yields continuous wrenches even for non-convex objects and coarse meshes. The method shows promise as sufficiently fast, accurate, and robust for design-in-simulation of robot controllers.
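A one-dimensional toy version of the construction, to fix ideas: two precomputed pressure fields over the overlap region, a contact point where they are equal, and the shared value as the static pressure there. In 3D this becomes a surface integral of tractions over the iso-pressure surface; everything below is an illustrative sketch, not the paper's implementation.

import numpy as np

def contact_point_1d(p_a, p_b, xs):
    # p_a, p_b: interior pressure fields of bodies A and B sampled on the
    # overlap region xs. The contact 'surface' is where the fields cross.
    i = np.argmin(np.abs(p_a - p_b))
    return xs[i], p_a[i]  # contact location and static contact pressure

xs = np.linspace(0.0, 1.0, 101)          # nominal overlap region
x_c, p_c = contact_point_1d(1.0 - xs, 0.8 * xs, xs)
print(x_c, p_c)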
Improved Conditional VRNNs for Video Prediction ; Predicting future frames for a video sequence is a challenging generative modeling task. Promising approaches include probabilistic latent variable models such as the Variational Auto-Encoder (VAE). While VAEs can handle uncertainty and model multiple possible future outcomes, they have a tendency to produce blurry predictions. In this work we argue that this is a sign of underfitting. To address this issue, we propose to increase the expressiveness of the latent distributions and to use higher-capacity likelihood models. Our approach relies on a hierarchy of latent variables, which defines a family of flexible prior and posterior distributions in order to better model the probability of future sequences. We validate our proposal through a series of ablation experiments and compare our approach to current state-of-the-art latent variable models. Our method performs favorably under several metrics on three different datasets.
Minimal signatures of the Standard Model in non-Gaussianities ; We show that the leading coupling between a shift-symmetric inflaton and the Standard Model fermions leads to an induced electroweak symmetry breaking due to particle production during inflation, and as a result, a unique oscillating feature in non-Gaussianities. In this one-parameter model, the enhanced production of Standard Model fermions dynamically generates a new electroweak symmetry breaking minimum, into which the Higgs field classically rolls. The production of fermions stops when the Higgs expectation value, and hence the fermion masses, become too large, suppressing fermion production. The balance between the above-mentioned effects gives the Standard Model fermions masses that are uniquely determined by their couplings to the inflaton. In particular, the heaviest Standard Model fermion, the top quark, can produce a distinct cosmological collider physics signature characterised by a one-to-one relation between the amplitude and frequency of the oscillating signal, which is observable at future 21-cm surveys.
3D Virtual Garment Modeling from RGB Images ; We present a novel approach that constructs 3D virtual garment models from photos. Unlike previous methods that require photos of a garment on a human model or a mannequin, our approach can work with various states of the garment: on a model, on a mannequin, or on a flat surface. To construct a complete 3D virtual model, our approach only requires two images as input, one front view and one back view. We first apply a multi-task learning network called JFNet that jointly predicts fashion landmarks and parses a garment image into semantic parts. The predicted landmarks are used for estimating the sizing information of the garment. Then, a template garment mesh is deformed based on the sizing information to generate the final 3D model. The semantic parts are utilized for extracting color textures from the input images. The results of our approach can be used in various Virtual Reality and Mixed Reality applications.
The interest rate for saving as a possibilistic risk ; This paper studies an optimal saving model in which the interest-rate risk for saving is a fuzzy number. The total utility of consumption is defined using a concept of possibilistic expected utility. A notion of possibilistic precautionary saving is introduced as a measure of the variation of the optimal saving level when moving from a sure saving model to a possibilistic risk model. A first result establishes a necessary and sufficient condition under which the presence of possibilistic interest-rate risk generates extra saving. This result can be seen as a possibilistic version of a Rothschild and Stiglitz theorem on a probabilistic model of saving. A second result of the paper studies the variation of the optimal saving level when moving from a probabilistic model (the interest-rate risk is a random variable) to a possibilistic model (the interest-rate risk is a fuzzy number).
Analytical estimates of secular frequencies for binary star systems ; Binary and multiple star systems are extreme environments for the formation and long-term presence of extrasolar planets. Circumstellar planets are subject to gravitational perturbations from the distant companion star, and this interaction leads to a long-period precession of their orbits. We investigate analytical models that allow us to quantify these perturbations and calculate the secular precession frequency in the dynamical model of the restricted three-body problem. These models are applied to test cases and we discuss some of their shortcomings. In addition, we introduce a modified Laplace-Lagrange model which allows us to obtain better frequency estimates than the traditional model for large eccentricities of the perturber. We then generalize this model to any number of perturbers, and present an application to the four-body problem.
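For orientation, the traditional leading-order (quadrupole) estimate that such models refine: for a circumstellar orbit of semi-major axis a around a primary of mass m_A, perturbed by a companion of mass m_B on a wide orbit with semi-major axis a_B and eccentricity e_B, the secular precession frequency is approximately (a standard restricted three-body result quoted here for context, not the paper's refined formula)

\dot{\varpi} \simeq \frac{3}{4}\, n\, \frac{m_B}{m_A + m_B} \left(\frac{a}{a_B}\right)^{3} \left(1 - e_B^{2}\right)^{-3/2}, \qquad n = \sqrt{\frac{G m_A}{a^{3}}},

an approximation that degrades for large e_B, which is the regime the modified Laplace-Lagrange model targets.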
Speculative Execution for Guided Visual Analytics ; We propose the concept of Speculative Execution for Visual Analytics and discuss its effectiveness for model exploration and optimization. Speculative Execution enables the automatic generation of alternative, competing model configurations that do not alter the current model state unless explicitly confirmed by the user. These alternatives are computed based on either user interactions or model quality measures and can be explored using delta-visualizations. By automatically proposing modeling alternatives, systems employing Speculative Execution can shorten the gap between users and models, reduce confirmation bias and speed up optimization processes. In this paper, we have assembled five application scenarios showcasing the potential of Speculative Execution, as well as the potential for further research.
Law of the Iterated Logarithm and Model Selection Consistency for GLMs with Independent and Dependent Responses ; We study the law of the iterated logarithm (LIL) for the maximum likelihood estimation of the parameters, treated as a convex optimization problem, in generalized linear models with independent or weakly dependent (ρ-mixing, m-dependent) responses under mild conditions. The LIL is useful for deriving asymptotic bounds on the discrepancy between the empirical process of the log-likelihood function and the true log-likelihood. As an application of the LIL, the strong consistency of some penalized-likelihood-based model selection criteria can be shown. Under some regularity conditions, the model selection criterion will select the simplest correct model almost surely when the penalty term increases with model dimension and has an order higher than O(r_m log log n) but lower than O(n). Simulation studies are implemented to verify the selection consistency of BIC.
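As a concrete instance, BIC's penalty k log n grows with the model dimension k and sits strictly between the log-log-law fluctuations and O(n), which is what this kind of consistency result requires. A toy Python sketch of criterion-based selection (the numbers are invented):

import numpy as np

def penalized_criterion(loglik, k, n):
    # BIC-type criterion: smaller is better; the k * log(n) penalty has
    # order above the O(log log n) fluctuations and below O(n).
    return -2.0 * loglik + k * np.log(n)

models = {"m1": (-520.3, 3), "m2": (-518.9, 5), "m3": (-518.8, 9)}  # (loglik, k)
n = 400
best = min(models, key=lambda m: penalized_criterion(*models[m], n))
print(best)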
Dynamical Derivation of the Momentum Space Shell Structure for Quarkyonic Matter ; The phase space structure of zero temperature Quarkyonic Matter is a Fermi sphere of Quark Matter surrounded by a shell of Nucleonic Matter. We construct a quasiparticle model of Quarkyonic Matter based on the constituent quark model, where the quark and nucleon masses are related by m_Q = m_N/N_c, and N_c is the number of quark colors. The region of occupied states is k_Q < k_F/N_c for quarks, and k_F < k_N < k_F + Δ for nucleons. We first consider the general problem of Quarkyonic Matter with hard-core nucleon interactions. We then specialize to a quasiparticle model where the hard-core nucleon interactions are accounted for by an excluded volume. In this model, we show that the nucleonic shell forms past some critical density related to the hard-core size, and for large densities becomes a thin shell. We explore the basic features of such a model, and argue that this model has the semi-quantitative behaviour needed to describe neutron stars.
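For orientation, free Fermi-gas counting with standard degeneracy factors (two flavors: g_N = 4 for nucleons, g_q = 4 N_c for quarks) gives the baryon density carried by the two regions of this shell structure; this sketch omits the interaction and excluded-volume corrections the model then adds:

n_B = \frac{g_q}{6\pi^2 N_c} \left(\frac{k_F}{N_c}\right)^{3} + \frac{g_N}{6\pi^2} \left[(k_F + \Delta)^{3} - k_F^{3}\right],

where the first term is the quark Fermi sphere (divided by N_c to count baryon number) and the second is the nucleonic shell.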
Harmonized Multimodal Learning with Gaussian Process Latent Variable Models ; Multimodal learning aims to discover the relationship between multiple modalities. It has become an important research topic due to extensive multimodal applications such as cross-modal retrieval. This paper attempts to address the modality heterogeneity problem based on Gaussian process latent variable models (GPLVMs), which represent multimodal data in a common space. Previous multimodal GPLVM extensions generally adopt individual learning schemes for latent representations and kernel hyperparameters, which ignore their intrinsic relationship. To exploit the strong complementarity among different modalities and GPLVM components, we develop a novel learning scheme called Harmonization, wherein latent model parameters are jointly learned from each other. Beyond the correlation-fitting or intra-modal structure-preservation paradigms widely used in existing studies, the harmonization is derived in a model-driven manner to encourage agreement between the modality-specific GP kernels and the similarity of latent representations. We present a range of multimodal learning models by incorporating the harmonization mechanism into several representative GPLVM-based approaches. Experimental results on four benchmark datasets show that the proposed models outperform strong baselines for cross-modal retrieval tasks, and that the harmonized multimodal learning method is superior in discovering semantically consistent latent representations.
FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension ; Conversational machine comprehension requires deep understanding of the dialogue flow, and prior work proposed FlowQA to implicitly model the context representations in reasoning for better understanding. This paper proposes to explicitly model the information gain through dialogue reasoning in order to allow the model to focus on more informative cues. The proposed model achieves state-of-the-art performance on a conversational QA dataset (QuAC) and a sequential instruction understanding dataset (SCONE), which shows the effectiveness of the proposed mechanism and demonstrates its capability of generalizing to different QA models and tasks.
Interacting Agegraphic Dark Energy Model in DGP Braneworld Cosmology: A Dynamical System Approach ; A proposal to study the effect of interaction in an agegraphic dark energy model in DGP braneworld cosmology is presented in this manuscript. After explaining the details, we apply the dynamical system approach to the model to analyze its stability. We first constrain the model parameters with a variety of independent observational data, such as cosmic microwave background anisotropies, baryon acoustic oscillation peaks and observational Hubble data. We then obtain the critical points related to different cosmological epochs. In particular, we conclude that in the presence of interaction, the dark-energy-dominated era can be a stable point if the model parameters n and β obey a given constraint. Also, the big rip singularity is avoidable in this model.
360-Degree Textures of People in Clothing from a Single Image ; In this paper we predict a full 3D avatar of a person from a single image. We infer texture and geometry in the UV space of the SMPL model using an image-to-image translation method. Given partial texture and segmentation layout maps derived from the input view, our model predicts the complete segmentation map, the complete texture map, and a displacement map. The predicted maps can be applied to the SMPL model in order to naturally generalize to novel poses, shapes, and even new clothing. In order to learn our model in a common UV space, we non-rigidly register the SMPL model to thousands of 3D scans, effectively encoding textures and geometries as images in correspondence. This turns a difficult 3D inference task into a simpler image-to-image translation one. Results on rendered scans of people and images from the DeepFashion dataset demonstrate that our method can reconstruct plausible 3D avatars from a single image. We further use our model to digitally change pose and shape, swap garments between people, and edit clothing. To encourage research in this direction, we will make the source code available for research purposes.