Forecasting e-scooter substitution of direct and access trips by mode and distance ; An e-scooter trip model is estimated from four U.S. cities: Portland, Austin, Chicago, and New York City. A log-log regression model is estimated for e-scooter trips based on user age, population, land area, and the number of scooters. The model predicts 75K daily e-scooter trips in Manhattan for a deployment of 2,000 scooters, which translates to 77 million USD in annual revenue. We propose a novel nonlinear, multifactor model to break down the number of daily trips by the alternative modes of transportation that they would likely substitute, based on statistical similarity. The model parameters reveal a relationship with direct trips by bike, walk, carpool, automobile, and taxi, as well as access/egress trips with public transit in Manhattan. Our model estimates that e-scooters could replace 32% of carpool, 13% of bike, and 7.2% of taxi trips. The distance structure of revenue from access/egress trips is found to differ from that of other substituted trips.
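As a concrete illustration of the kind of log-log trip regression this abstract describes, the sketch below fits elasticities with ordinary least squares; the file name, column names, and covariate choices are illustrative assumptions, not the authors' exact specification.

```python
# Hypothetical sketch of a log-log trip regression (data file and columns are
# assumptions for illustration, not the paper's exact specification).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("city_scooter_data.csv")  # assumed: one row per city-period
X = sm.add_constant(np.log(df[["median_age", "population",
                               "land_area", "num_scooters"]]))
y = np.log(df["daily_escooter_trips"])

fit = sm.OLS(y, X).fit()
print(fit.params)  # log-log coefficients are elasticities:
                   # % change in trips per % change in each covariate

# Because the model is fit in log space, predictions are exponentiated,
# e.g. for a hypothetical 2,000-scooter deployment:
# trips_hat = np.exp(fit.predict(x_new))
```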
Canonical Scalar Field Inflation with a Woods-Saxon Potential ; This paper focuses on the realization of an inflationary model from a canonical scalar field theory with a Woods-Saxon potential, in the slow-roll approximation. Our analysis indicates that the observable quantities derived theoretically from our model, namely the spectral index of the primordial scalar curvature perturbations and the tensor-to-scalar ratio, are compatible with the latest Planck collaboration data. We also discuss the qualitative features of the potential, and we show that the value of the scalar field at which the graceful exit occurs coincides with the inflection point of the scalar potential. We also study the post-inflation reheating phase of the model, in order to further examine the viability of the Woods-Saxon scalar field model; as we demonstrate, the results indicate viability for this era as well, although instantaneous reheating is not allowed for the model at hand.
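For reference, the Woods-Saxon form originates in nuclear physics; written below is a plausible adaptation to an inflaton potential together with the standard slow-roll indices, since the abstract does not reproduce the paper's exact parametrization.

```latex
% Plausible adaptation of the nuclear Woods-Saxon form to an inflaton
% potential (an assumption; the paper's exact parametrization is not given here):
V(\phi) = \frac{V_0}{1 + \exp\!\left[(\phi - \phi_0)/a\right]},
\qquad
\epsilon = \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^2,
\qquad
\eta = M_P^2\,\frac{V''}{V},
```

where $V_0$ sets the energy scale, $\phi_0$ the location of the potential wall, and $a$ its steepness.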
Uncertainty-Aware Anticipation of Activities ; Anticipating future activities in video is a task with many practical applications. While earlier approaches were limited to just a few seconds in the future, the prediction horizon has recently been extended to several minutes. However, as the prediction horizon increases, the future becomes more uncertain, and models that generate a single prediction fail to capture the different possible future activities. In this paper, we address uncertainty modelling for predicting long-term future activities. Both an action model and a length model are trained to model the probability distribution of the future activities. At test time, we draw multiple samples from the predicted distributions, corresponding to the different possible sequences of future activities. Our model is evaluated on two challenging datasets and shows good performance in capturing the multimodal future activities without compromising the accuracy when predicting a single sequence of future activities.
Revisiting the Coincidence Problem in $f(R)$ Gravitation ; The energy densities of dark matter (DM) and dark energy (DE) are of the same order at the present epoch, despite the fact that the two quantities have contrasting characteristics and are presumed to have evolved distinctly over cosmic history. This is a major issue in standard $\Lambda$CDM cosmology, termed the Coincidence Problem, which hitherto has not been explained by any fundamental theory. In this spirit, Bisabr \cite{bisabr} reported a cosmological scenario in $f(R)$ gravity in which DM and DE interact and exchange energy with each other and therefore evolve dependently. We investigate the efficiency and model independence of the technique reported in \cite{bisabr} in addressing the Coincidence Problem, with the help of two $f(R)$ gravity models whose parameters are constrained from various observations. Our results confirm that not all scalar-tensor gravity theories and models can circumvent the Coincidence Problem, and that any cosmological scenario with interacting fluids is highly model dependent; hence, alternative model-independent theories and ideas should be pursued to solve this mystery.
An efficient method for computing stationary states of phase field crystal models ; Computing stationary states is an important topic for phase field crystal (PFC) models. Great efforts have been made to guarantee energy dissipation in numerical schemes based on gradient flows, but such schemes are always time-consuming due to the requirement of small effective time steps. In this paper, we propose an adaptive accelerated proximal gradient method for finding the stationary states of PFC models. Energy dissipation is guaranteed and the convergence property is established for the discretized energy functional. Moreover, connections between the generalized proximal operator and classical semi-implicit and explicit schemes for gradient flows are given. Extensive numerical experiments, including two three-dimensional periodic crystals in the Landau-Brazovskii (LB) model and a two-dimensional quasicrystal in the Lifshitz-Petrich (LP) model, demonstrate that our approach uses adaptive time steps that lead to significant acceleration over semi-implicit methods for computing complex structures. Furthermore, our results reveal a deep physical mechanism of the simple LB model, via which the sigma phase is discovered for the first time.
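For orientation, a generic accelerated proximal gradient (FISTA-style) iteration of the kind the paper builds on looks as follows; the PFC-specific energy splitting, prox operator, and step-size adaptivity rules are assumptions here, not the authors' exact algorithm.

```python
# Generic accelerated proximal gradient sketch; grad_f and prox_g are
# placeholders for the smooth gradient and proximal map of the energy split.
import numpy as np

def apg(grad_f, prox_g, x0, step, n_iter=500):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)    # forward-backward step
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2       # Nesterov momentum weight
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation
        x, t = x_new, t_new
    return x
```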
Nonasymptotic Closed-Loop System Identification using Autoregressive Processes and Hankel Model Reduction ; One of the primary challenges of system identification is determining how much data is necessary to adequately fit a model. Nonasymptotic characterizations of the performance of system identification methods provide this knowledge. Such characterizations are available for several algorithms performing open-loop identification. Oftentimes, however, data is collected in closed-loop. Applying open-loop identification methods to closed-loop data can result in biased estimates. One method used by subspace identification techniques to eliminate these biases involves first fitting a long-horizon autoregressive model, then performing model reduction. The asymptotic behavior of such algorithms is well characterized, but the nonasymptotic behavior is not. This work provides a nonasymptotic characterization of one particular variant of these algorithms. More specifically, we provide nonasymptotic upper bounds on the generalization error of the produced model, as well as high-probability bounds on the difference between the produced model and the finite-horizon Kalman filter.
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ; Unsupervised pretraining models have been shown to facilitate a wide range of downstream NLP applications. These models, however, retain some of the limitations of traditional static word embeddings. In particular, they encode only the distributional knowledge available in raw text corpora, incorporated through language modeling objectives. In this work, we complement such distributional knowledge with external lexical knowledge; that is, we integrate discrete knowledge on word-level semantic similarity into pretraining. To this end, we generalize the standard BERT model to a multi-task learning setting where we couple BERT's masked language modeling and next sentence prediction objectives with an auxiliary task of binary word relation classification. Our experiments suggest that our Lexically Informed BERT (LIBERT), specialized for word-level semantic similarity, yields better performance than the lexically blind vanilla BERT on several language understanding tasks. Concretely, LIBERT outperforms BERT in 9 out of 10 tasks of the GLUE benchmark and is on a par with BERT in the remaining one. Moreover, we show consistent gains on 3 benchmarks for lexical simplification, a task where knowledge about word-level semantic similarity is paramount.
Robustness to Modification with Shared Words in Paraphrase Identification ; Revealing the robustness issues of natural language processing models and improving their robustness is important for their performance under difficult situations. In this paper, we study the robustness of paraphrase identification models from a new perspective, via modification with shared words, and we show that the models have significant robustness issues when facing such modifications. To modify an example consisting of a sentence pair, we either replace some words shared by both sentences or introduce new shared words. We aim to construct a valid new example such that a target model makes a wrong prediction. To find a modification solution, we use beam search constrained by heuristic rules, and we leverage a BERT masked language model to generate substitution words compatible with the context. Experiments show that the performance of the target models drops dramatically on the modified examples, revealing the robustness issue. We also show that adversarial training can mitigate this issue.
Approaching Machine Learning Fairness through Adversarial Network ; Fairness is a rising concern in machine learning. Especially in sensitive fields such as criminal justice and loan decisions, eliminating prediction discrimination towards a certain population group characterized by sensitive features, such as race and gender, is important for enhancing the trustworthiness of a model. In this paper, we present a new general framework to improve machine learning fairness. The goal of our model is to minimize the influence of the sensitive feature from the perspectives of both the data input and the predictive model. To achieve this goal, we reformulate the data input by removing the sensitive information and strengthen model fairness by minimizing the marginal contribution of the sensitive feature. We propose to learn the non-sensitive input via sampling among features and design an adversarial network to minimize the dependence between the reformulated input and the sensitive information. Extensive experiments on three benchmark datasets suggest that our model achieves better results than related state-of-the-art methods with respect to both fairness metrics and prediction performance.
Collider Phenomenology of a Gluino Continuum ; Continuum supersymmetry is a class of models in which the supersymmetric partners, together with part of the standard model, come from a conformal sector broken in the IR near the TeV scale. Such models not only open new doors for addressing the problems of the standard model, but also have unique signatures at hadron colliders, which might explain why we have not yet seen any superpartners at the LHC. Here we use gauge/gravity duality to model the conformal sector, generate collider simulations, and analyze continuum gluino signatures at the LHC. Due to the increase in the number of jets produced, the bounds are weaker than for the minimal supersymmetric standard model with the same gluino mass threshold.
NormLIME: A New Feature Importance Metric for Explaining Deep Neural Networks ; The problem of explaining deep learning models, and model predictions generally, has attracted intensive interest recently. Many successful approaches forgo global approximations in order to provide more faithful local interpretations of the model's behavior. LIME develops multiple interpretable models, each approximating a large neural network on a small region of the data manifold, and SP-LIME aggregates the local models to form a global interpretation. Extending this line of research, we propose a simple yet effective method, NormLIME, for aggregating local models into global and class-specific interpretations. A human user study strongly favored class-specific interpretations created by NormLIME over other feature importance metrics. Numerical experiments confirm that NormLIME is effective at recognizing important features.
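A minimal sketch of the aggregation idea, assuming local LIME weight vectors are stacked row-wise; the exact normalization used by NormLIME may differ from this illustration.

```python
import numpy as np

def normlime_importance(local_weight_matrix):
    """Aggregate per-instance LIME weights (one row per local model) into a
    global per-feature importance score by normalizing each local explanation
    before averaging. Sketch of the idea only; NormLIME's exact normalization
    may differ."""
    W = np.abs(local_weight_matrix)                 # (n_instances, n_features)
    W = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # normalize each local model
    return W.mean(axis=0)                           # average into global scores
```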
Explainable Product Search with a Dynamic Relation Embedding Model ; Product search is one of the most popular methods for customers to discover products online. Most existing studies on product search focus on developing effective retrieval models that rank items by their likelihood to be purchased. They, however, ignore the problem that there is a gap between how systems and customers perceive the relevance of items. Without explanations, users may not understand why product search engines retrieve certain items for them, which consequently leads to imperfect user experience and suboptimal system performance in practice. In this work, we tackle this problem by constructing explainable retrieval models for product search. Specifically, we propose to model the search and purchase behavior as a dynamic relation between users and items, and create a dynamic knowledge graph based on both the multi-relational product data and the context of the search session. Ranking is conducted based on the relationship between users and items in the latent space, and explanations are generated with logic inferences and entity soft matching on the knowledge graph. Empirical experiments show that our model, which we refer to as the Dynamic Relation Embedding Model (DREM), significantly outperforms the state-of-the-art baselines and has the ability to produce reasonable explanations for search results.
Text Length Adaptation in Sentiment Classification ; Can a text classifier generalize well for datasets where the text length differs? For example, when short reviews are sentiment-labeled, can these transfer to predict the sentiment of long reviews (i.e., short-to-long transfer), or vice versa? While unsupervised transfer learning has been well studied for cross-domain and cross-lingual transfer tasks, Cross Length Transfer (CLT) has not yet been explored. One reason is the assumption that length difference is trivially transferable in classification. We show that it is not, because short and long texts differ in context richness and word intensity. We devise new benchmark datasets from diverse domains and languages, and show that existing models from similar tasks cannot deal with the unique challenge of transferring across text lengths. We introduce a strong baseline model called BaggedCNN that treats long texts as bags containing short texts. We propose a state-of-the-art CLT model called Length Transfer Networks (LeTraNets) that introduces a two-way encoding scheme for short and long texts using multiple training mechanisms. We test our models and find that existing models perform worse than the BaggedCNN baseline, while LeTraNets outperforms all models.
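A minimal PyTorch sketch of the bag-of-short-texts idea behind BaggedCNN; the shared encoder, mean pooling, and layer sizes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class BaggedCNN(nn.Module):
    """Sketch: encode each short segment of a long text with a shared CNN
    encoder, then pool segment features for classification. Pooling choice
    and dimensions are assumptions, not the paper's exact design."""
    def __init__(self, encoder, feat_dim, n_classes):
        super().__init__()
        self.encoder = encoder               # shared encoder over one short text
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, segments):             # segments: (batch, n_seg, seq_len)
        b, s, l = segments.shape
        feats = self.encoder(segments.view(b * s, l)).view(b, s, -1)
        return self.classifier(feats.mean(dim=1))  # average over the bag
```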
InterpretML: A Unified Framework for Machine Learning Interpretability ; InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML exposes two types of interpretability: glassbox models, which are machine learning models designed for interpretability (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API, and by having a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code can be downloaded from github.com/microsoft/interpret.
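A minimal usage sketch of the package's documented glassbox API; the training and test splits are assumed to be already loaded.

```python
# Minimal InterpretML usage sketch (X_train, y_train, X_test, y_test are
# placeholders assumed to be defined by the caller).
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                # per-feature shapes and importances
show(ebm.explain_local(X_test, y_test))   # per-prediction explanations
```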
Non-Bayesian Social Learning with Uncertain Models ; Non-Bayesian social learning theory provides a framework that models distributed inference for a group of agents interacting over a social network. In this framework, each agent iteratively forms and communicates beliefs about an unknown state of the world with their neighbors using a learning rule. Existing approaches assume agents have access to precise statistical models, in the form of likelihoods, for the state of the world. However, in many situations such models must be learned from finite data. We propose a social learning rule that takes into account uncertainty in the statistical models using second-order probabilities. Therefore, beliefs derived from uncertain models are sensitive to the amount of past evidence collected for each hypothesis. We characterize how well the hypotheses can be tested on a social network as consistent or not with the state of the world. We explicitly show the dependence of the generated beliefs on the amount of prior evidence. Moreover, as the amount of prior evidence goes to infinity, learning occurs and is consistent with traditional social learning theory.
Retrofitting Contextualized Word Embeddings with Paraphrases ; Contextualized word embedding models, such as ELMo, generate meaningful representations of words and their context. These models have been shown to have a great impact on downstream applications. However, in many cases, the contextualized embedding of a word changes drastically when the context is paraphrased. As a result, the downstream model is not robust to paraphrasing and other linguistic variations. To enhance the stability of contextualized word embedding models, we propose an approach to retrofitting contextualized embedding models with paraphrase contexts. Our method learns an orthogonal transformation on the input space, which seeks to minimize the variance of word representations on paraphrased contexts. Experiments show that the retrofitted model significantly outperforms the original ELMo on various sentence classification and language inference tasks.
Veronese subsequent analytic solutions of the $\mathbb{CP}^{2s}$ sigma model equations described via Krawtchouk polynomials ; The objective of this paper is to establish a new relationship between the Veronese subsequent analytic solutions of the Euclidean $\mathbb{CP}^{2s}$ sigma model in two dimensions and the orthogonal Krawtchouk polynomials. We show that such solutions of the $\mathbb{CP}^{2s}$ model, defined on the Riemann sphere and having a finite action, can be explicitly parametrised in terms of these polynomials. We apply the obtained results to the analysis of surfaces associated with $\mathbb{CP}^{2s}$ sigma models, defined using the generalized Weierstrass formula for immersion. We show that these surfaces are spheres immersed in the $\mathfrak{su}(2s+1)$ Lie algebra, and express several other geometrical characteristics in terms of the Krawtchouk polynomials. Finally, a new connection between the $\mathfrak{su}(2)$ spin-$s$ representation and the $\mathbb{CP}^{2s}$ model is explored in detail. It is shown that for any given holomorphic vector function in $\mathbb{C}^{2s+1}$ written as a Veronese sequence, it is possible to derive subsequent solutions of the $\mathbb{CP}^{2s}$ model through algebraic recurrence relations, which turn out to be simpler than the analytic relations known in the literature.
Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models ; In natural language processing, it has recently been observed that generalization can be greatly improved by finetuning a large-scale language model pretrained on a large unlabeled corpus. Despite its recent success and wide adoption, finetuning a large pretrained language model on a downstream task is prone to degenerate performance when only a small number of training instances are available. In this paper, we introduce a new regularization technique, to which we refer as mixout, motivated by dropout. Mixout stochastically mixes the parameters of two models. We show that our mixout technique regularizes learning to minimize the deviation from one of the two models and that the strength of regularization adapts along the optimization trajectory. We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks. More specifically, we demonstrate that the stability of finetuning and the average accuracy greatly increase when we use the proposed approach to regularize finetuning of BERT on downstream tasks in GLUE.
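A sketch of the elementwise mixout operation as described: each parameter is swapped for its pretrained value with probability p, with a dropout-style rescaling so that the expectation equals the current parameter. Treat this as an illustration rather than the authors' reference implementation.

```python
import torch

def mixout(w, w_pretrained, p):
    """Elementwise mixout sketch (0 <= p < 1): with probability p a parameter
    reverts to its pretrained value; the rescaling keeps E[result] == w, by
    analogy with inverted dropout. Illustration, not the reference code."""
    mask = torch.bernoulli(torch.full_like(w, 1.0 - p))  # 1 = keep current param
    return (mask * w + (1 - mask) * w_pretrained - p * w_pretrained) / (1.0 - p)
```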
Identifiability in Phylogenetics using Algebraic Matroids ; Identifiability is a crucial property for a statistical model, since it means that distributions in the model uniquely determine the parameters that produce them. In phylogenetics, the identifiability of the tree parameter is of particular interest, since it means that phylogenetic models can be used to infer evolutionary histories from data. In this paper we introduce a new computational strategy for proving the identifiability of discrete parameters in algebraic statistical models that uses the algebraic matroids naturally associated to the models. We then use this algorithm to prove that the tree parameters are generically identifiable for 2-tree CFN and K3P mixtures. We also show that the k-cycle phylogenetic network parameter is identifiable under the K2P and K3P models.
Reconsidering Analytical Variational Bounds for Output Layers of Deep Networks ; The combination of the reparameterization trick with the use of variational autoencoders has caused a sensation in Bayesian deep learning, allowing the training of realistic generative models of images and considerably increasing our ability to use scalable latent variable models. The reparameterization trick is necessary for models in which no analytical variational bound is available and allows noisy gradients to be computed for arbitrary models. However, for certain standard output layers of a neural network, analytical bounds are available and the variational autoencoder may be used without either the reparameterization trick or the need for any Monte Carlo approximation. In this work, we show that using the Jaakkola and Jordan bound, we can produce a binary classification layer that allows a Bayesian output layer to be trained using the standard stochastic gradient descent algorithm. We further demonstrate that a latent variable model utilizing the Bouchard bound for multiclass classification allows for fast training of a fully probabilistic latent factor model, even when the number of classes is very large.
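For reference, the Jaakkola-Jordan bound on the logistic sigmoid, which underlies the binary classification layer described here, is

```latex
\sigma(x) \;\ge\; \sigma(\xi)\,
\exp\!\left\{ \frac{x-\xi}{2} - \lambda(\xi)\,\bigl(x^2 - \xi^2\bigr) \right\},
\qquad
\lambda(\xi) = \frac{1}{2\xi}\left(\sigma(\xi) - \tfrac{1}{2}\right)
             = \frac{\tanh(\xi/2)}{4\xi}.
```

Because the exponent is quadratic in $x$, a Gaussian prior remains conjugate and the update over the variational parameter $\xi$ is available in closed form.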
ExpertMatcher: Automating ML Model Selection for Users in Resource-Constrained Countries ; In this work we introduce ExpertMatcher, a method for automating deep learning model selection using autoencoders. Specifically, we are interested in performing inference on data sources that are distributed across many clients, using pretrained expert ML networks on a centralized server. ExpertMatcher assigns the most relevant models on the central server given the client's data representation. This allows resource-constrained clients in developing countries to utilize the most relevant ML models for their given task without having to evaluate the performance of each ML model. The method is generic and can be beneficial in any setup where there are local clients and numerous centralized expert ML models.
Insider Threat Detection via Hierarchical Neural Temporal Point Processes ; Insiders usually cause significant losses to organizations and are hard to detect. Various approaches have been proposed for insider threat detection based on analyzing audit data that record the type and time of employees' activities. However, existing approaches usually focus on modeling users' activity types but do not consider the activity time information. In this paper, we propose a hierarchical neural temporal point process model that combines temporal point processes and recurrent neural networks for insider threat detection. Our model is capable of capturing a general nonlinear dependency over the history of all activities via a two-level structure that effectively models activity times, activity types, session durations, and session intervals. Experimental results on two datasets demonstrate that our model outperforms models that only consider the activity types or times alone.
Spontaneously Breaking Non-Abelian Gauge Symmetry in Non-Hermitian Field Theories ; We generalise our previous formulation of gauge-invariant PT-symmetric field theories to include models with non-Abelian symmetries and discuss the extension to such models of the Englert-Brout-Higgs-Kibble mechanism for generating masses for vector bosons. As in the Abelian case, the non-Abelian gauge fields are coupled to non-conserved currents. We present a consistent scheme for gauge fixing, demonstrating Becchi-Rouet-Stora-Tyutin invariance, and show that the particle spectrum and interactions are gauge invariant. We exhibit the masses that gauge bosons in the simplest two-doublet SU(2)$\times$U(1) model acquire when certain scalar fields develop vacuum expectation values; these and the scalar masses depend quartically on the non-Hermitian mass parameter $\mu$. The bosonic mass spectrum differs substantially from that of a Hermitian two-doublet model. This non-Hermitian extension of the Standard Model opens a new direction for particle model building, with distinctive predictions to be explored further.
Breathing deformation model: application to multi-resolution abdominal MRI ; Dynamic MRI is a technique for acquiring a series of images continuously to follow physiological changes over time. However, such fast imaging results in low-resolution images. In this work, an abdominal deformation model computed from dynamic low-resolution images is applied to a previously acquired high-resolution image to generate dynamic high-resolution MRI. Dynamic low-resolution images were simulated into different breathing phases (inhale and exhale). Then, image registration between breathing time points was performed using the B-spline SyN deformable model with cross-correlation as a similarity metric. The deformation model between different breathing phases was estimated from highly undersampled data. This deformation model was then applied to the high-resolution images to obtain high-resolution images of different breathing phases. The results indicate that the deformation model can be computed from very low resolution images.
Robust Likelihood Ratio Tests for Incomplete Economic Models ; This study develops a framework for testing hypotheses on structural parameters in incomplete models. Such models make set-valued predictions and hence do not generally yield a unique likelihood function. The model structure, however, allows us to construct tests based on the least favorable pairs of likelihoods using the theory of Huber and Strassen (1973). We develop tests robust to model incompleteness that possess certain optimality properties. We also show that sharp identifying restrictions play a role in constructing such tests in a computationally tractable manner. A framework for analyzing the local asymptotic power of the tests is developed by embedding the least favorable pairs into a model that allows local approximations under the limits-of-experiments argument. Examples of the hypotheses we consider include those on the presence of strategic interaction effects in discrete games of complete information. Monte Carlo experiments demonstrate the robust performance of the proposed tests.
Efficient Intrinsically Motivated Robotic Grasping with Learning-Adaptive Imagination in Latent Space ; Combining model-based and model-free deep reinforcement learning has shown great promise for improving sample efficiency on complex control tasks while still retaining high performance. Incorporating imagination is a recent effort in this direction, inspired by human mental simulation of motor behavior. We propose a learning-adaptive imagination approach which, unlike previous approaches, takes into account the reliability of the learned dynamics model used for imagining the future. Our approach learns an ensemble of disjoint local dynamics models in latent space and derives an intrinsic reward based on learning progress, motivating the controller to take actions leading to data that improves the models. The learned models are used to generate imagined experiences, augmenting the training set of real experiences. We evaluate our approach on learning vision-based robotic grasping and show that it significantly improves sample efficiency and achieves near-optimal performance in a sparse reward environment.
Parallelized Training of Restricted Boltzmann Machines using Markov-Chain Monte Carlo Methods ; The Restricted Boltzmann Machine (RBM) is a generative stochastic neural network that can be applied to the collaborative filtering technique used by recommendation systems. The prediction accuracy of the RBM model is usually better than that of other models for recommendation systems. However, training the RBM model involves Markov-chain Monte Carlo (MCMC) sampling, which is computationally expensive. In this paper, we successfully apply distributed parallel training using the Horovod framework to improve the training time of the RBM model. Our tests show that the distributed training approach of the RBM model has good scaling efficiency. We also show that this approach effectively reduces the training time to a little over 12 minutes on 64 CPU nodes, compared to 5 hours on a single CPU node. This makes RBM models more practically applicable in recommendation systems.
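The data-parallel pattern the paper applies looks roughly as follows with Horovod's PyTorch binding; the RBM module, dataset object, and hyperparameters are placeholders, not the authors' code.

```python
# Horovod data-parallel training sketch; `rbm` and `dataset` are placeholders.
import torch
import horovod.torch as hvd

hvd.init()
sampler = torch.utils.data.distributed.DistributedSampler(
    dataset, num_replicas=hvd.size(), rank=hvd.rank())
loader = torch.utils.data.DataLoader(dataset, batch_size=64, sampler=sampler)

# Scale the learning rate with the number of workers (a common heuristic).
optimizer = torch.optim.SGD(rbm.parameters(), lr=0.01 * hvd.size())
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=rbm.named_parameters())
hvd.broadcast_parameters(rbm.state_dict(), root_rank=0)  # sync initial weights
```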
Updating Pretrained Word Vectors and Text Classifiers using Monolingual Alignment ; In this paper, we focus on the problem of adapting word vector-based models to new textual data. Given a model pretrained on large reference data, how can we adapt it to a smaller piece of data with a slightly different language distribution? We frame the adaptation problem as a monolingual word vector alignment problem, and simply average models after alignment. We align vectors using the RCSLS criterion. Our formulation results in a simple and efficient algorithm that allows adapting general-purpose models to changing word distributions. In our evaluation, we consider applications to word embedding and text classification models. We show that the proposed approach yields good performance in all setups and outperforms a baseline consisting of fine-tuning the model on new data.
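A minimal sketch of the align-then-average recipe; note that the paper optimizes the RCSLS criterion, whereas orthogonal Procrustes is used below as a simpler stand-in for the alignment step.

```python
import numpy as np

def align_and_average(W_ref, W_new):
    """Align W_new to W_ref with an orthogonal map, then average the models.
    Orthogonal Procrustes stands in here for the paper's RCSLS alignment."""
    M = W_new.T @ W_ref
    U, _, Vt = np.linalg.svd(M)
    Q = U @ Vt                        # orthogonal map: W_new @ Q ~ W_ref
    return 0.5 * (W_ref + W_new @ Q)  # simple average after alignment
```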
Arbitrarily High-order Linear Schemes for Gradient Flow Models ; We present a paradigm for developing arbitrarily high-order, linear, unconditionally energy-stable numerical algorithms for gradient flow models. We apply the energy quadratization (EQ) technique to reformulate the general gradient flow model into an equivalent gradient flow model with a quadratic free energy and a modified mobility. Given solutions up to $t_n = n\Delta t$, with $\Delta t$ the time step size, we linearize the EQ-reformulated gradient flow model in $[t_n, t_{n+1}]$ by extrapolation. We then employ an algebraically stable Runge-Kutta method to discretize the linearized model in $[t_n, t_{n+1}]$, and use the Fourier pseudospectral method for the spatial discretization to match the order of accuracy in time. The resulting fully discrete scheme is linear, unconditionally energy stable, uniquely solvable, and may reach arbitrarily high order. Furthermore, we present a family of linear schemes based on prediction-correction methods to complement the new linear schemes. Some benchmark numerical examples are given to demonstrate the accuracy and efficiency of the schemes.
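For reference, the standard EQ reformulation runs as follows. For a free energy of the form $E[\phi] = \int_\Omega \frac{1}{2}|\nabla\phi|^2 + F(\phi)\,dx$ (the paper's general mobility and specific functional are not reproduced here), one introduces an auxiliary variable so that the energy becomes quadratic:

```latex
q = \sqrt{2\bigl(F(\phi) + A\bigr)},
\qquad
E[\phi, q] = \int_\Omega \frac{1}{2}|\nabla\phi|^2 + \frac{1}{2}q^2 \, dx - A|\Omega|,
```

where $A$ is a constant ensuring $F(\phi) + A > 0$. The reformulated gradient flow is then linear in $q$, which is what permits the linear, energy-stable time discretizations described above.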
When Two are Better than One: Modeling the Mechanisms of Antibody Mixtures ; It is difficult to predict how antibodies will behave when mixed together, even after each has been independently characterized. Here, we present a statistical mechanical model for the activity of antibody mixtures that accounts for whether pairs of antibodies bind to distinct or overlapping epitopes. This model requires measuring $n$ individual antibodies and their $n(n-1)/2$ pairwise interactions to predict the $2^n$ potential combinations. We apply this model to epidermal growth factor receptor (EGFR) antibodies and find that the activity of antibody mixtures can be predicted without positing synergy at the molecular level. In addition, we demonstrate how the model can be used in reverse, where straightforward experiments measuring the activity of antibody mixtures can be used to infer the molecular interactions between antibodies. Lastly, we generalize this model to analyze engineered multidomain antibodies, where components of different antibodies are tethered together to form novel amalgams, and characterize how well it predicts recently designed influenza antibodies.
Robustness of Delta Hedging in a Jump-Diffusion Model ; Suppose an investor aims at Delta hedging a European contingent claim $h(S_T)$ in a jump-diffusion model, but incorrectly specifies the stock price's volatility and jump sensitivity, so that any hedging strategy is calculated under a misspecified model. When does the erroneously computed strategy super-replicate the true claim in an appropriate sense? If the misspecified volatility and jump sensitivity dominate the true ones, we show that following the misspecified Delta strategy does super-replicate $h(S_T)$ in expectation among a wide collection of models. We also show that if a robust pricing operator with a whole class of models is used, the corresponding hedge dominates the contingent claim under each model in expectation. Our results rely on proving stochastic flow properties of the jump-diffusion and the convexity of the value function. In the pure Poisson case, we establish that an overestimation of the jump sensitivity results in an almost sure one-sided hedge. Moreover, in general the misspecified price of the option dominates the true one if the volatility and the jump sensitivity are overestimated.
A path-sampling method to partially identify causal effects in instrumental variable models ; Partial identification approaches are a flexible and robust alternative to standard point-identification approaches in general instrumental variable models. However, this flexibility comes at the cost of a 'curse of cardinality': the number of restrictions on the identified set grows exponentially with the number of points in the support of the endogenous treatment. This article proposes a novel path-sampling approach to this challenge. It is designed for partially identifying causal effects of interest in the most complex models with continuous endogenous treatments. A stochastic process representation allows us to seamlessly incorporate assumptions on individual behavior into the model. Potential applications include dose-response estimation in randomized trials with imperfect compliance, the evaluation of social programs, welfare estimation in demand models, and continuous choice models. As a demonstration, the method provides informative nonparametric bounds on household expenditures under the assumption that expenditure is continuous. The mathematical contribution is an approach to approximately solving infinite-dimensional linear programs on path spaces via sampling.
Stellar Density Profiles of Dwarf Spheroidal Galaxies ; We apply a flexible parametric model, a combination of generalized Plummer profiles, to infer the shapes of the stellar density profiles of the Milky Way's satellite dwarf spheroidal galaxies (dSphs). We apply this model to 40 dSphs using star counts from the Sloan Digital Sky Survey, Pan-STARRS1 Survey, Dark Energy Survey, and Dark Energy Camera Legacy Survey. Using mock data, we examine systematic errors associated with modelling assumptions and identify conditions under which our model can identify 'nonstandard' stellar density profiles that have central cusps and/or steepened outer slopes. Applying our model to real dwarf spheroidals, we do not find evidence for centrally cusped density profiles among the fifteen Milky Way satellites for which our tests with mock data indicate there would be sufficient detectability. We do detect outer profiles that are steepened with respect to a standard Plummer model in several dSphs (Fornax, Leo I, Leo II, and Reticulum II), which may point to distinct evolutionary pathways for these objects. However, the outer slope of the stellar density profile does not yet obviously correlate with other observed galaxy properties.
Gradient dynamics model for drops spreading on polymer brushes ; When a liquid drop spreads on an adaptive substrate, the substrate changes its properties, which may result in an intricate coupled dynamics of drop and substrate. Here we present a generic mesoscale hydrodynamic model for such processes, written as a gradient dynamics on an underlying energy functional. We specify the model details for the example of a drop spreading on a dry polymer brush. There, liquid absorption into the brush results in swelling of the brush, causing changes in the brush topography and wettability. The liquid may also advance within the brush via diffusion or wicking, resulting in coupled drop and brush dynamics. The specific model accounts for coupled spreading, absorption, and wicking dynamics when the underlying energy functional incorporates capillarity, wettability, and brush energy. After employing a simple version of such a model to numerically simulate a droplet spreading on a swelling brush, we conclude with a discussion of possible model extensions.
Cross-Channel Intra-Group Sparsity Neural Network ; Modern deep neural networks rely on overparameterization to achieve state-of-the-art generalization, but overparameterized models are computationally expensive. Network pruning is often employed to obtain less demanding models for deployment. Fine-grained pruning removes individual weights in parameter tensors and can achieve a high model compression ratio with little accuracy degradation; however, it introduces irregularity into the computing dataflow and often does not yield improved model inference efficiency in practice. Coarse-grained model pruning, while realizing satisfactory inference speedup through the removal of network weights in groups (e.g., an entire filter), often leads to significant accuracy degradation. This work introduces the cross-channel intra-group (CCI) sparsity structure, which can prevent the inference inefficiency of fine-grained pruning while maintaining outstanding model performance. We then present a novel training algorithm designed to perform well under the constraint imposed by CCI sparsity. Through a series of comparative experiments we show that our proposed CCI sparsity structure and the corresponding pruning algorithm outperform prior art in inference efficiency by a substantial margin, given suitable hardware acceleration in the future.
On the asymptotic distribution of model averaging based on information criteria ; Smoothed AIC (SAIC) and smoothed BIC (SBIC) are widely used in model averaging and are easy to implement. In contrast, the optimal model averaging methods MMA and JMA have been well developed only for linear models and can be applied to other models only after modification, whereas SAIC and SBIC can be used in all situations where AIC and BIC can be calculated. In this paper, we study the asymptotic behavior of these two commonly used model averaging estimators, the SAIC and SBIC estimators, under standard asymptotics with a general fixed-parameter setup. In addition, the coverage probability of the resulting interval in Buckland et al. (1997) was not studied accurately, only claimed to be close to the intended level; our derivation makes an accurate study possible. We also prove that the confidence interval construction method in Hjort and Claeskens (2003) still works in linear regression with normally distributed errors. Both simulations and an applied example support our theoretical conclusions.
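For concreteness, the smoothed-AIC estimator averages the candidate-model estimates $\hat{\mu}_k$ with exponential information-criterion weights (SBIC replaces AIC by BIC):

```latex
\hat{w}_k = \frac{\exp\!\left(-\mathrm{AIC}_k/2\right)}
                 {\sum_{j} \exp\!\left(-\mathrm{AIC}_j/2\right)},
\qquad
\hat{\mu}_{\mathrm{SAIC}} = \sum_{k} \hat{w}_k\, \hat{\mu}_k .
```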
Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support ; Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich probabilistic models. They further attempt to automate the process of drawing inferences from these models, but doing this successfully is severely hampered by the wide range of non-standard models they can express. As a result, although one can specify complex models in a universal PPS, the provided inference engines often fall far short of what is required. In particular, we show that they produce surprisingly unsatisfactory performance for models where the support varies between executions, often doing no better than importance sampling from the prior. To address this, we introduce a new inference framework, Divide, Conquer, and Combine, which remains efficient for such models, and show how it can be implemented as an automated and generic PPS inference engine. We empirically demonstrate substantial performance improvements over existing approaches on three examples.
Contrast data mining for the MSSM from strings ; We apply techniques from data mining to the heterotic orbifold landscape in order to identify new MSSM-like string models. To do so, so-called contrast patterns are uncovered that help to distinguish between the areas of the landscape that contain MSSM-like models and the rest of the landscape. First, we develop these patterns in the well-known $\mathbb{Z}_6$-II orbifold geometry and then generalize them to all other $\mathbb{Z}_N$ orbifold geometries. Our contrast patterns have a clear physical interpretation and are easy to check for a given string model. Hence, they can be used to narrow down the potentially interesting area of the landscape, which significantly enhances the search for MSSM-like models. By deploying the knowledge gained from contrast mining into a new search algorithm, we create many novel MSSM-like models, especially in corners of the landscape that were hardly accessible to the conventional search algorithm, for example, MSSM-like $\mathbb{Z}_6$-II models with $\Delta(54)$ flavor symmetry.
Evaluation of Surrogate Models for Multi-fin Flapping Propulsion Systems ; The aim of this study is to develop surrogate models for quick, accurate prediction of the thrust forces generated through flapping-fin propulsion for given operating conditions and fin geometries. Different network architectures and configurations are explored to model the training data separately for the lead fin and rear fin of a tandem-fin setup. We progressively improve the data representation of the input parameter space for model predictions. The models are tested on three unseen fin geometries and the predictions are validated with computational fluid dynamics (CFD) data. Finally, the orders-of-magnitude gains in computational performance of these surrogate models compared to experimental and CFD runs, and their trade-off with accuracy, are discussed in the context of this tandem-fin configuration.
Cosmological constraints on minimally and non-minimally coupled scalar field models ; We study minimally and non-minimally coupled scalar field models as possible alternatives for dark energy, the mysterious energy component that is driving the accelerated expansion of the universe. After discussing the dynamics at both the background and perturbation level, we confront the two models with the latest cosmological data. After obtaining updated constraints on their parameters, we perform model selection using the basic information criteria. We find that the $\Lambda$CDM model is strongly favored when the local determination of the Hubble constant is not considered, and that this statement is weakened once the local $H_0$ is included in the analysis. We calculate the parameter combination $S_8 = \sigma_8\sqrt{\Omega_m/0.3}$ and show the decrement of the tension with respect to the Planck results in the case of minimally and non-minimally coupled scalar field models. Finally, for the coupling constant between DE and gravity, we obtain the constraint $\xi \simeq 0.06^{+0.19}_{-0.19}$, approaching the one from solar system tests, $\xi \lesssim 10^{-2}$, and comparable to the conformal value $\xi = 1/6$ at $1\sigma$ uncertainty.
A Robust Data-Driven Approach for Dialogue State Tracking of Unseen Slot Values ; A dialogue state tracker is a key component in dialogue systems which estimates the beliefs of possible user goals at each dialogue turn. Deep learning approaches using recurrent neural networks have shown state-of-the-art performance for the task of dialogue state tracking. Generally, these approaches assume a predefined candidate list and struggle to predict any new dialogue state values that are not seen during training. This makes extending the candidate list for a slot without model retraining infeasible, and also limits modelling for low-resource domains where training data for slot values are expensive. In this paper, we propose a novel dialogue state tracker based on a copying mechanism that can effectively track such unseen slot values without compromising performance on slot values seen during training. The proposed model is also flexible in extending the candidate list without requiring any retraining or change in the model. We evaluate the proposed model on various benchmark datasets (DSTC2, DSTC3, and WoZ2.0) and show that our approach outperforms other end-to-end data-driven approaches in tracking unseen slot values and also provides significant advantages in modelling for DST.
InSpectre: Breaking and Fixing Microarchitectural Vulnerabilities by Formal Analysis ; The recent Spectre attacks have demonstrated the fundamental insecurity of current computer microarchitecture. The attacks use features like pipelining, out-of-order execution, and speculation to extract arbitrary information about the memory contents of a process. A comprehensive formal microarchitectural model capable of representing the forms of out-of-order and speculative behavior that can meaningfully be implemented in a high-performance pipelined architecture has not yet emerged. Such a model would be very useful, as it would allow the existence and non-existence of vulnerabilities, and the soundness of countermeasures, to be formally established. In this paper we present such a model, targeting single-core processors. The model is intentionally very general and provides an infrastructure for defining models of real CPUs. It incorporates the microarchitectural features that underpin all known Spectre vulnerabilities. We use the model to elucidate the security of existing and new vulnerabilities, as well as to formally analyze the effectiveness of proposed countermeasures. Specifically, we discover three new potential vulnerabilities, including a new variant of Spectre v4, a vulnerability on speculative fetching, and a vulnerability on out-of-order execution, and analyze the effectiveness of three existing countermeasures: constant time, Retpoline, and ARM's Speculative Store Bypass Safe (SSBS).
Improvements for drift-diffusion plasma fluid models with explicit time integration ; Drift-diffusion plasma fluid models are commonly used to simulate electric discharges. Such models can be computationally very efficient if they are combined with explicit time integration. This paper deals with two issues that often arise with such models. First, a high plasma conductivity can severely limit the time step. A fully explicit method to overcome this limitation is presented. This method is compared to the existing semi-implicit method and is shown to have several advantages. The second issue is specific to models with the local field approximation. Near strong density and electric field gradients, electrons can diffuse parallel to the field and unphysically generate ionization. Existing and new approaches to correct this behavior are compared. Details on the implementation of the models and the various approaches are provided.
Invariants of models of genus one curves and modular forms ; An invariant of a model of a genus one curve is a polynomial in the coefficients of the model that is stable under certain linear transformations. The classical example of an invariant is the discriminant, which characterizes the singularity of models. The ring of invariants of genus one models over a field is generated by two elements. Fisher normalized these invariants for models of degree $n = 2, 3, 4$ in such a way that they are moreover defined over the integers. We provide an alternative way to express these normalized invariants using modular forms. This method relies on a direct computation for the discriminants based on their geometric properties.
Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information ; Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration. However, existing NAT models still have a big gap in translation quality compared to autoregressive neural machine translation models, due to the enormous decoding space. To address this problem, we propose a novel NAT framework named ReorderNAT which explicitly models reordering information in the decoding procedure. We further introduce deterministic and non-deterministic decoding strategies that utilize reordering information to narrow the decoding search space in our proposed ReorderNAT. Experimental results on various widely used datasets show that our proposed model achieves better performance than existing NAT models, and even achieves translation quality comparable to autoregressive translation models with a significant speedup.
Iterative Estimation of Mixed Exponential Random Graph Models with Nodal Random Effects ; The presence of unobserved node-specific heterogeneity in exponential random graph models (ERGM) is a general concern, both with respect to model validity and estimation stability. We therefore extend the ERGM by including node-specific random effects that account for unobserved heterogeneity in the network. This leads to a mixed model with parametric as well as random coefficients, labelled as mixed ERGM. Estimation is carried out by combining approximate penalized pseudolikelihood estimation for the random effects with maximum likelihood estimation for the remaining parameters in the model. This approach provides a stable algorithm which allows fitting nodal heterogeneity effects even for large-scale networks. We also propose model selection based on the AIC to check for node-specific heterogeneity.
Microscopic Model Building for Black Hole Membranes from Constraints of Symmetry ; Einstein equations projected on black hole horizons give rise to the equations of motion of a viscous fluid. This suggests a way to understand the microscopic degrees of freedom on the black hole horizon by focusing on the physics of this fluid. In this talk, we approach this problem by building a crude microscopic model for the horizon fluid (HF) corresponding to asymptotically flat black holes in 3+1 dimensions. The symmetry requirement for our model is that it should incorporate the $S^1$ diffeo-symmetry on the black hole horizon. The second constraint comes from the demand that the correct value of the coefficient of bulk viscosity of the HF can be deduced from the model. Both requirements can be satisfied by adopting the eight-vertex Baxter model on an $S^2$ surface. We show that the adiabatic entropy quantisation proposed by Bekenstein also follows from this model. Finally, we argue that the results obtained so far suggest that a perturbed black hole can be described by a CFT perturbed by relevant operators, and discuss the physical implications.
Deep Transfer Learning for Thermal Dynamics Modeling in Smart Buildings ; Thermal dynamics modeling has been a critical issue in building heating, ventilation, and air-conditioning (HVAC) systems, as it can significantly affect control and maintenance strategies. Due to the uniqueness of each specific building, traditional thermal dynamics modeling approaches, which depend heavily on physics knowledge, cannot generalize well. This study proposes a deep supervised domain adaptation (DSDA) method for thermal dynamics modeling of building indoor temperature evolution and energy consumption. A long short-term memory network based sequence-to-sequence scheme is pretrained on a large amount of data collected from one building and then adapted to another building with a limited amount of data via model fine-tuning. We use four publicly available datasets: SML and AHU for temperature evolution, and long-term datasets from two different commercial buildings, termed Building 1 and Building 2, for energy consumption. We show that deep supervised domain adaptation effectively adapts the pretrained model from one building to another and yields better predictive performance than learning from scratch with only a limited amount of data.
Robust Unsupervised Audio-Visual Speech Enhancement Using a Mixture of Variational Autoencoders ; Recently, an audio-visual speech generative model based on the variational autoencoder (VAE) has been proposed, which is combined with a nonnegative matrix factorization (NMF) model for the noise variance to perform unsupervised speech enhancement. When the visual data is clean, speech enhancement with the audio-visual VAE shows better performance than with the audio-only VAE, which is trained on audio-only data. However, the audio-visual VAE is not robust against noisy visual data, e.g., when for some video frames the speaker's face is not frontal or the lip region is occluded. In this paper, we propose a robust unsupervised audio-visual speech enhancement method based on a per-frame VAE mixture model. This mixture model consists of a trained audio-only VAE and a trained audio-visual VAE. The motivation is to skip noisy visual frames by switching to the audio-only VAE model. We present a variational expectation-maximization method to estimate the parameters of the model. Experiments show the promising performance of the proposed method.
An nl-model with radiative transfer for hydrogen recombination line masers ; Atomic hydrogen masers occur in recombination plasmas in sufficiently dense HII regions. These hydrogen recombination line (HRL) masers have been observed in a handful of objects to date, and the analysis of the atomic physics involved has been rudimentary. In this work a new model of HRL masers is presented which uses an nl-model to describe the atomic populations interacting with free-free radiation from the plasma, and an escape probability framework to deal with radiative transfer effects. The importance of including collisions between angular momentum quantum states and the free-free emission in models of HRL masers is demonstrated. The model is used to describe the general behaviour of radiative transfer of HRLs and to investigate the conditions under which HRL masers form. The model results show good agreement with observations collected over a broad range of frequencies. Theoretical predictions are made regarding the ratio of recombination lines from the same upper quantum level for these objects.
Data Economy for Prosumers in a Smart Grid Ecosystem ; Smart grid technologies are enablers of new business models for domestic consumers with local flexibility (generation, loads, storage), where access to data is a key requirement in the value stream. However, legislation on personal data privacy and protection imposes the need to develop local models for flexibility modeling and forecasting, and to exchange models instead of personal data. This paper describes the functional architecture of a home energy management system (HEMS) and its optimization functions. A set of data-driven models, embedded in the HEMS, are discussed for improving renewable energy forecasting skill and for modeling the multi-period flexibility of distributed energy resources.
Efficient Animation of Sparse Voxel Octrees for Real-Time Ray Tracing ; A considerable limitation of employing sparse voxel octrees (SVOs) as a model format for ray tracing has been that the octree data structure is inherently static. Due to traversal algorithms' dependence on the strict hierarchical structure of octrees, it has been challenging to achieve real-time performance of SVO model animation in ray tracing, since the octree data structure would typically have to be regenerated every frame. Presented in this article is a novel method for animating models specified in the SVO format. The method distinguishes itself by permitting model transformations such as rotation, translation, and anisotropic scaling, while preserving the hierarchical structure of SVO models so that they may be efficiently traversed. Due to its modest memory footprint and straightforward arithmetic operations, the method is well suited for implementation in hardware. A software ray tracing implementation of animated SVO models demonstrates real-time performance on current-generation desktop GPUs, and shows that the animation method does not substantially slow down the rendering procedure compared to rendering static SVOs.
Training a code-switching language model with monolingual data ; A lack of code-switching data complicates the training of code-switching (CS) language models. We propose an approach to train such CS language models on monolingual data only. By constraining and normalizing the output projection matrix in RNN-based language models, we bring embeddings of different languages closer to each other. Numerical and visualization results show that the proposed approaches remarkably improve the performance of CS language models trained on monolingual data. The proposed approaches are comparable to, or even better than, training CS language models with artificially generated CS data. We additionally use unsupervised bilingual word translation to analyze whether semantically equivalent words in different languages are mapped together.
A Discriminative Gaussian Mixture Model with Sparsity ; In probabilistic classification, a discriminative model based on the softmax function has a potential limitation in that it assumes unimodality for each class in the feature space. The mixture model can address this issue, although it leads to an increase in the number of parameters. We propose a sparse classifier based on a discriminative GMM, referred to as a sparse discriminative Gaussian mixture SDGM. In the SDGM, a GMMbased discriminative model is trained via sparse Bayesian learning. Using this sparse learning framework, we can simultaneously remove redundant Gaussian components and reduce the number of parameters used in the remaining components during learning; this learning method reduces the model complexity, thereby improving the generalization capability. Furthermore, the SDGM can be embedded into neural networks NNs, such as convolutional NNs, and can be trained in an endtoend manner. Experimental results demonstrated that the proposed method outperformed the existing softmaxbased discriminative models.
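For illustration, a bare-bones discriminative GMM posterior, my own sketch that omits the sparse Bayesian learning used to prune components: per-class mixture likelihoods are normalized, softmax-style, into class posteriors.

```python
# Illustrative discriminative-GMM classifier (not the paper's code): each class
# has its own Gaussian mixture, and the posterior is the normalized class score.
import numpy as np
from scipy.stats import multivariate_normal

def dgmm_posterior(x, means, covs, weights, priors):
    # means[c][k], covs[c][k], weights[c][k]: parameters of class c's mixture
    scores = []
    for c in range(len(priors)):
        lik = sum(w * multivariate_normal.pdf(x, m, S)
                  for w, m, S in zip(weights[c], means[c], covs[c]))
        scores.append(priors[c] * lik)
    scores = np.array(scores)
    return scores / scores.sum()  # P(class | x)

means = [[np.zeros(2), 2.0 * np.ones(2)], [np.array([-2.0, 0.0])]]
covs = [[np.eye(2), np.eye(2)], [np.eye(2)]]
weights = [[0.5, 0.5], [1.0]]
print(dgmm_posterior(np.array([1.8, 2.1]), means, covs, weights, [0.5, 0.5]))
```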
Unsupervised Visual Representation Learning with Increasing Object Shape Bias ; Traditional supervised learning keeps pushing convolutional neural networks CNNs to stateoftheart performance. However, the lack of largescale annotated data remains a major problem due to its high cost; even the ImageNet dataset is now overfitted by complex models. The success of unsupervised learning methods, represented by the BERT model in the natural language processing NLP field, shows their great potential: they make unlimited training samples possible, and their strong universal generalization ability has directly changed the direction of NLP research. In this article, we propose a novel unsupervised learning method based on contrastive predictive coding. With it, we can train a model on unannotated images and improve its performance to reach the stateoftheart at the same level of model complexity. Moreover, since the number of training images can be amplified without limit, a universal largescale pretrained computer vision model becomes possible in the future.
Exploiting Token and Pathbased Representations of Code for Identifying SecurityRelevant Commits ; Public vulnerability databases such as CVE and NVD account for only 60% of security vulnerabilities present in opensource projects, and are known to suffer from inconsistent quality. Over the last two years, there has been considerable growth in the number of known vulnerabilities across projects available in various repositories such as NPM and Maven Central. Such an increasing risk calls for a mechanism to infer the presence of security threats in a timely manner. We propose novel hierarchical deep learning models for the identification of securityrelevant commits from either the commit diff or the source code for the Java classes. By comparing the performance of our model against code2vec, a stateoftheart model that learns from pathbased representations of code, and a logistic regression baseline, we show that deep learning models show promising results in identifying securityrelated commits. We also conduct a comparative analysis of how various deep learning models learn across different input representations and the effect of regularization on the generalization of our models.
ThickNet Parallel Network Structure for Sequential Modeling ; Recurrent neural networks have been widely used in sequence learning tasks. In previous studies, the performance of the model has always been improved by either wider or deeper structures. However, the former becomes more prone to overfitting, while the latter is difficult to optimize. In this paper, we propose a simple new model named ThickNet, obtained by expanding the network along another dimension: thickness. Multiple parallel values are obtained via more sets of parameters in each hidden state, and the maximum value is selected as the final output among the parallel intermediate outputs, as sketched below. Notably, ThickNet can efficiently avoid overfitting, and is easier to optimize than the vanilla structures due to the large dropout affiliated with it. Our model is evaluated on four sequential tasks including the adding problem, permuted sequential MNIST, text classification and language modeling. The results of these tasks demonstrate that our model can not only improve accuracy with faster convergence but also facilitate a better generalization ability.
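One plausible reading of the thickness idea, sketched under my own assumptions: each layer produces k parallel candidate outputs from k parameter sets and keeps only the elementwise maximum. Names and sizes are illustrative.

```python
# Hedged sketch of a "thick" layer: k parallel values per output unit, reduced
# by an elementwise max (closely related to maxout-style units).
import torch
import torch.nn as nn

class ThickLinear(nn.Module):
    def __init__(self, d_in, d_out, thickness=4):
        super().__init__()
        self.proj = nn.Linear(d_in, d_out * thickness)
        self.d_out, self.k = d_out, thickness

    def forward(self, x):
        y = self.proj(x).view(*x.shape[:-1], self.k, self.d_out)
        return y.max(dim=-2).values  # keep the max of the k parallel values

out = ThickLinear(32, 64)(torch.randn(8, 32))
print(out.shape)  # torch.Size([8, 64])
```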
Horizon thermodynamics in holographic cosmological models with a powerlaw term ; Thermodynamics on the horizon of a flat universe at late times is studied in holographic cosmological models that assume an associated entropy on the horizon. In such models, a Lambdat model similar to a timevarying Lambdat cosmology is favored because of the consistency of energy flows across the horizon. Based on this consistency, a Lambdat model with a powerlaw term proportional to Halpha is formulated to systematically examine the evolution of the BekensteinHawking entropy. Here, H is the Hubble parameter and alpha is a free parameter whose value is a real number. The present model always satisfies the second law of thermodynamics on the horizon. In particular, the universe for alpha < 2 tends to approach thermodynamic equilibriumlike states. Consequently, when alpha < 2, the maximization of the entropy should be satisfied as well, at least in the last stage of the evolution of an expanding universe. A relaxationlike process before the last stage is also examined from a thermodynamics viewpoint.
Basic Ideas and Tools for ProjectionBased Model Reduction of Parametric Partial Differential Equations ; We first provide the functional analysis background required for reduced order modeling and present the underlying concepts of reduced basis model reduction. The projectionbased model reduction framework under affinity assumptions, offlineonline decomposition and error estimation is introduced. Several tools for geometry parametrizations, such as free form deformation, radial basis function interpolation and inverse distance weighting interpolation are explained. The empirical interpolation method is introduced as a general tool to deal with nonaffine parameter dependency and nonlinear problems. The discrete and matrix versions of the empirical interpolation are considered as well. Active subspaces properties are discussed to reduce highdimensional parameter spaces as a preprocessing step. Several examples illustrate the methodologies.
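A compact offline-online sketch of the projection-based framework under an affinity assumption, for a linear parametric system A(mu) x = b with A(mu) = A0 + mu A1; the snapshot set, sizes, and POD truncation are illustrative.

```python
# POD-Galerkin sketch: build a reduced basis from snapshots offline, then solve
# only small r x r systems online for new parameter values.
import numpy as np

rng = np.random.default_rng(0)
n, n_snap, r = 200, 20, 5
A0 = np.diag(np.linspace(1.0, 2.0, n)); A1 = np.eye(n)
b = rng.normal(size=n)

# Offline: snapshots over a parameter sample, POD basis from their SVD.
S = np.column_stack([np.linalg.solve(A0 + mu * A1, b)
                     for mu in np.linspace(0.1, 1.0, n_snap)])
V = np.linalg.svd(S, full_matrices=False)[0][:, :r]

# Offline: reduced affine operators; the online stage never touches size n.
A0r, A1r, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

# Online: cheap solve for a new parameter, then lift back to full space.
mu = 0.37
x_approx = V @ np.linalg.solve(A0r + mu * A1r, br)
print(np.linalg.norm(x_approx - np.linalg.solve(A0 + mu * A1, b)))
```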
A Flexible MixedFrequency Vector Autoregression with a SteadyState Prior ; We propose a Bayesian vector autoregressive VAR model for mixedfrequency data. Our model is based on the meanadjusted parametrization of the VAR and allows for an explicit prior on the 'steady states', i.e., the unconditional means of the included variables. Based on recent developments in the literature, we discuss extensions of the model that improve the flexibility of the modeling approach. These extensions include a hierarchical shrinkage prior for the steadystate parameters, and the use of stochastic volatility to model heteroskedasticity. We put the proposed model to use in a forecast evaluation using US data consisting of 10 monthly and 3 quarterly variables. The results show that the predictive ability typically benefits from using mixedfrequency data, and that improvements can be obtained for both monthly and quarterly variables. We also find that the steadystate prior generally enhances the accuracy of the forecasts, and that accounting for heteroskedasticity by means of stochastic volatility usually provides additional improvements, although not for all variables.
Existence of nodal line semimetal in a generalized three dimensional Haldane model ; We construct and study a time reversal broken tight binding model on the diamond lattice with complex nextnearestneighbour hopping, which can be thought of as a generalisation of the two dimensional Haldane model to three dimensions. The model also breaks inversion symmetry owing to a sublattice dependent chemical potential. We calculate the spectrum of the model and find the existence of six pairs of anisotropic gapless points with linear dependence on momentum. The coordinates of the gapless points are (2pi, pi pm k0, 0) and (2pi, pi pm k0, 2pi), together with their possible permutations. The condition for a gapless spectrum is very similar to the two dimensional case. Each gapless point has a well defined chirality, and in the gapless phase specific sets of planes have nonzero Chern numbers. The gapped phase is a trivial bulk insulator with vanishing Chern number as well as Hopf index. The model belongs to the symmetry class AIII according to the tenfold way of classification. Surprisingly, the gapless phase contains a gapped surface state, whereas the gapped phase has gapless surface states in the (1,1,1) direction.
Insider threat modeling An adversarial risk analysis approach ; Insider threats entail major security issues in geopolitics, cyber risk management and business organization. The game theoretic models proposed so far do not take into account some important factors such as the organisational culture and whether the attacker was detected or not. They also fail to model the defensive mechanisms already put in place by an organisation to mitigate an insider attack. We propose two new models which incorporate these settings and hence are more realistic. Most earlier work in the field has focused on standard game theoretic approaches to find the solutions. We use the adversarial risk analysis ARA approach to find the solution to our models. ARA does not assume common knowledge and solves the problem from the point of view of one of the players, taking into account their knowledge and uncertainties regarding the choices available to them, to their adversaries, the possible outcomes, their utilities and their opponents' utilities. Our models and the ARA solutions are general and can be applied to most insider threat scenarios. A data security example illustrates the discussion.
Regularized and Smooth Double Core Tensor Factorization for Heterogeneous Data ; We introduce a general tensor model suitable for data analytic tasks on heterogeneous datasets, wherein there are joint lowrank structures within groups of observations, but also discriminative structures across different groups. To capture such complex structures, a double core tensor DCOT factorization model is introduced together with a family of smoothing loss functions. By leveraging the proposed smoothing function, the model accurately estimates the model factors, even in the presence of missing entries. A linearized ADMM method is employed to solve regularized versions of DCOT factorizations, which avoids large tensor operations and large memory storage requirements. Further, we establish theoretically its global convergence, together with consistency of the estimates of the model parameters. The effectiveness of the DCOT model is illustrated on several realworld examples including image completion, recommender systems, subspace clustering and detecting modules in heterogeneous Omics multimodal data, since it provides more insightful decompositions than conventional tensor methods.
Finite Element Simulations of an ElastoViscoplastic Model for Clay ; In this paper, we develop an elastoviscoplastic EVP model for clay using the nonassociated flow rule. This is accomplished by using a modified form of Perzyna's overstressed EVP theory, the critical state soil mechanics, and the multisurface theory. The new model includes six parameters, five of which are identical to those in the critical state soil mechanics model. The remaining parameter is the generalized nonlinear secondary compression index. The EVP model was implemented in a nonlinear coupled consolidation code using a finiteelement numerical algorithm AFENA. We then tested the model for different clays, such as the Osaka clay, the San Francisco Bay Mud clay, the Kaolin clay, and the Hong Kong Marine Deposit clay. The numerical results show good agreement with the experimental data.
Unsupervised Neural Mask Estimator For Generalized EigenValue Beamforming Based ASR ; The stateoftheart methods for acoustic beamforming in multichannel ASR are based on a neural mask estimator that predicts the presence of speech and noise. These models are trained using a paired corpus of clean and noisy recordings (the teacher model). In this paper, we attempt to move away from the requirement of having supervised clean recordings for training the mask estimator. Models based on signal enhancement and beamforming using multichannel linear prediction serve as the required mask estimate. In this way, the model training can also be carried out on real recordings of noisy speech rather than simulated ones alone, as done in a typical teacher model. Several experiments performed on noisy and reverberant environments in the CHiME3 corpus as well as the REVERB challenge corpus highlight the effectiveness of the proposed approach. The ASR results for the proposed approach provide performances that are significantly better than a teacher model trained on an outofdomain dataset and on par with the oracle mask estimators trained on the indomain dataset.
Generalized Guerra's interpolation schemes for dense associative neural networks ; In this work we develop analytical techniques to investigate a broad class of associative neural networks set in the highstorage regime. These techniques translate the original statisticalmechanical problem into an analyticalmechanical one, which implies solving a set of partial differential equations rather than tackling the canonical probabilistic route. We test the method on the classical Hopfield model, where the cost function includes only twobody interactions (i.e., quadratic terms), and on the relativistic Hopfield model, where the expansion of the cost function includes pbody (i.e., of degree p) contributions. Under the replica symmetric assumption, we paint the phase diagrams of these models by obtaining the explicit expression of their free energy as a function of the model parameters (i.e., noise level and memory storage). Further, since for nonpairwise models ergodicity breaking is not necessarily a critical phenomenon, we develop a fluctuation analysis and find that criticality is preserved in the relativistic model.
Learning Structured Representations of Spatial and Interactive Dynamics for Trajectory Prediction in Crowded Scenes ; Context plays a significant role in the generation of motion for dynamic agents in interactive environments. This work proposes a modular method that utilises a learned model of the environment for motion prediction. This modularity explicitly allows for unsupervised adaptation of trajectory prediction models to unseen environments and new tasks by relying on unlabelled image data only. We model both the spatial and dynamic aspects of a given environment alongside the per agent motions. This results in more informed motion prediction and allows for performance comparable to the stateoftheart. We highlight the model's prediction capability using a benchmark pedestrian prediction problem and a robot manipulation task and show that we can transfer the predictor across these tasks in a completely unsupervised way. The proposed approach allows for robust and label efficient forward modelling, and relaxes the need for full model retraining in new environments.
On MultiCascade Influence Maximization Model, Hardness and Algorithmic Framework ; This paper studies the multicascade influence maximization problem, which explores strategies for launching one information cascade in a social network with multiple existing cascades. With natural extensions to the classic models, we first propose the independent multicascade model where the diffusion process is governed by the socalled activation function. We show that the proposed model is sufficiently flexible as it generalizes most of the existing cascadebased models. We then study the multicascade influence maximization problem under the designed model and provide approximation hardness under common complexity assumptions, namely the Exponential Time Hypothesis and NP not subseteq DTIMEnpoly log n. Given the hardness results, we build a framework for designing heuristic seed selection algorithms with a testable datadependent approximation ratio. The designed algorithm leverages upper and lower bounds, which reveal the key combinatorial structure behind the multicascade influence maximization problem. The performance of the framework is theoretically analyzed and practically evaluated through extensive simulations. The superiority of the proposed solution is supported by encouraging experimental results, in terms of effectiveness and efficiency.
Rank Aggregation via Heterogeneous Thurstone Preference Models ; We propose the Heterogeneous Thurstone Model HTM for aggregating ranked data, which can take the accuracy levels of different users into account. By allowing different noise distributions, the proposed HTM model maintains the generality of Thurstone's original framework, and as such, also extends the BradleyTerryLuce BTL model for pairwise comparisons to heterogeneous populations of users. Under this framework, we also propose a rank aggregation algorithm based on alternating gradient descent to estimate the underlying item scores and accuracy levels of different users simultaneously from noisy pairwise comparisons. We theoretically prove that the proposed algorithm converges linearly up to a statistical error which matches that of the stateoftheart method for the singleuser BTL model. We evaluate the proposed HTM model and algorithm on both synthetic and real data, demonstrating that it outperforms existing methods.
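A toy version of the alternating-gradient idea under a logistic (BTL-style) noise assumption: user u prefers item i over item j with probability sigmoid(gamma_u (s_i - s_j)), and gradient steps jointly update the scores s and accuracies gamma. This is an illustrative sketch, not the authors' algorithm.

```python
# Negative log-likelihood and a joint gradient step for pairwise comparisons
# with per-user accuracy levels gamma (illustrative parameterization).
import numpy as np

def nll(s, gamma, data):
    # data: list of (user, i, j), meaning user preferred item i over item j
    z = np.array([gamma[u] * (s[i] - s[j]) for u, i, j in data])
    return np.sum(np.log1p(np.exp(-z)))

def grad_step(s, gamma, data, lr=0.05):
    gs, gg = np.zeros_like(s), np.zeros_like(gamma)
    for u, i, j in data:
        p = 1.0 / (1.0 + np.exp(gamma[u] * (s[i] - s[j])))  # 1 - P(observed)
        gs[i] -= p * gamma[u]; gs[j] += p * gamma[u]
        gg[u] -= p * (s[i] - s[j])
    return s - lr * gs, gamma - lr * gg

s, gamma = np.zeros(4), np.ones(2)
data = [(0, 0, 1), (0, 0, 2), (1, 2, 3), (1, 0, 3)]
for _ in range(200):
    s, gamma = grad_step(s, gamma, data)
print(np.argsort(-s))  # estimated ranking of the four items
```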
The Knowledge Within Methods for DataFree Model Compression ; Recently, an extensive amount of research has been focused on compressing and accelerating Deep Neural Networks DNN. So far, high compression rate algorithms require part of the training dataset for a low precision calibration, or a finetuning process. However, this requirement is unacceptable when the data is unavailable or contains sensitive information, as in medical and biometric usecases. We present three methods for generating synthetic samples from trained models. Then, we demonstrate how these samples can be used to calibrate and finetune quantized models without using any real data in the process. Our best performing method has a negligible accuracy degradation compared to the original training set. This method, which leverages intrinsic batch normalization layers' statistics of the trained model, can be used to evaluate data similarity. Our approach opens a path towards genuine datafree model compression, alleviating the need for training data during model deployment.
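One of the described directions, sketched under my own assumptions in PyTorch: optimize random inputs so that the activations entering each batch normalization layer match that layer's stored running statistics, which yields synthetic calibration samples without touching any real data.

```python
# Hedged sketch: synthesize inputs whose BN-input statistics match the trained
# model's running statistics (a DeepInversion-style objective, simplified).
import torch
import torch.nn as nn

def bn_matching_inputs(model, shape, steps=100, lr=0.1):
    stats, hooks = [], []
    def hook(mod, inp, out):
        x = inp[0]
        mu = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        stats.append(((mu - mod.running_mean) ** 2).sum()
                     + ((var - mod.running_var) ** 2).sum())
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(hook))
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    model.eval()  # BN uses running stats; hooks still see the layer inputs
    for _ in range(steps):
        stats.clear(); opt.zero_grad()
        model(x)
        torch.stack(stats).sum().backward()
        opt.step()
    for h in hooks:
        h.remove()
    return x.detach()

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
synthetic = bn_matching_inputs(model, (4, 3, 16, 16))
print(synthetic.shape)
```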
Galileon scalar electrodynamics ; We construct a consistent model of Galileon scalar electrodynamics. The model satisfies three essential requirements 1 the action contains higherorder derivative terms and obeys the Galilean symmetry, 2 the equations of motion also satisfy the Galilean symmetry and contain only up to secondorder derivative terms in the matter fields and, hence, do not suffer from instability, and 3 local U1 gauge invariance is preserved. We show that the nonminimal coupling terms in our model are different from those of the real scalar Galileon models; however, they match with the Galileon real scalar field action. We show that the model can lead to an accelerated expansion in the early Universe. We discuss the implications of the model for cosmological inflation.
Modeling and Prediction of Iran's Steel Consumption Based on Economic Activity Using Support Vector Machines ; The steel industry has great impacts on the economy and the environment of both developed and underdeveloped countries. The importance of this industry and these impacts have led many researchers to investigate the relationship between a country's steel consumption and its economic activity, resulting in the socalled intensity of use model. This paper investigates the validity of the intensity of use model for the case of Iran's steel consumption and extends this hypothesis by using the indexes of economic activity to model the steel consumption. We use the proposed model to train support vector machines and predict the future values of Iran's steel consumption. The paper provides detailed correlation tests for the factors used in the model to check for their relationships with the steel consumption. The results indicate that Iran's steel consumption is strongly correlated with its economic activity, following the same pattern as the economy over the last four decades.
The fundamental thermodynamic bounds on finite models ; The minimum heat cost of computation is subject to bounds arising from Landauer's principle. Here, I derive bounds on finite modelling, the production or anticipation of patterns (timeseries data), by devices that model the pattern in a piecewise manner and are equipped with a finite amount of memory. When producing a pattern, I show that the minimum dissipation is proportional to the information in the model's memory about the pattern's history that never manifests in the device's future behaviour and must be expunged from memory. I provide a general construction of models that allow this dissipation to be reduced to zero. By also considering devices that consume, or effect arbitrary changes on, a pattern, I discuss how these finite models can form an information reservoir framework consistent with the second law of thermodynamics.
Audioattention discriminative language model for ASR rescoring ; Endtoend approaches for automatic speech recognition ASR benefit from directly modeling the probability of the word sequence given the input audio stream in a single neural network. However, compared to conventional ASR systems, these models typically require more data to achieve comparable results. Wellknown model adaptation techniques, to account for domain and style adaptation, are not easily applicable to endtoend systems. Conventional HMMbased systems, on the other hand, have been optimized for various production environments and use cases. In this work, we propose to combine the benefits of endtoend approaches with a conventional system using an attentionbased discriminative language model that learns to rescore the output of a firstpass ASR system. We show that learning to rescore a list of potential ASR outputs is much simpler than learning to generate the hypothesis. The proposed model results in an 8% improvement in word error rate even when the amount of training data is a fraction of the data used for training the firstpass system.
GAMBIT and its Application in the Search for Physics Beyond the Standard Model ; The Global and Modular BeyondStandard Model Inference Tool GAMBIT is an open source software framework for performing global statistical fits of particle physics models, using a wide range of particle and astroparticle data. In this review, we describe the design principles of the package, the statistical and sampling frameworks, the experimental data included, and the first two years of physics results generated with it. This includes supersymmetric models, axion theories, Higgs portal dark matter scenarios and an extension of the Standard Model to include righthanded neutrinos. Owing to the broad spectrum of physics scenarios tackled by the GAMBIT community, this also serves as a convenient, selfcontained review of the current experimental and theoretical status of the most popular models of dark matter.
Identity Preserve Transform Understand What Activity Classification Models Have Learnt ; Activity classification has observed great success recently. The performance on small datasets is almost saturated and people are moving towards larger datasets. What leads to the performance gain on the model, and what has the model learnt? In this paper we propose the identity preserve transform IPT to study this problem. IPT manipulates the nuisance factors (background, viewpoint, etc.) of the data while keeping those factors related to the task (human motion) unchanged. To our surprise, we found popular models are using highly correlated information (background, objects) to achieve high classification accuracy, rather than using the essential information (human motion). This can explain why an activity classification model usually fails to generalize to datasets it was not trained on. We implement IPT in two forms, i.e. imagespace transform and 3D transform, using synthetic images. The tool will be made opensource to help study model and dataset design.
An Efficient Augmented Lagrangian Method for Support Vector Machine ; Support vector machine SVM has proved to be a successful approach for machine learning. Two typical SVM models are the L1loss model for support vector classification SVC and the epsilonL1loss model for support vector regression SVR. Due to the nonsmoothness of the L1loss function in the two models, most of the traditional approaches focus on solving the dual problem. In this paper, we propose an augmented Lagrangian method for the L1loss model, which is designed to solve the primal problem. By tackling the nonsmooth term in the model with MoreauYosida regularization and the proximal operator, the subproblem in the augmented Lagrangian method reduces to a nonsmooth linear system, which can be solved via the quadratically convergent semismooth Newton's method. Moreover, the high computational cost of semismooth Newton's method can be significantly reduced by exploiting the sparse structure in the generalized Jacobian. Numerical results on various datasets in LIBLINEAR show that the proposed method is competitive with the most popular solvers in both speed and accuracy.
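The nonsmooth building block can be made concrete with the closed-form proximal operator of the plus function t -> max(0, t), which underlies the MoreauYosida regularization of the L1loss; the helper names below are mine, and this is a sketch of the ingredient rather than the full solver.

```python
# Componentwise proximal operator and Moreau envelope of f(u) = max(0, u).
import numpy as np

def prox_plus(v, t):
    # argmin_u t*max(0, u) + 0.5*(u - v)^2, solved in closed form:
    # u = v - t if v > t; u = 0 if 0 <= v <= t; u = v if v < 0.
    return np.where(v > t, v - t, np.where(v < 0.0, v, 0.0))

def moreau_envelope_plus(v, t):
    p = prox_plus(v, t)
    return np.maximum(p, 0.0).sum() + ((v - p) ** 2).sum() / (2.0 * t)

v = np.array([-1.0, 0.3, 2.0])
print(prox_plus(v, t=0.5), moreau_envelope_plus(v, t=0.5))
```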
Application of autonomous pathfinding system to kinematics and dynamics problems by implementing network constraints ; A neural network system in an animal brain contains many modules and generates adaptive behavior by integrating the outputs from the modules. The mathematical modeling of such large systems to elucidate the mechanism of rapidly finding solutions is vital to develop control methods for robotics and distributed computation algorithms. In this article, we present a network model to solve kinematics and dynamics problems for robot arm manipulation. This model represents the solution as an attractor in the phase space and also finds a new solution automatically when perturbations such as variations in the end position of the arm or obstacles occur. In the proposed model, the physical constraints, target position, and the existence of obstacles are represented by network connections. Therefore, the theoretical framework of the model remains almost the same when the number of constraints increases. In addition, as the model is regarded as a distributed system, it can be applied toward the development of parallel computation algorithms.
DeepHashing using TripletLoss ; Hashing is one of the most efficient techniques for approximate nearest neighbour search for large scale image retrieval. Most of these techniques are based on handengineered features and do not give optimal results all the time. Deep convolutional neural networks have proven to generate very effective representations of images for various computer vision tasks, and inspired by this, several deep hashing models such as that of Wang et al. (2016) have been proposed. These models train on the triplet loss function, which can be used to train models with superior representation capabilities. Building on the latest advancements in training with the triplet loss, I propose new techniques that help deep hashing models train faster and more efficiently. Experimental results show that, using the more efficient techniques for training on the triplet loss, we obtain a 5% improvement in our model compared to the original work of Wang et al. (2016). Using a larger model and more training data, we can drastically improve performance with the techniques we propose.
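For reference, a standard triplet loss of the kind such hashing models train on; this is an illustrative sketch over real-valued relaxations of the binary codes, not the paper's exact training code.

```python
# Margin-based triplet loss: pull anchor toward positive, push from negative.
import torch
import torch.nn.functional as F

def triplet_hash_loss(anchor, positive, negative, margin=0.5):
    # inputs: (batch, code_dim) real-valued relaxations of hash codes
    d_ap = (anchor - positive).pow(2).sum(dim=1)
    d_an = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()

a, p, n = (torch.randn(16, 48) for _ in range(3))
print(triplet_hash_loss(a, p, n).item())
```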
Improving Abstractive Text Summarization with History Aggregation ; Recent neural sequence to sequence models have provided feasible solutions for abstractive summarization. However, such models still struggle to handle long text dependencies in the summarization task. A highquality summarization system usually depends on a strong encoder which can refine important information from long input texts so that the decoder can generate salient summaries from the encoder's memory. In this paper, we propose an aggregation mechanism based on the Transformer model to address the challenge of long text representation. Our model can review history information to make the encoder hold more memory capacity. Empirically, we apply our aggregation mechanism to the Transformer model and experiment on the CNNDailyMail dataset, achieving higher quality summaries compared to several strong baseline models on the ROUGE metrics.
Universal Inference ; We propose a general method for constructing hypothesis tests and confidence sets that have finite sample guarantees without regularity conditions. We refer to such procedures as universal. The method is very simple and is based on a modified version of the usual likelihood ratio statistic, that we call the split likelihood ratio test split LRT. The method is especially appealing for irregular statistical models. Canonical examples include mixture models and models that arise in shapeconstrained inference. Constructing tests and confidence sets for such models is notoriously difficult. Typical inference methods, like the likelihood ratio test, are not useful in these cases because they have intractable limiting distributions. In contrast, the method we suggest works for any parametric model and also for some nonparametric models. The split LRT can also be used with profile likelihoods to deal with nuisance parameters, and it can also be run sequentially to yield anytimevalid pvalues and confidence sequences.
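A minimal sketch of the split LRT for a Gaussian mean with known variance: fit the alternative on one half of the data, evaluate the likelihood ratio on the held-out half, and reject when the ratio exceeds 1/alpha; Markov's inequality then gives finite-sample type I error control. The Gaussian choice and names are illustrative.

```python
# Split likelihood ratio test, sketched for H0: theta = theta0 with N(theta, 1) data.
import numpy as np
from scipy.stats import norm

def split_lrt_reject(x, theta0, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x)); half = len(x) // 2
    d0, d1 = x[idx[:half]], x[idx[half:]]
    theta_hat = d0.mean()                             # estimate on split 0
    log_ratio = (norm.logpdf(d1, theta_hat, 1).sum()
                 - norm.logpdf(d1, theta0, 1).sum())  # evaluate on split 1
    return log_ratio > np.log(1.0 / alpha)            # universal threshold

x = np.random.default_rng(1).normal(0.4, 1.0, size=200)
print(split_lrt_reject(x, theta0=0.0))  # True: the nonzero mean is detected
```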
A Multicascaded Model with Data Augmentation for Enhanced Paraphrase Detection in Short Texts ; Paraphrase detection is an important task in text analytics with numerous applications such as plagiarism detection, duplicate question identification, and enhanced customer support helpdesks. Deep models have been proposed for representing and classifying paraphrases. These models, however, require large quantities of humanlabeled data, which is expensive to obtain. In this work, we present a data augmentation strategy and a multicascaded model for improved paraphrase detection in short texts. Our data augmentation strategy considers the notions of paraphrases and nonparaphrases as binary relations over the set of texts. Subsequently, it uses graph theoretic concepts to efficiently generate additional paraphrase and nonparaphrase pairs in a sound manner. Our multicascaded model employs three supervised feature learners cascades based on CNN and LSTM networks with and without softattention. The learned features, together with handcrafted linguistic features, are then forwarded to a discriminator network for final classification. Our model is both wide and deep and provides greater robustness across clean and noisy short texts. We evaluate our approach on three benchmark datasets and show that it produces a comparable or stateoftheart performance on all three.
Machine Learning from a Continuous Viewpoint ; We present a continuous formulation of machine learning, as a problem in the calculus of variations and differentialintegral equations, in the spirit of classical numerical analysis. We demonstrate that conventional machine learning models and algorithms, such as the random feature model, the twolayer neural network model and the residual neural network model, can all be recovered in a scaled form as particular discretizations of different continuous formulations. We also present examples of new models, such as the flowbased random feature model, and new algorithms, such as the smoothed particle method and spectral method, that arise naturally from this continuous formulation. We discuss how the issues of generalization error and implicit regularization can be studied under this framework.
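As a concrete instance of one recovered model, here is a tiny random feature fit: the inner weights are random and frozen, and only the outer coefficients are estimated by least squares. The target function, the ReLU choice, and all sizes are illustrative.

```python
# Random feature model f(x) = (1/m) * sum_j a_j * relu(w_j . x), fit in a_j only.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 10, 200
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

W = rng.normal(size=(m, d))                 # random, frozen inner weights
Phi = np.maximum(X @ W.T, 0.0) / m          # feature matrix, scaled by 1/m
a = np.linalg.lstsq(Phi, y, rcond=None)[0]  # least-squares outer coefficients
print(np.mean((Phi @ a - y) ** 2))          # training mean squared error
```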
An Empirical Study of Factors Affecting LanguageIndependent Models ; Scaling existing applications and solutions to multiple human languages has traditionally proven to be difficult, mainly due to the languagedependent nature of preprocessing and feature engineering techniques employed in traditional approaches. In this work, we empirically investigate the factors affecting languageindependent models built with multilingual representations, including task type, language set and data resource. On the two most representative NLP tasks, sentence classification and sequence labeling, we show that languageindependent models can be comparable to, or even outperform, models trained using monolingual data, and that they are generally more effective on sentence classification. We experiment with languageindependent models across many different languages and show that they are more suitable for typologically similar languages. We also explore the effects of different data sizes when training and testing languageindependent models, and demonstrate that they are not only suitable for highresource languages but also very effective for lowresource languages.
On the Difference Between the Information Bottleneck and the Deep Information Bottleneck ; Combining the Information Bottleneck model with deep learning by replacing mutual information terms with deep neural nets has proved successful in areas ranging from generative modelling to interpreting deep neural networks. In this paper, we revisit the Deep Variational Information Bottleneck and the assumptions needed for its derivation. The two assumed properties of the data X, Y and their latent representation T take the form of two Markov chains T-X-Y and X-T-Y. Requiring both to hold during the optimisation process can be limiting for the set of potential joint distributions P(X,Y,T). We therefore show how to circumvent this limitation by optimising a lower bound for I(T;Y) for which only the latter Markov chain has to be satisfied. The actual mutual information consists of the lower bound, which is optimised in DVIB and cognate models in practice, and of two terms measuring how much the former requirement T-X-Y is violated. Finally, we propose to interpret the family of information bottleneck models as directed graphical models and show that in this framework the original and deep information bottlenecks are special cases of a fundamental IB model.
Clustering based Privacy Preserving of Big Data using Fuzzification and Anonymization Operation ; Big data is used by data miners for analysis purposes and may contain sensitive information. During these procedures, it raises certain privacy challenges for researchers. The existing privacy preserving methods use different algorithms that result in a limitation of data reconstruction while securing the sensitive data. This paper presents a clustering based probabilistic model for privacy preservation of big data that secures sensitive information while attaining minimum perturbation and maximum privacy. In our model, sensitive information is secured by identifying the sensitive data in the data clusters and then modifying or generalizing it. The resulting dataset is analysed to calculate the accuracy of our model in terms of hidden data and data lost as a result of reconstruction. Extensive experiments are carried out to demonstrate the results of our proposed model. Clustering based privacy preservation of individual data in big data with minimum perturbation and successful reconstruction highlights the significance of our model, in addition to the use of standard performance evaluation measures.
Field theoretic interpretations of interacting dark energy scenarios and recent observations ; Cosmological models describing the nongravitational interaction between dark matter and dark energy are based on some phenomenological choices of the interaction rates between dark matter and dark energy. There is no guiding rule to select such rates of interaction. In the present work we show that various phenomenological models of the interaction rates might have a strong field theoretical ground. We explicitly derive several well known interaction functions between dark matter and dark energy under some special conditions and finally constrain them using the latest cosmic microwave background observations from the final Planck legacy release together with baryon acoustic oscillation distance measurements. Our analyses report that one of the interaction functions is able to alleviate the H0 tension. We also perform a Bayesian evidence analysis for all the models with reference to the LambdaCDM model. Although the Bayesian evidence prefers the reference scenario over the interacting scenarios, we find that two interacting models are close to the reference LambdaCDM model.
Relational StateSpace Model for Stochastic MultiObject Systems ; Realworld dynamical systems often consist of multiple stochastic subsystems that interact with each other. Modeling and forecasting the behavior of such dynamics are generally not easy, due to the inherent hardness in understanding the complicated interactions and evolutions of their constituents. This paper introduces the relational statespace model RSSM, a sequential hierarchical latent variable model that makes use of graph neural networks GNNs to simulate the joint state transitions of multiple correlated objects. By letting GNNs cooperate with SSM, RSSM provides a flexible way to incorporate relational information into the modeling of multiobject dynamics. We further suggest augmenting the model with normalizing flows instantiated for vertexindexed random variables and propose two auxiliary contrastive objectives to facilitate the learning. The utility of RSSM is empirically evaluated on synthetic and real timeseries datasets.
Symblicit Exploration and Elimination for Probabilistic Model Checking ; Binary decision diagrams can compactly represent vast sets of states, mitigating the state space explosion problem in model checking. Probabilistic systems, however, require multiterminal diagrams storing rational numbers. They are inefficient for models with many distinct probabilities and for iterative numeric algorithms like value iteration. In this paper, we present a new symblicit approach to checking Markov chains and related probabilistic models: we first generate a decision diagram that symbolically collects all reachable states and their predecessors. We then concretise states onebyone into an explicit partial state space representation. Whenever all predecessors of a state have been concretised, we eliminate it from the explicit state space in a way that preserves all relevant probabilities and rewards. We thus keep few explicit states in memory at any time. Experiments show that very large models can be modelchecked in this way with very low memory consumption.
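The elimination step can be illustrated on a plain Markov chain, leaving aside the decision-diagram machinery: removing a state s rewires each predecessor directly to each successor with the probability mass that previously flowed through s, assuming the self-loop probability is below one. A hedged toy sketch:

```python
# Eliminate state s from a transition matrix P while preserving the
# probabilities of all paths among the remaining states.
import numpy as np

def eliminate_state(P, s):
    n = P.shape[0]
    keep = [i for i in range(n) if i != s]
    loop = P[s, s]  # assumed < 1
    Q = P.copy()
    for u in keep:
        for v in keep:
            # mass of u -> (s, looping any number of times) -> v
            Q[u, v] += Q[u, s] * Q[s, v] / (1.0 - loop)
    return Q[np.ix_(keep, keep)]

P = np.array([[0.0, 0.5, 0.5],
              [0.2, 0.3, 0.5],
              [0.0, 0.0, 1.0]])
print(eliminate_state(P, 1))  # reduced chain over {0, 2}; rows still sum to 1
```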
A universal framework for tchannel dark matter models ; We present the DMSimpt model implementation in FeynRules, which aims to offer a unique general framework allowing for all simulations relevant for simplified tchannel dark matter models at colliders and for the complementary cosmology calculations. We describe how to match nexttoleadingorder QCD fixedorder calculations with parton showers to derive robust bounds and predictions in the context of LHC dark matter searches, and moreover validate two model restrictions relevant for Dirac and Majorana fermionic dark matter respectively to exemplify how to evaluate dark matter observables to constrain the model parameter space. More importantly, we emphasise how to achieve these results by using a combination of publicly available automated tools, and discuss how dark matter predictions are sensitive to the model file and software setup. All files, together with illustrative Mathematica notebooks, are available from http://feynrules.irmp.ucl.ac.be/wiki/DMsimpt.
Fermionic CFTs and classifying algebras ; We study fermionic conformal field theories on surfaces with spin structure in the presence of boundaries, defects, and interfaces. We obtain the relevant crossing relations, taking particular care with parity signs and signs arising from the change of spin structure in different limits. We define fermionic classifying algebras for boundaries, defects, and interfaces, which allow one to read off the elementary boundary conditions, etc. As examples, we define fermionic extensions of Virasoro minimal models and give explicit solutions for the spectrum and bulk structure constants. We show how the A and Dtype fermionic Virasoro minimal models are related by a parityshift operation which we define in general. We study the boundaries, defects, and interfaces in several examples, in particular in the fermionic Ising model, i.e. the free fermion, in the fermionic tricritical Ising model, i.e. the first unitary N=1 superconformal minimal model, and in the supersymmetric LeeYang model, of which there are two distinct versions that are related by parityshift.
Improved propagation models for LTE path loss prediction in urban and suburban Ghana ; To maximize the benefits of LTE cellular networks, careful and proper planning is needed. This requires the use of accurate propagation models to quantify the path loss required for base station deployment. Deployed LTE networks in Ghana can barely meet the desired 100 Mbps throughput, leading to customer dissatisfaction. Network operators rely on transmission planning tools designed for generalized environments that come with already embedded propagation models suited to other environments. A challenge for Ghanaian transmission network planners is therefore choosing an accurate and precise propagation model that best suits the Ghanaian environment. Given this, extensive LTE path loss measurements at 800 MHz and 2600 MHz were taken in selected urban and suburban environments in Ghana and compared with six commonly used propagation models. Improved versions of the Ericsson, SUI, and ECC33 models developed in this study predict the path loss in Ghanaian environments more precisely than the commonly used propagation models.
SpatioTemporal RankedAttention Networks for Video Captioning ; Generating video descriptions automatically is a challenging task that involves a complex interplay between spatiotemporal visual features and language models. Given that videos consist of spatial framelevel features and their temporal evolutions, an effective captioning model should be able to attend to these different cues selectively. To this end, we propose a SpatioTemporal and TemporoSpatial STaTS attention model which, conditioned on the language state, hierarchically combines spatial and temporal attention to videos in two different orders i a spatiotemporal ST submodel, which first attends to regions that have temporal evolution, then temporally pools the features from these regions; and ii a temporospatial TS submodel, which first decides a single frame to attend to, then applies spatial attention within that frame. We propose a novel LSTMbased temporal ranking function, which we call ranked attention, for the ST model to capture action dynamics. Our entire framework is trained endtoend. We provide experiments on two benchmark datasets MSVD and MSRVTT. Our results demonstrate the synergy between the ST and TS modules, outperforming recent stateoftheart methods.
FASiM A Framework for Automatic Formal Analysis of Simulink Models of Linear Analog Circuits ; Simulink is a graphical environment that is widely adopted for the modeling and the Laplace transform based analysis of linear analog circuits used in signal processing architectures. However, due to the involvement of the numerical algorithms of MATLAB in the analysis process, the analysis results cannot be termed complete and accurate. Higherorderlogic theorem proving is a formal verification method that has recently been proposed to overcome these limitations for the modeling and the Laplace transform based analysis of linear analog circuits. However, the formal modeling of a system is not a straightforward task due to the lack of formal methods background of engineers working in industry. Moreover, due to the undecidable nature of higherorder logic, the analysis generally requires a significant amount of user guidance in the manual proof process. To enable industrial engineers to formally analyze linear analog circuits based on the Laplace transform, we propose a framework, FASiM, which allows automatically conducting the formal analysis of the Simulink models of linear analog circuits using the HOL Light theorem prover. For illustration, we use FASiM to formally analyze Simulink models of some commonly used linear analog filters, such as SallenKey filters.
Single headed attention based sequencetosequence model for stateoftheart results on Switchboard ; It is generally believed that direct sequencetosequence seq2seq speech recognition models are competitive with hybrid models only when a large amount of data, at least a thousand hours, is available for training. In this paper, we show that stateoftheart recognition performance can be achieved on the Switchboard300 database using a single headed attention, LSTM based model. Using a crossutterance language model, our singlepass speaker independent system reaches 6.4% and 12.5% word error rate WER on the Switchboard and CallHome subsets of Hub5'00, without a pronunciation lexicon. While careful regularization and data augmentation are crucial in achieving this level of performance, experiments on Switchboard2000 show that nothing is more useful than more data. Overall, the combination of various regularizations and a simple but fairly large model results in a new state of the art, 4.7% and 7.8% WER on the Switchboard and CallHome sets, using SWB2000 without any external data resources.
Accelerated Expansion of the Universe in the Model with NonUniform Pressure ; We present a particular case of the Stephani solution for a shearfree perfect fluid with uniform energy density and nonuniform pressure. Such models appeared as a possible alternative to the consideration of exotic forms of matter, like dark energy, that would cause the acceleration of the universe expansion. These models are characterised by a spatial curvature depending on time. We analyze the properties of the cosmological model obtained on the basis of an exact solution of the Stephani class and adapt it to the recent observational data. The spatial geometry of the model is investigated. We show that despite possible singularities, the model can describe the current stage of the universe evolution.
Kingman's model with random mutation probabilities convergence and condensation II ; Kingman's model describes the evolution of a onelocus haploid population of infinite size and discrete generations under the competition of selection and mutation. A random generalisation was made in a previous paper which assumes all mutation probabilities to be i.i.d. The weak convergence of fitness distributions to a globally stable equilibrium for any initial distribution was proved there. A condensation occurs if, almost surely, a positive proportion of the population travels to and condensates on the largest fitness value due to the dominance of selection over mutation. A criterion of condensation was given which relies on the equilibrium, whose explicit expression is however unknown. This paper tackles these problems based on the discovery of a matrix representation of the random model. An explicit expression of the equilibrium is obtained and the key quantity in the condensation criterion can be estimated. Moreover we examine how the design of randomness in Kingman's model affects the fitness level of the equilibrium by comparisons between different models. The discovered facts are conjectured to hold in other more sophisticated models.
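For intuition, the deterministic counterpart of Kingman's recursion is easy to iterate on a finite fitness grid; condensation corresponds to mass piling up at the largest fitness value when selection dominates mutation. The parameters below are illustrative, and the random model replaces the fixed mutation probability by i.i.d. draws.

```python
# One generation of Kingman's model: selection reweights by fitness x with
# probability 1 - beta; mutation redraws fitness from q with probability beta.
import numpy as np

def kingman_step(p, x, q, beta):
    return (1.0 - beta) * p * x / np.dot(p, x) + beta * q

x = np.linspace(0.01, 1.0, 100)   # fitness grid
q = np.full(100, 1.0 / 100)       # mutant fitness distribution
p = q.copy()
for _ in range(2000):
    p = kingman_step(p, x, q, beta=0.02)
print(p[-1], p[:-1].max())        # mass concentrating near the top fitness
```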
Improvement Studies of an Effective Interaction for NZ sdshell Nuclei by Neural Networks ; The nuclear shell model is one of the most successful models in the theoretical understanding of nuclear structure. If a convenient effective interaction between nucleons can be found, various observables such as energies of nuclear states are accurately predicted by this model. The basic requirements for shell model calculations are a set of single particle energies and the twobody interaction matrix elements TBME which construct the residual interaction between nucleons. The latter can be parameterized in different ways. In this study, we have used a different approach to improve existing USD type Hamiltonians for shell model calculations of NZ nuclei in the A = 16-40 region. After obtaining the new SDNN effective interaction, shell model calculations have been performed for all NZ nuclei in the sd shell, in which the doubly magic nucleus 16O is assumed as an inert core and the active particles are distributed over the d52, s12 and d32 single particle orbits. The rms deviations from experimental energy values are lower for the newly generated effective interaction than those obtained using the original one for the studied nuclei.
Deformationaware Unpaired Image Translation for Pose Estimation on Laboratory Animals ; Our goal is to capture the pose of neuroscience model organisms, without using any manual supervision, to be able to study how neural circuits orchestrate behaviour. Human pose estimation attains remarkable accuracy when trained on real or simulated datasets consisting of millions of frames. However, for many applications simulated models are unrealistic and real training datasets with comprehensive annotations do not exist. We address this problem with a new sim2real domain transfer method. Our key contribution is the explicit and independent modeling of appearance, shape and poses in an unpaired image translation framework. Our model lets us train a pose estimator on the target domain by transferring readily available body keypoint locations from the source domain to generated target images. We compare our approach with existing domain transfer methods and demonstrate improved pose estimation accuracy on Drosophila melanogaster fruit fly, Caenorhabditis elegans worm and Danio rerio zebrafish, without requiring any manual annotation on the target domain and despite using simplistic offtheshelf animal characters for simulation, or simple geometric shapes as models. Our new datasets, code, and trained models will be published to support future neuroscientific studies.
A comparative study of 0nubetabeta decay in symmetric and asymmetric leftright model ; We study the new physics contributions to neutrinoless double beta decay 0nubetabeta in a TeV scale leftright model with a spontaneous Dparity breaking mechanism, where the values of the SU2L and SU2R gauge couplings, gL and gR, are unequal. Neutrino mass is generated in the model via a gauge extended inverse seesaw mechanism. We embed the model in a nonsupersymmetric SO10 GUT with the purpose of quantifying the results due to the condition gL neq gR. We compare the predicted numerical values of the half life of 0nubetabeta decay, the effective Majorana mass parameter and other lepton number violating parameters for three different cases: i the manifest leftright symmetric model with gL = gR, ii the leftright model with spontaneous Dparity breaking and gL neq gR, iii the PatiSalam symmetry with Dparity breaking and gL neq gR. We show how different contributions to 0nubetabeta decay are suppressed or enhanced depending upon the values of the ratio gR/gL that are predicted from successful gauge coupling unification.