Predicting the Spin Seebeck Voltage in Spin-Polarized Materials: A Quantum Mechanical Transport Model Approach ; The spin Seebeck effect has recently been demonstrated as a viable method of direct energy conversion that has the potential to outperform energy conversion from the conventional Seebeck effect. In this study, a computational transport model is developed and validated that predicts the spin Seebeck voltage in spin-polarized materials using material parameters obtained from first-principles ground-state density functional calculations. The transport model is based on a 1D effective-mass description coupled with a microscopic inverse spin Hall relationship. The model can predict both the spin current and the voltage generated in a nonmagnetic material placed on top of a ferromagnetic material in a transverse spin Seebeck configuration. The model is validated and verified with available experimental data for La:YIG. Future applications of this model include the high-throughput exploration of new spin-based thermoelectric materials.
Bit-Vector Model Counting using Statistical Estimation ; Approximate model counting for bit-vector SMT formulas (generalizing #SAT) has many applications such as probabilistic inference and quantitative information-flow security, but it is computationally difficult. Adding random parity constraints (XOR streamlining) and then checking satisfiability is an effective approximation technique, but it requires a prior hypothesis about the model count to produce useful results. We propose an approach inspired by statistical estimation to continually refine a probabilistic estimate of the model count for a formula, so that each XOR-streamlined query yields as much information as possible. We implement this approach, with an approximate probability model, as a wrapper around an off-the-shelf SMT solver or SAT solver. Experimental results show that the implementation is faster than the most similar previous approaches, which used simpler refinement strategies. The technique also lets us model-count formulas over floating-point constraints, which we demonstrate with an application to a vulnerability in differential privacy mechanisms.
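As a concrete illustration of XOR streamlining, the sketch below adds random parity constraints to a bit-vector query using the z3-solver Python bindings. The formula, bit width, and number of constraints are hypothetical placeholders rather than the paper's benchmarks; the actual tool wraps such queries in a statistical refinement loop.

```python
# Hypothetical sketch: XOR streamlining of a bit-vector formula with z3.
import random
from functools import reduce
from z3 import BitVec, BitVecVal, BoolVal, Extract, Solver, sat

x = BitVec("x", 16)                      # hypothetical 16-bit unknown
formula = ((x * x) & 0xFF) == 0x40       # hypothetical formula to count models of

def random_parity_constraint(var, width):
    # XOR of a random subset of bits must equal a random bit;
    # each such constraint halves the model count in expectation.
    bits = [Extract(i, i, var) for i in range(width) if random.random() < 0.5]
    if not bits:
        return BoolVal(random.getrandbits(1) == 0)
    parity = reduce(lambda a, b: a ^ b, bits)
    return parity == BitVecVal(random.getrandbits(1), 1)

s = Solver()
s.add(formula)
for _ in range(4):                       # k constraints ~ divide count by 2^k
    s.add(random_parity_constraint(x, 16))
print("satisfiable after streamlining:", s.check() == sat)
```

Repeating such queries for varying numbers of constraints, and updating a probability model over the count after each satisfiable/unsatisfiable outcome, is the refinement loop the abstract describes.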
Graphic Enumerations and Discrete Painlevé Equations via Random Matrix Models ; We revisit the enumeration problems of random discrete surfaces (RDS) based on solutions of the discrete equations derived from matrix models. For RDS made of squares, the recursive coefficients of orthogonal polynomials associated with the quartic matrix model satisfy the discrete type I Painlevé equation. Through the use of generating-function techniques, we show that the planar contribution to the free energy is controlled by the Catalan numbers. We also develop a new systematic scheme for calculating higher-genus contributions to the topological expansion of the free energy of matrix models. Importantly, our exact solutions are valid for finite-$N$ matrix models, and no continuum limits are taken within our approach. To show the advantages of our approach, we provide new results for the topological expansion of the free energy of the finite-$N$ cubic matrix model.
Unstable modes in projection-based reduced-order models: How many can there be, and what do they tell you? ; Projection methods provide an appealing way to construct reduced-order models of large-scale linear dynamical systems: they are intuitively motivated and fairly easy to compute. Unfortunately, the resulting reduced models need not inherit the stability of the original system. How many unstable modes can these reduced models have? This note investigates this question, using theory originally motivated by iterative methods for linear algebraic systems and eigenvalue problems, and illustrating the theory with a number of small examples. From these results follow rigorous upper bounds on the number of unstable modes in reduced models generated via orthogonal projection, for both continuous- and discrete-time systems. Can anything be learned from the unstable modes in reduced-order models? Several examples illustrate how such instability can helpfully signal transient growth in the original system.
The risk model with stochastic premiums, dependence and a threshold dividend strategy ; The paper deals with a generalization of the risk model with stochastic premiums in which the dependence structures between claim sizes and inter-claim times, as well as between premium sizes and inter-premium times, are modeled by Farlie-Gumbel-Morgenstern copulas. In addition, dividends are paid to shareholders according to a threshold dividend strategy. We derive integral and integro-differential equations for the Gerber-Shiu function and the expected discounted dividend payments until ruin. Next, we concentrate on a detailed investigation of the model in the case of exponentially distributed claim and premium sizes. In particular, we find explicit formulas for the ruin probability in the model without either dividend payments or dependence, as well as for the expected discounted dividend payments in the model without dependence. Finally, numerical illustrations are presented.
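For reference, the Farlie-Gumbel-Morgenstern family used here to couple sizes with waiting times has the standard closed form, with $\theta$ controlling the strength of dependence:

$$C_\theta(u, v) = u\,v\,\bigl[1 + \theta\,(1-u)(1-v)\bigr], \qquad \theta \in [-1, 1],$$

so $\theta = 0$ recovers independence, which is why the dependence-free formulas mentioned in the abstract arise as the special case $\theta = 0$.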
Graphical Structure of Hadronization and Factorization in Hard Collisions ; Models of hadronization of hard jets in QCD are often presented in terms of Feynman-graph structures that can be thought of as effective-field-theory approximations to dynamical nonperturbative physics in QCD. Such models can be formulated as a kind of multiperipheral model. We obtain general constraints on such models in order for them to be self-consistent, and we relate the constraints to the space-time structure of hadronization. We show that appropriate models can be considered as implementing string-like hadronization. When the models are put in a multiperipheral form, the effective vertices and/or lines must be momentum non-conserving: they take 4-momentum from the external string-like field.
Computer modeling of properties of Kaluza-Klein particles and their searches at the LHC ; The problems of the Standard Model lead to new theories of extra dimensions: the Randall-Sundrum model, the Arkani-Hamed-Dimopoulos-Dvali model and the TeV$^{-1}$ model. In the framework of these models, the production cross sections for Kaluza-Klein particles at various LHC energies were calculated with the help of the computer program Pythia 8.2. The generation of monojet events from scalar graviton emission was considered for numbers of extra dimensions $n = 2, 4, 6$ at an LHC energy of 14 TeV. The graviton production processes through gluon-gluon, quark-gluon and quark-quark fusion are also studied, and some periodicity is found in the behavior of the graviton mass spectrum. Within the Randall-Sundrum scenario, $\sigma \times \mathrm{Br}$ was calculated for the production process of a massive graviton, $gg \rightarrow G$, together with the most probable graviton decay modes, at 13 TeV, 14 TeV and 100 TeV.
A Stochastic Singular Vector Based MIMO Channel Model for MAC Layer Tracking ; A novel stochastic technique is presented to directly model the singular vectors and singular values of a multiple-input multiple-output (MIMO) channel. Thus the components, modeled directly in the eigendomain, can be adapted to exhibit realistic physical-domain behavior when assembled. The model exploits natural paths of eigenmodes, such that a simple Doppler filter generator process can be used. Furthermore, it is possible to directly manipulate the singular-vector dynamics in such a way that an unrealistic stress channel can be modeled in the eigendomain. This is particularly useful for testing the eigenmode channel-tracking ability internal to a communication device such as a modem, where impairments in tracking will cause interference between eigenmodes. The model can also facilitate mode-tracking testing, as it directly produces tracked, untangled eigenmodes, providing the narrowest possible singular-vector Doppler spectra and consequently the lowest required update rates for each eigenmode. The singular-vector-based model targets testing of the eigendomain functionality of MIMO modems/devices, an apparatus focus, without the need for including the decomposition stages.
Adiabatic expansion of polytropic universe with varying cosmological constant: Models tested with observational data ; We use two large collections of observational data of type Ia supernovae (SNe Ia) to investigate the polytropic Universe, including the situation with a varying cosmological constant; details of our new derivations are presented. We examine the fit of our new models of a polytropic Universe to two sets of SNe Ia data to test whether they are better descriptions than the current standard model of cosmology. Beginning with the established relationships for polytropic matter, we derive new equations describing the influence of polytropic matter on the expanding Universe, including the situation with a varying cosmological constant. When the models derived here are tested with large sets of SNe Ia data, we find a significant influence of polytropic matter on the state of our expanding Universe. We find that one of our models with a varying $\Lambda$ describes the SNe Ia data significantly better than the standard $\Lambda$CDM model.
Discrete Weibull generalised additive model: an application to count fertility data ; Fertility plans, measured by the number of planned children, have been found to be affected by education and family background via complex tail dependencies. This challenge was previously met with the use of nonparametric jittering approaches. This paper shows how a novel generalised additive model based on a discrete Weibull distribution provides partial effects of the covariates on fertility plans which are comparable to jittering, without the inherent drawback of crossing conditional quantiles. The model has some additional desirable features: both over- and underdispersed data can be modelled by this distribution, the conditional quantiles have a simple analytic form, and the likelihood is the same as that of a continuous Weibull distribution with interval-censored data. The latter means that efficient implementations are already available, in the R package gamlss, for a range of models and inferential procedures, and at a fraction of the time compared to the jittering and COM-Poisson approaches, showing potential for the wide applicability of this approach to the modelling of count data.
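For context, the (type I) discrete Weibull distribution referred to here is the discretisation of the continuous Weibull obtained by evaluating its survival function at the integers, which is exactly why interval-censored continuous-Weibull machinery applies:

$$\mathbb{P}(Y = y) = q^{y^\beta} - q^{(y+1)^\beta}, \qquad y = 0, 1, 2, \ldots, \quad 0 < q < 1,\ \beta > 0,$$

with survival function $\mathbb{P}(Y \geq y) = q^{y^\beta}$; inverting the survival function gives the simple analytic conditional quantiles mentioned in the abstract.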
Automatised ILC Bounds on Dark Matter Models with CheckMATE ; The public collider phenomenology computing tool CheckMATE (Check Models at Terascale Energies) was originally designed to allow theorists to quickly test their favourite BSM models against various existing LHC analyses performed by ATLAS and CMS. It offers an automatised chain of Monte Carlo event generation, detector simulation, event analysis and statistical evaluation, so that it can automatically determine whether a given parameter point of a BSM model is excluded or not. Currently, it contains more than 50 individual ATLAS or CMS analyses whose several hundred signal regions target various final states as they typically appear in theories beyond the Standard Model. In this study, we extend this functionality to allow sensitivity studies for the International Linear Collider. As an example, we implement a dark matter monophoton search and use it to analyse three benchmark scenarios with different assumptions about the interaction between dark matter and Standard Model particles. We determine the ILC sensitivity expected for $\sqrt{s} = 500$ GeV, $L = 500\ \mathrm{fb}^{-1}$, and compare the results for the cases of completely unpolarised beams and for individual lepton polarisation settings.
Hopf bifurcation in a conceptual climate model with ice-albedo and precipitation-temperature feedbacks ; In this paper we analyse a dynamical system based on the so-called KCG (Källén, Crafoord, Ghil) conceptual climate model. This model describes the evolution of the globally averaged temperature and the average extent of the ice sheets. In nondimensional form the model admits several simplifications facilitating the subsequent analysis. We consider the limiting case of a stationary snow line, for which the phase plane can be completely analysed and the type of each stationary point can be determined. One of them can exhibit a Hopf bifurcation, for whose occurrence we find sufficient conditions. Those, in turn, have a straightforward physical meaning and indicate that the model predicts internal oscillations of the climate. Using typical values of the model parameters, we conclude that the obtained results are in the same ballpark as the conditions on our planet during the Quaternary ice ages. Our analysis is a rigorous justification of a generalization of some previous results by KCG and other authors.
Finite element model updating for structural applications ; A novel method for performing model updating on finite element models is presented. The approach is particularly tailored to modal analyses of buildings, in which the lowest frequencies, obtained by using sensors and system identification approaches, need to be matched to the numerical ones predicted by the model. This is done by optimizing some unknown material parameters, such as the mass density and Young's modulus of the materials, and/or the boundary conditions, which are often known only approximately. In particular, this is the case when considering historical buildings. The straightforward application of a general-purpose optimizer can be impractical, given the large size of the models involved. In this paper, we show that, by slightly modifying the projection scheme used to compute the eigenvalues at the lowest end of the spectrum, one can obtain local parametric reduced-order models that, embedded in a trust-region scheme, form the basis for a reliable and efficient specialized algorithm. We describe an optimization strategy based on this approach, and we provide numerical experiments that confirm its effectiveness and accuracy.
EMME: a formal tool for ECMAScript Memory Model Evaluation ; Nearly all web-based interfaces are written in JavaScript. Given its prevalence, support for high-performance JavaScript code is crucial. The ECMA Technical Committee 39 (TC39) has recently extended the ECMAScript language (i.e., JavaScript) to support shared-memory accesses between different threads. The extension is given in terms of a natural-language memory model specification. In this paper we describe a formal approach for validating both the memory model and its implementations in various JavaScript engines. We first introduce a formal version of the memory model and report results on checking the model for consistency and other properties. We then introduce our tool, EMME, built on top of the Alloy analyzer, which leverages the model to generate all possible valid executions of a given JavaScript program. Finally, we report results using EMME together with small test programs to analyze industrial JavaScript engines. We show that EMME can find bugs as well as missed opportunities for optimization.
Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography ; Electron Cryo-Tomography (ECT) enables 3D visualization of macromolecule structure inside single cells. Macromolecule classification approaches based on convolutional neural networks (CNN) were developed to systematically separate millions of macromolecules captured by ECT. However, given the fast accumulation of ECT data, it will soon become necessary to use CNN models to efficiently and accurately separate substantially more macromolecules at the prediction stage, which requires additional computational cost. To speed up the prediction, we compress classification models into compact neural networks with little loss in accuracy for deployment. Specifically, we propose to perform model compression through knowledge distillation. First, a complex teacher network is trained to generate soft labels with better classification feasibility; then customized student networks with simple architectures are trained using the soft labels to compress model complexity. Our tests demonstrate that our compressed models significantly reduce the number of parameters and the time cost while maintaining similar classification accuracy.
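The distillation step can be summarized by a short loss function. The sketch below, in PyTorch, is a generic soft-label distillation objective; the temperature, mixing weight, and network architectures are illustrative assumptions rather than the paper's settings.

```python
# Minimal knowledge-distillation loss sketch (PyTorch); hyperparameters
# (temperature, alpha) are assumed values for illustration.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Soft targets: the teacher's distribution, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        soft_targets, reduction="batchmean",
    ) * temperature ** 2                 # standard T^2 gradient rescaling
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```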
Analyzing Uncertainty in Neural Machine Translation ; Machine translation is a popular test bed for research in neural sequence-to-sequence models, but despite much recent research, there is still a lack of understanding of these models. Practitioners report performance degradation with large beams, the under-estimation of rare words, and a lack of diversity in the final translations. Our study relates some of these issues to the inherent uncertainty of the task, due to the existence of multiple valid translations for a single source sentence, and to the extrinsic uncertainty caused by noisy training data. We propose tools and metrics to assess how uncertainty in the data is captured by the model distribution and how it affects search strategies that generate translations. Our results show that search works remarkably well, but that models tend to spread too much probability mass over the hypothesis space. Next, we propose tools to assess model calibration and show how to easily fix some shortcomings of current models. As part of this study, we release multiple human reference translations for two popular benchmarks.
Hourly-Similarity Based Solar Forecasting Using Multi-Model Machine Learning Blending ; With the increasing penetration of solar power into power systems, forecasting becomes critical in power system operations. In this paper, an hourly-similarity (HS) based method is developed for 1-hour-ahead (1HA) global horizontal irradiance (GHI) forecasting. The developed method utilizes diurnal patterns, statistical distinctions between different hours, and hourly similarities in solar data to improve forecasting accuracy. The HS-based method is built by training multiple two-layer multi-model forecasting framework (MMFF) models independently on same-hour subsets. The final optimal model is a combination of MMFF models with the best-performing blending algorithm at every hour. At the forecasting stage, the most suitable model is selected to perform the forecasting subtask of a certain hour. The HS-based method is validated on one year of data with six solar features collected by the National Renewable Energy Laboratory (NREL). Results show that the HS-based method outperforms the non-HS (all-in-one) method significantly with the same MMFF architecture, wherein the optimal HS-based method outperforms the best all-in-one method by 10.94% and 7.74% based on the normalized mean absolute error and normalized root mean square error, respectively.
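A minimal sketch of the hour-partitioned training/dispatch idea is given below in Python with scikit-learn; the feature names, the single regressor standing in for the MMFF blend, and the data layout are hypothetical.

```python
# Hypothetical sketch: one model per hour-of-day, dispatched at forecast time.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def fit_hourly_models(df: pd.DataFrame, features, target="ghi_next_hour"):
    # Train an independent model on each same-hour subset of the data.
    models = {}
    for hour, subset in df.groupby(df["timestamp"].dt.hour):
        model = GradientBoostingRegressor()   # stand-in for the MMFF blend
        model.fit(subset[features], subset[target])
        models[hour] = model
    return models

def forecast_1ha(models, row: pd.Series, features):
    # Select the model matching the hour of the observation being forecast.
    return models[row["timestamp"].hour].predict([row[features].to_list()])[0]
```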
Variational Message Passing with Structured Inference Networks ; Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model into the inference network of variational autoencoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message-passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.
Benchmarks for cyber-physical systems: A modular model library for building automation systems (extended version) ; Building Automation Systems (BAS) are exemplars of Cyber-Physical Systems (CPS), incorporating digital control architectures over underlying continuous physical processes. We provide a modular model library for BAS drawn from expertise developed on a real BAS setup. The library allows building models comprising either physical quantities or digital control modules, which are composable. The structure, operation, and dynamics of the model can be complex, incorporating (i) stochasticity, (ii) nonlinearities, (iii) numerous continuous variables or discrete states, (iv) various input and output signals, and (v) a large number of possible discrete configurations. The modular composition of BAS components can generate useful CPS benchmarks. We display this use by means of three realistic case studies, where corresponding models are built and engaged with different analysis goals. The benchmarks, the model library and data collected from the BAS setup at the University of Oxford are kept online at https://github.com/natchi92/BASBenchmarks.
Holographic meson decays via worldsheet instantons ; We study meson decays using instanton methods in two string models. The first is the old string model in flat space, which combines strings and massive particles, and the second is the holographic Sakai-Sugimoto model. Using the old string model, we reproduce the QCD formula for the probability of splitting of the QCD flux tube derived by Casher-Neuberger-Nussinov (CNN). In the holographic model we construct a string worldsheet instanton which interpolates between a single- and a double-string configuration, and which determines the decay from one to two dual mesonic particles. The resulting probability for meson decay incorporates both the effects of finite meson size and the backreaction of the produced quarks on the QCD flux tube. In the limit of very large strings, the probability for a split reduces to the CNN formula. A by-product of our work is the analysis of the moduli space of a generic double concentric Wilson loop with circles separated in the holographic direction of the confining background.
A New Result on the Complexity of Heuristic Estimates for the A* Algorithm ; Relaxed models are abstract problem descriptions generated by ignoring constraints that are present in base-level problems. They play an important role in planning and search algorithms, as it has been shown that the length of an optimal solution to a relaxed model yields a monotone heuristic for an A* search of a base-level problem. Optimal solutions to a relaxed model may be computed algorithmically or by search in a further relaxed model, leading to a search that explores a hierarchy of relaxed models. In this paper, we review the traditional definition of problem relaxation and show that searching in the abstraction hierarchy created by problem relaxation will not reduce the computational effort required to find optimal solutions to the base-level problem, unless the relaxed problem found in the hierarchy can be transformed by some optimization (e.g., subproblem factoring). Specifically, we prove that any A* search of the base level using a heuristic h2 will largely dominate an A* search of the base level using a heuristic h1, if h1 must be computed by an A* search of the relaxed model using h2.
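For orientation, the classical dominance result that this theorem sharpens can be stated as follows (a textbook formulation, not the paper's exact statement):

$$h_1(n) \le h_2(n) \le h^*(n)\ \text{ for all } n \quad\Longrightarrow\quad \{\text{nodes expanded by A* with } h_2\} \subseteq \{\text{nodes expanded by A* with } h_1\},$$

up to tie-breaking among nodes of equal $f$-value; the paper's point is that paying for $h_1$ via a nested A* search in the relaxed model erases any such advantage.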
Exploring the use of time-varying graphs for modelling transit networks ; The study of the dynamic relationship between the topological structure of a transit network and the mobility patterns of transit vehicles on this network is critical towards devising smart and time-aware solutions to transit management and recommendation systems. This paper proposes a time-varying graph (TVG) to model this relationship. The effectiveness of the proposed model has been explored by implementing the model in the Neo4j graph database using transit feeds generated by the bus transit network of the City of Moncton, New Brunswick, Canada. Dynamics in this relationship have also been detected using network metrics such as temporal shortest paths, degree, betweenness and PageRank centralities, as well as temporal network diameter and density. Keywords: Transit Networks, Mobility Pattern, Time-Varying Graph model, Graph Database and Graph Analytics
Calculating normal tissue complication probabilities and probabilities of complication-free tumour control from stochastic models of population dynamics ; We use a stochastic birth-death model for a population of cells to estimate the normal tissue complication probability (NTCP) under a particular radiotherapy protocol. We specifically allow for interaction between cells, via a nonlinear logistic growth model. To capture some of the effects of intrinsic noise in the population we develop several approximations of NTCP, using Kramers-Moyal expansion techniques. These approaches provide an approximation to the first and second moments of a general first-passage time problem in the limit of large, but finite, populations. We use this method to study NTCP in a simple model of normal cells and in a model of normal and damaged cells. We also study a combined model of normal tissue cells and tumour cells. Based on existing methods to calculate tumour control probabilities, and our procedure to approximate NTCP, we estimate the probability of complication-free tumour control.
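To make the first-passage quantity concrete, the sketch below simulates a logistic birth-death process exactly (Gillespie algorithm) and records the time at which the cell population first falls below a complication threshold. All rates, the carrying capacity, and the threshold are illustrative assumptions, not the paper's fitted values; the paper's contribution is precisely to approximate these moments analytically via Kramers-Moyal expansions rather than by simulation.

```python
# Illustrative Gillespie simulation of a logistic birth-death process,
# estimating first-passage times below a "complication" threshold.
# All parameter values are assumptions for demonstration only.
import numpy as np

def first_passage_time(n0=1000, K=1000, birth=0.10, death=0.12,
                       threshold=500, t_max=1e5, rng=None):
    rng = rng or np.random.default_rng()
    n, t = n0, 0.0
    while n > threshold and t < t_max:
        b = birth * n * max(0.0, 1.0 - n / K)   # logistic birth rate
        d = death * n                            # linear death rate
        t += rng.exponential(1.0 / (b + d))      # time to next event
        n += 1 if rng.random() < b / (b + d) else -1
    return t

times = [first_passage_time() for _ in range(200)]
print("first-passage mean/std:", np.mean(times), np.std(times))
```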
Resting and Traveling Localized States in an Active Phase-Field-Crystal Model ; The conserved Swift-Hohenberg equation, or Phase-Field-Crystal (PFC) model, provides a simple microscopic description of the thermodynamic transition between fluid and crystalline states. Combining it with elements of the Toner-Tu theory for self-propelled particles, Menzel and Löwen [Phys. Rev. Lett. 110, 055702 (2013)] obtained a model for crystallization (swarm formation) in active systems. Here, we study the occurrence of resting and traveling localized states, i.e., crystalline clusters, within the resulting active PFC model. Based on linear stability analyses and numerical continuation of the fully nonlinear states, we present a detailed analysis of the bifurcation structure of periodic and localized, resting and traveling states in a one-dimensional active PFC model. This allows us, for instance, to explore how the slanted homoclinic snaking of steady localized states found for the passive PFC model is amended by activity. A particular focus lies on the onset of motion, where we show that it occurs either through a drift-pitchfork or a drift-transcritical bifurcation. A corresponding general analytical criterion is derived.
The Importance of Constraint Smoothness for Parameter Estimation in Computational Cognitive Modeling ; Psychiatric neuroscience is increasingly aware of the need to define psychopathology in terms of abnormal neural computation. The central tool in this endeavour is the fitting of computational models to behavioural data. The most prominent example of this procedure is fitting reinforcement learning (RL) models to decision-making data collected from mentally ill and healthy subject populations. These models are generative models of the decision-making data themselves, and the parameters we seek to infer can be psychologically and neurobiologically meaningful. Currently, the gold-standard approach to this inference procedure involves Monte Carlo sampling, which is robust but computationally intensive, rendering additional procedures such as cross-validation impractical. Searching for point estimates of model parameters using optimization procedures remains a popular and interesting option. On a novel testbed simulating parameter estimation from a common RL task, we investigated the effects of smooth vs. boundary constraints on parameter estimation using interior-point and deterministic direct-search algorithms for optimization. Ultimately, we show that the use of boundary constraints can lead to substantial truncation effects. Our results discourage the use of boundary constraints for these applications.
Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++ ; Manually labeling datasets with object masks is extremely time-consuming. In this work, we follow the idea of Polygon-RNN to produce polygonal annotations of objects interactively using humans-in-the-loop. We introduce several important improvements to the model: 1) we design a new CNN encoder architecture, 2) we show how to effectively train the model with Reinforcement Learning, and 3) we significantly increase the output resolution using a Graph Neural Network, allowing the model to accurately annotate high-resolution objects in images. Extensive evaluation on the Cityscapes dataset shows that our model, which we refer to as Polygon-RNN++, significantly outperforms the original model in both automatic (10% absolute and 16% relative improvement in mean IoU) and interactive modes (requiring 50% fewer clicks by annotators). We further analyze the cross-domain scenario in which our model is trained on one dataset and used out of the box on datasets from varying domains. The results show that Polygon-RNN++ exhibits powerful generalization capabilities, achieving significant improvements over existing pixel-wise methods. Using simple online fine-tuning we further achieve a high reduction in annotation time for new datasets, moving a step closer towards an interactive annotation tool to be used in practice.
Weakly nonlinear analysis for a car-following model with consideration of cooperation and time delays ; In traffic systems, cooperative driving has attracted researchers' attention. Many works attempt to understand the effects of cooperative driving behavior and/or time delays on traffic flow dynamics for specific traffic flow models. This paper is a new attempt to carry out linear stability and weakly nonlinear analyses for a general car-following model with consideration of cooperation and time delays. We derive the linear stability condition and study how combinations of cooperation and time delays affect the stability of traffic flow. The Burgers equation and the Korteweg-de Vries (KdV) equation for the car-following model considering cooperation and time delays are derived, and their solitary wave solutions and constraint conditions are obtained. We investigate the properties of the cooperative optimal velocity (OV) model, which estimates how combinations of cooperation and time delays affect the evolution of traffic waves, using both analytic and numerical methods. The results indicate that the effects of delays and cooperation are model-dependent, and that cooperative behavior can inhibit the stabilization of traffic flow. Moreover, delays in sensing relative motion easily trigger traffic waves, while delays in sensing the host vehicle help relieve the instability to a certain extent.
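As a reference point, the classical optimal velocity model that the cooperative, delayed model generalizes reads (standard form; the paper's general model subsumes variants of this type):

$$\frac{d^2 x_n(t)}{dt^2} = a\left[V\bigl(\Delta x_n(t-\tau)\bigr) - \frac{d x_n(t)}{dt}\right], \qquad \Delta x_n = x_{n+1} - x_n,$$

where $a$ is the driver sensitivity, $\tau$ a sensing delay, and $V(\cdot)$ the optimal velocity function; cooperation enters by letting the argument be a weighted combination of the headways of several neighboring vehicles.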
Commuting-projector Hamiltonians for chiral topological phases built from parafermions ; We introduce a family of commuting-projector Hamiltonians whose degrees of freedom involve $\mathbb{Z}_3$ parafermion zero modes residing in a parent fractional-quantum-Hall fluid. The two simplest models in this family emerge from dressing Ising-paramagnet and toric-code spin models with parafermions; we study their edge properties, anyonic excitations, and ground-state degeneracy. We show that the first model realizes a symmetry-enriched topological phase (SET) for which $\mathbb{Z}_2$ spin-flip symmetry from the Ising paramagnet permutes the anyons. Interestingly, the interface between this SET and the parent quantum-Hall phase realizes symmetry-enforced $\mathbb{Z}_3$ parafermion criticality with no fine-tuning required. The second model exhibits a non-Abelian phase that is consistent with $\mathrm{SU}(2)_4$ topological order, and can be accessed by gauging the $\mathbb{Z}_2$ symmetry in the SET. Employing Levin-Wen string-net models with $\mathbb{Z}_2$-graded structure, we generalize this picture to construct a large class of commuting-projector models for $\mathbb{Z}_2$ SETs and non-Abelian topological orders exhibiting the same relation. Our construction provides the first commuting-projector-Hamiltonian realization of chiral bosonic non-Abelian topological order.
Liver Lesion Detection from Weakly-labeled Multi-phase CT Volumes with a Grouped Single Shot MultiBox Detector ; We present a focal liver lesion detection model leveraging custom-designed multi-phase computed tomography (CT) volumes, which reflects real-world clinical lesion detection practice, using a Single Shot MultiBox Detector (SSD). We show that grouped convolutions effectively harness richer information from the multi-phase data for the object detection model, while a naive application of SSD suffers from a generalization gap. We trained and evaluated the modified SSD model and recently proposed variants on our CT dataset of 64 subjects by five-fold cross-validation. Our model achieved a 53.3% average precision score and ran in under three seconds per volume, outperforming the original model and state-of-the-art variants. The results show that the one-stage object detection model is a practical solution that runs in near real-time and can learn an unbiased feature representation from a large-volume real-world detection dataset, which requires less tedious and time-consuming construction of weak phase-level bounding-box labels.
Optimizing Execution of Dynamic Goal-Directed Robot Movements with Learning Control ; Highly dynamic tasks that require large accelerations and precise tracking usually rely on accurate models and/or high-gain feedback. While kinematic optimization allows for efficient representation and online generation of hitting trajectories, learning to track such dynamic movements with inaccurate models remains an open problem. In particular, stability issues surrounding the learning performance, in the iteration domain, can prevent the successful implementation of model-based learning approaches. To achieve accurate tracking for such tasks in a stable and efficient way, we propose a new adaptive Iterative Learning Control (ILC) algorithm that is implemented efficiently using a recursive approach. Moreover, covariance estimates of the model matrices are used to exercise caution during learning. We evaluate the performance of the proposed approach in extensive simulations and on our robotic table tennis platform, where we show how the striking performance of two seven-degree-of-freedom anthropomorphic robot arms can be optimized. Our implementation on the table tennis platform compares favorably with high-gain PD control, model-free ILC (simple PD feedback type) and model-based ILC without cautious adaptation.
Model-based Clustering ; Mixture models extend the toolbox of clustering methods available to the data analyst. They allow for an explicit definition of the cluster shapes and structure within a probabilistic framework and exploit estimation and inference techniques available for statistical models in general. In this chapter an introduction to cluster analysis is provided, model-based clustering is related to standard heuristic clustering methods, and an overview of different ways to specify the cluster model is given. Post-processing methods to determine a suitable clustering, infer cluster distribution characteristics and validate the cluster solution are discussed. The versatility of the model-based clustering approach is illustrated by giving an overview of the different areas of application.
Analysis of Probabilistic and Parametric Reduced Order Models ; Stochastic models share many characteristics with generic parametric models, and in some ways they can be regarded as a special case. But for stochastic models there is a notion of weak distribution or generalised random variable, and the same arguments can be used to analyse parametric models. Such models in vector spaces are connected to a linear map, and in infinite-dimensional spaces this is a true generalisation. Reproducing kernel Hilbert spaces and affine linear representations in terms of tensor products are directly related to this linear operator. This linear map leads to a generalised correlation operator, and representations are connected with factorisations of the correlation operator. To make this point of view as simple as possible, the fitting counterpart in the stochastic domain is algebras of random variables with a distinguished linear functional, the state, which is interpreted as expectation. The connections of factorisations of the generalised correlation to the spectral decomposition, as well as to the associated Karhunen-Loève or proper orthogonal decomposition, are sketched. The purpose of this short note is to show the common theoretical background and pull some loose ends together.
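For concreteness, the spectral factorisation referred to here is the one behind the Karhunen-Loève expansion: if $C$ denotes the correlation operator with eigenpairs $(\lambda_i, v_i)$, then (in standard form)

$$C v_i = \lambda_i v_i, \qquad u(\omega) = \bar{u} + \sum_i \sqrt{\lambda_i}\,\xi_i(\omega)\,v_i,$$

with zero-mean, uncorrelated, unit-variance random coefficients $\xi_i$; truncating the sum yields the proper orthogonal decomposition used for reduced-order models.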
Data Efficient Lithography Modeling with Transfer Learning and Active Data Selection ; Lithography simulation is one of the key steps in physical verification, enabled by substantial optical and resist models. A resist model bridges the aerial image simulation to printed patterns. While the effectiveness of learning-based solutions for resist modeling has been demonstrated, they are considerably data-demanding. Meanwhile, a set of manufactured data for a specific lithography configuration is only valid for the training of one single model, indicating low data efficiency. Due to the complexity of the manufacturing process, obtaining enough data for acceptable accuracy becomes very expensive in terms of both time and cost, especially during the evolution of technology generations when the design space is intensively explored. In this work, we propose a new resist modeling framework for contact layers that utilizes existing data from old technology nodes and active selection of data in a target technology node to reduce the amount of data required from the target lithography configuration. Our framework, based on transfer learning and active learning techniques, is effective within a competitive range of accuracy, i.e., a 3-10X reduction in the amount of training data with accuracy comparable to the state-of-the-art learning approach.
Enhanced Diffusivity in Perturbed Senile Reinforced Random Walk Models ; We consider the diffusivity of random walks with transition probabilities depending on the number of consecutive traversals of the last traversed edge, the so-called senile reinforced random walk (SeRW). In one dimension, the walk is known to be subdiffusive with identity reinforcement function. We perturb the model by introducing a small probability $\delta$ of escaping the last traversed edge at each step. The perturbed SeRW model is diffusive for any $\delta > 0$, with enhanced diffusivity ($\gg O(\delta^2)$) in the small-$\delta$ regime. We further study stochastically perturbed SeRW models in which the last-edge escape probability is of the form $\delta\,\xi_n$, with the $\xi_n$ being independent random variables. Enhanced diffusivity in such models is logarithmically close to the so-called residual diffusivity (positive in the zero-$\delta$ limit), with diffusivity between $O\!\left(\frac{1}{\log\delta}\right)$ and $O\!\left(\frac{1}{\log\log\delta}\right)$. Finally, we generalize our results to higher dimensions, where the unperturbed model is already diffusive. The enhanced diffusivity can be as much as $O(\log^2\delta)$.
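The following Monte Carlo sketch illustrates the perturbed walk in one dimension. The re-traversal rule used here (probability $n/(n+1)$ of crossing the last edge again after $n$ consecutive traversals, i.e., identity reinforcement) is our reading of the SeRW convention and should be checked against the paper; the escape probability $\delta$ is the perturbation, and diffusivity is estimated from the mean squared displacement.

```python
# Illustrative simulation of a perturbed 1D senile reinforced random walk.
# The reinforcement rule is an assumed form (identity reinforcement),
# not a verbatim transcription of the paper's definition.
import random

def displacement(delta=0.05, steps=20000):
    x, direction, run = 0, 1, 1        # run = consecutive traversals of last edge
    for _ in range(steps):
        if random.random() < delta:    # perturbation: forced escape of last edge
            stay = False
        else:
            stay = random.random() < run / (run + 1.0)
        if stay:
            direction = -direction     # re-traverse the same edge (turn back)
            run += 1
        else:
            run = 1                    # move onto a fresh edge, same heading
        x += direction
    return x

steps, trials = 20000, 500
msd = sum(displacement(steps=steps) ** 2 for _ in range(trials)) / trials
print("diffusivity estimate D ~ MSD/(2t) =", msd / (2 * steps))
```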
Nonasymptotic control of the MLE for misspecified nonparametric hidden Markov models ; Finite state space hidden Markov models are flexible tools to model phenomena with complex time dependencies: any process distribution can be approximated by a hidden Markov model with enough hidden states. We consider the problem of estimating an unknown process distribution using nonparametric hidden Markov models in the misspecified setting, that is, when the data-generating process may not be a hidden Markov model. We show that when the true distribution is exponentially mixing and satisfies a forgetting assumption, the maximum likelihood estimator recovers the best approximation of the true distribution. We prove a finite-sample bound on the resulting error and show that it is optimal in the minimax sense (up to logarithmic factors) when the model is well specified.
Improving Simple Models with Confidence Profiles ; In this paper, we propose a new method called ProfWeight for transferring information from a pre-trained deep neural network that has a high test accuracy to a simpler interpretable model or a very shallow network of low complexity and a priori low test accuracy. We are motivated by applications in interpretability and model deployment in severely memory-constrained environments like sensors. Our method uses linear probes to generate confidence scores through flattened intermediate representations. Our transfer method involves a theoretically justified weighting of samples during the training of the simple model using confidence scores of these intermediate layers. The value of our method is first demonstrated on CIFAR-10, where our weighting method significantly improves (3-4%) networks with only a fraction of the number of Resnet blocks of a complex Resnet model. We further demonstrate operationally significant results on a real manufacturing problem, where we dramatically increase the test accuracy of a CART model (the domain standard) by roughly 13%.
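A minimal sketch of the sample-weighting step is shown below; the probe confidences are assumed to be precomputed (one column per intermediate layer), the decision tree stands in for the simple model, and the flooring rule is an illustrative simplification of ProfWeight's weighting scheme.

```python
# Sketch: train a simple model with per-sample weights derived from the
# average true-label confidence of linear probes on intermediate layers.
# The flooring constant and tree depth are assumed values.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_weighted_simple_model(X, y, probe_confidences, floor=0.1):
    # probe_confidences: array (n_samples, n_probes); entry (i, k) is the
    # probability probe k assigns to the true label of sample i.
    weights = probe_confidences.mean(axis=1)
    weights = np.maximum(weights, floor)     # avoid zero-weighting hard samples
    simple = DecisionTreeClassifier(max_depth=5)
    simple.fit(X, y, sample_weight=weights)
    return simple
```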
Nonstandard signatures of vector-like quarks in a leptophobic 221 model ; We consider vector-like quarks in a leptophobic 221 model characterized by the gauge group $SU(2)_L \times SU(2)_2 \times U(1)_X$, where the $SU(2)_2$ is leptophobic in nature. We discuss the pattern of mixing between Standard Model quarks and vector-like quarks and how tree-level flavour-changing interactions are prevented in the model. The model also predicts tau-philic scalars decaying mostly to tau leptons. We consider a typical signal of the model in the form of pair production of top-type vector-like quarks which decay to the tau-philic scalars and a third-generation quark. We analyze the resulting final-state signal, containing $\geq 3j\,1b$, $\geq 2\tau$, $\geq 1\ell$, for the 13 TeV LHC, and discuss the discovery prospects of such vector-like quarks with nonstandard decay modes.
Connecting model-based and model-free approaches to linear least squares regression ; In a regression setting with response vector $\mathbf{y} \in \mathbb{R}^n$ and given regressors $\mathbf{x}_1, \ldots, \mathbf{x}_p \in \mathbb{R}^n$, a typical question is to what extent $\mathbf{y}$ is related to these regressors, specifically, how well $\mathbf{y}$ can be approximated by a linear combination of them. Classical methods for this question are based on statistical models for the conditional distribution of $\mathbf{y}$, given the regressors $\mathbf{x}_j$. In the present paper it is shown that various p-values resulting from this model-based approach also have a purely data-analytic, model-free interpretation. This finding is derived in a rather general context. In addition, we introduce equivalence regions, a reinterpretation of confidence regions in the model-free context.
Multitemporal Sentinel-1 and Sentinel-2 Data Fusion for Optical Image Simulation ; In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep learning based methods. Two models, i.e., optical image simulation directly from SAR data and from multitemporal SAR-optical data, are proposed to test the possibilities. The deep learning based methods that we chose to implement the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using the Sentinel-1 and Sentinel-2 datasets. The experiments demonstrate that the model with multitemporal SAR-optical data can successfully simulate the optical image, while the model with SAR data alone as input fails. The optical image simulation results indicate the possibility of SAR-optical information blending for subsequent applications such as large-scale cloud removal and optical data temporal super-resolution. We also investigate the sensitivity of the proposed models to the training samples, and reveal possible future directions.
On Testing for Parameters in Ising Models ; We consider testing for the parameters of ferromagnetic Ising models. While testing for the presence of possibly sparse magnetizations, we provide a general lower bound on minimax separation rates which yields sharp results in high-temperature regimes. Our matching upper bounds are adaptive over both the underlying dependence graph and the temperature parameter. Moreover, our results include the nearest-neighbor model on lattices, sparse Erdős-Rényi random graphs, and regular rooted trees, right up to the critical parameter in the high-temperature regime. We also provide parallel results for the entire low-temperature regime in the nearest-neighbor model on lattices, albeit in the plus-boundary pure phase. Our results for the nearest-neighbor model crucially depend on finite-volume analogues of the correlation decay property for both high- and low-temperature regimes, the derivation of which borrows crucial ideas from FK-percolation theory and might be of independent interest. Finally, we also derive lower bounds for estimation and testing rates in two-parameter Ising models, which turn out to be optimal according to several recent results in this area.
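To fix notation, the testing problem concerns the standard Ising measure with inverse temperature $\beta$ and external magnetization vector $B$ (a standard parametrization, stated here for orientation):

$$\mathbb{P}_{\beta,B}(\sigma) = \frac{1}{Z(\beta,B)}\,\exp\Bigl(\beta \sum_{(i,j)\in E} \sigma_i \sigma_j + \sum_i B_i \sigma_i\Bigr), \qquad \sigma \in \{-1,+1\}^n,$$

with the null hypothesis $B = 0$ tested against alternatives in which a possibly sparse subset of the $B_i$ is nonzero.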
Interpreted Execution of Business Process Models on Blockchain ; Blockchain technology provides a tamper-proof mechanism to execute inter-organizational business processes involving mutually untrusted parties. Existing approaches to blockchain-based process execution are based on code generation. In these approaches, a process model is compiled into one or more smart contracts, which are then deployed on a blockchain platform. Given the immutability of the deployed smart contracts, these compiled approaches ensure that all process instances conform to the process model. However, this advantage comes at the price of inflexibility. Any change to the process model requires the redeployment of the smart contracts (a costly operation). In addition, changes cannot be applied to running process instances. To address this lack of flexibility, this paper presents an interpreter of BPMN process models based on dynamic data structures. The proposed interpreter is embedded in a business process execution system with a modular multi-layered architecture, supporting the creation, execution, monitoring and dynamic update of process instances. For efficiency purposes, the interpreter relies on compact bitmap-based encodings of process models. An experimental evaluation shows that the proposed interpreted approach achieves comparable or lower costs relative to existing compiled approaches.
Comparison Study of Well-Known Inverted Pendulum Models for Balance Recovery in Humanoid Robots ; Bipedal robots are essentially unstable because of their complex kinematics as well as their high-dimensional state space dynamics; hence the control and generation of stable walking is a complex subject and still one of the active topics in the robotics community. Nowadays, many humanoids perform stable walking, but fewer show effective push recovery. In this paper, we first review the most commonly used abstract dynamics models for a humanoid robot, which are based on the inverted pendulum, and show how these models can be used to provide walking for a humanoid robot and how a hierarchical control structure can hide the complexities of humanoid walking. Second, the reviewed models are compared not only analytically but also by performing several numerical simulations in a push recovery scenario using MATLAB. These theoretical and simulation studies quantitatively compare the models with regard to regaining balance. The results show that the enhanced version of the Linear Inverted Pendulum Plus Flywheel is the most capable dynamics model for regaining the stability of the robot, even in very challenging situations.
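For reference, the Linear Inverted Pendulum model with a flywheel that the comparison singles out has the standard form (conventions vary slightly across papers):

$$\ddot{x} = \frac{g}{z_c}\,(x - p) - \frac{\tau}{m\,z_c},$$

where $x$ is the horizontal center-of-mass position at constant height $z_c$, $p$ the center of pressure, and $\tau$ the flywheel (trunk) torque; with $\tau = 0$ this reduces to the plain Linear Inverted Pendulum, whose divergent component $x + \dot{x}\sqrt{z_c/g}$ underlies capture-point push-recovery analysis.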
Multiway Encoding for Robustness ; Deep models are state-of-the-art for many computer vision tasks, including image classification and object detection. However, it has been shown that deep models are vulnerable to adversarial examples. We highlight how one-hot encoding directly contributes to this vulnerability and propose breaking away from this widely used but highly vulnerable mapping. We demonstrate that by leveraging a different output encoding, multiway encoding, we decorrelate source and target models, making target models more secure. Our approach makes it more difficult for adversaries to find useful gradients for generating adversarial attacks. We present robustness for black-box and white-box attacks on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN. The strength of our approach is also presented in the form of an attack for model watermarking, raising challenges in detecting stolen models.
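The sketch below illustrates the general idea of swapping one-hot targets for a higher-dimensional output encoding in PyTorch: each class gets a fixed code vector, the network regresses to the code, and prediction is nearest-code lookup. The code dimension and the random-code construction are our illustrative assumptions, not necessarily the paper's encoding.

```python
# Illustrative multiway output encoding (assumed random-code variant).
import torch
import torch.nn.functional as F

n_classes, code_dim = 10, 200
codes = F.normalize(torch.randn(n_classes, code_dim), dim=1)  # fixed class codes

def targets(labels):
    # Replace one-hot targets with each class's code vector.
    return codes[labels]

def predict(outputs):
    # Classify by the most similar code.
    return (outputs @ codes.t()).argmax(dim=1)

# Training would minimize e.g. F.mse_loss(model(x), targets(y))
# instead of cross-entropy against one-hot labels.
```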
Building a Production Model for Retrieval-Based Chatbots ; Response suggestion is an important task for building human-computer conversation systems. Recent approaches to conversation modeling have introduced new model architectures with impressive results, but relatively little attention has been paid to whether these models would be practical in a production setting. In this paper, we describe the unique challenges of building a production retrieval-based conversation system, which selects outputs from a whitelist of candidate responses. To address these challenges, we propose a dual encoder architecture which performs rapid inference and scales well with the size of the whitelist. We also introduce and compare two methods for generating whitelists, and we carry out a comprehensive analysis of the model and whitelists. Experimental results on a large, proprietary help desk chat dataset, including both offline metrics and a human evaluation, indicate production-quality performance and illustrate key lessons about conversation modeling in practice.
Circuitscape in Julia: High Performance Connectivity Modelling to Support Conservation Decisions ; Connectivity across landscapes influences a wide range of conservation-relevant ecological processes, including species movements, gene flow, and the spread of wildfire, pests, and diseases. Recent improvements in remote sensing data suggest great potential to advance connectivity models, but computational constraints hinder these advances. To address this challenge, we upgraded the widely used Circuitscape connectivity package to the high-performance Julia programming language. Circuitscape.jl allows users to solve problems faster via improved parallel processing and solvers, and supports applications to larger problems (e.g., datasets with hundreds of millions of cells). We document speed improvements of up to 1800x. We also demonstrate scaling of problem sizes up to 437 million grid cells. These improvements allow modelers to work with higher-resolution data and larger landscapes, and to perform sensitivity analysis effortlessly. These improvements accelerate the pace of innovation, helping modelers address pressing challenges like species range shifts under climate change. Our collaboration between ecologists and computer scientists has led to the use of connectivity models to inform conservation decisions. Further, these next-generation connectivity models will produce results faster, facilitating stronger engagement with decision-makers.
Coupled Variational Recurrent Collaborative Filtering ; We focus on the problem of streaming recommender systems and explore novel collaborative filtering algorithms to handle data dynamicity and complexity in a streaming manner. Although deep neural networks have demonstrated their effectiveness on recommendation tasks, there is a lack of exploration of integrating probabilistic models and deep architectures under streaming recommendation settings. Conjoining the complementary advantages of probabilistic models and deep neural networks could enhance both model effectiveness and the understanding of inference uncertainties. To bridge the gap, in this paper, we propose a Coupled Variational Recurrent Collaborative Filtering (CVRCF) framework based on the idea of Deep Bayesian Learning to handle the streaming recommendation problem. The framework jointly combines stochastic processes and deep factorization models under a Bayesian paradigm to model the generation and evolution of users' preferences and items' popularities. To ensure efficient optimization and streaming update, we further propose a sequential variational inference algorithm based on a cross variational recurrent neural network structure. Experimental results on three benchmark datasets demonstrate that the proposed framework performs favorably against state-of-the-art methods in terms of both temporal dependency modeling and predictive accuracy. The learned latent variables also provide visualized interpretations for the evolution of temporal dynamics.
Quantum Many-body Scars in a Landau Level on a Thin Torus ; We study a kinetically constrained pair-hopping model that arises within a Landau level in the quantum Hall effect. At filling $\nu = 1/3$, the model exactly maps onto the so-called PXP model, a constrained model for the Rydberg atom chain that is numerically known to exhibit ETH-violating states in the middle of the spectrum, or quantum many-body scars. Indeed, particular charge density wave configurations exhibit the same revivals seen in the PXP model. We generalize the mapping to filling factors $\nu = p/(2p+1)$, and show that the model is equivalent to non-integrable spin chains within particular constrained Krylov Hilbert spaces. These lead to new examples of quantum many-body scars which manifest as revivals and slow thermalization of particular charge density wave states. Finally, we investigate the stability of the quantum scars under certain Hamiltonian perturbations motivated by the fractional quantum Hall physics.
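For reference, the PXP model mentioned here is the standard constrained spin-1/2 chain

$$H_{\mathrm{PXP}} = \sum_i P_{i-1}\, X_i\, P_{i+1},$$

where $X_i$ flips the spin on site $i$ and $P_i$ projects onto the local ground (unexcited) state, so that a spin can flip only if both of its neighbors are unexcited; this Rydberg-blockade constraint is what the thin-torus pair-hopping model reproduces at $\nu = 1/3$.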
Parametric Modelling Within Immersive Environments: Building a Bridge Between Existing Tools and Virtual Reality Headsets ; Even though architectural modelling has radically evolved over the course of its history, the current integration of Augmented Reality (AR) and Virtual Reality (VR) components in the corresponding design tasks is mostly limited to enhancing visualisation. Little to none of these tools attempt to tackle the challenge of modelling within immersive environments, which calls for new input modalities in order to move away from the traditional mouse-and-keyboard combination. In fact, relying on 2D devices for 3D manipulations does not seem to be effective, as it does not offer the same degrees of freedom. We therefore present a solution that brings VR modelling capabilities to Grasshopper, a popular parametric design tool. Together with its associated proof-of-concept application, our extension offers a glimpse at new perspectives in that field. By taking advantage of them, one can edit geometries with real-time feedback on the generated models, without ever leaving the virtual environment. The distinctive characteristics of VR applications provide a range of benefits without obstructing design activities. The designer can indeed experience the architectural models at full scale from a realistic point of view and truly feels immersed right next to them.
Bayesian Clustering for Continuous-Time Hidden Markov Models ; We develop clustering procedures for longitudinal trajectories based on a continuous-time hidden Markov model (CTHMM) and a generalized linear observation model. Specifically, in this paper we carry out finite and infinite mixture model-based clustering for a CTHMM and achieve inference using Markov chain Monte Carlo (MCMC). For a finite mixture model with a prior on the number of components, we implement reversible-jump MCMC to facilitate the trans-dimensional move between different numbers of clusters. For a Dirichlet process mixture model, we utilize restricted Gibbs sampling split-merge proposals to expedite the MCMC algorithm. We apply the proposed algorithms to simulated data as well as a real data example, and the results demonstrate the desired performance of the new sampler.
Modeling Univariate and Multivariate Stochastic Volatility in R with stochvol and factorstochvol ; Stochastic volatility (SV) models are nonlinear state-space models that enjoy increasing popularity for fitting and predicting heteroskedastic time series. However, due to the large number of latent quantities, their efficient estimation is nontrivial, and software that allows one to easily fit SV models to data is rare. We aim to alleviate this issue by presenting novel implementations of four SV models delivered in two R packages. Several unique features are included and documented. As opposed to previous versions, stochvol is now capable of handling linear mean models, heavy-tailed SV, and SV with leverage. Moreover, we newly introduce factorstochvol, which caters for multivariate SV. Both packages offer a user-friendly interface through the conventional R generics and a range of tailor-made methods. Computational efficiency is achieved by interfacing R to C++ and doing the heavy work in the latter. In the paper at hand, we provide a detailed discussion of Bayesian SV estimation and showcase the use of the new software through various examples.
Model Comparison of Dark Energy Models Using a Deep Network ; This work uses a combination of a variational autoencoder and a generative adversarial network to compare different dark energy models in light of observations, e.g., the distance modulus from type Ia supernovae. The network finds an analytical variational approximation to the true posterior of the latent parameters in the models, yielding model comparison results consistent with those derived by the standard Bayesian method, which suffers from a computationally expensive integral over the parameters in the product of the likelihood and the prior. The parallel computational nature of the network, together with the stochastic gradient descent optimization technique, leads to an efficient way to compare the physical models given a set of observations. The converged network also provides interpolation for a dataset, which is useful for data reconstruction.
Analytical Modeling of UAVtoVehicle Propagation Channels in BuiltUp Areas ; This letter presents an analytical path loss model for airground AG propagation between unmanned aerial vehicles UAVs and groundbased vehicles. We consider builtup areas, such as the ones defined by ITUR. The threedimensional 3D path loss model is based on propagation conditions, and essential parameters are derived using geometric methods. Owing to its generality, the analytical model can handle arbitrary building deployments, such as suburban, urban and dense urban scenarios. The analytical model is evaluated numerically, and validations conducted by raytracing simulations show the high accuracy of the proposed model. The closedform analytical formulas provide a useful tool for quick and accurate prediction of UAVtovehicle propagation channels.
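The letter derives its own closedform geometric expressions, which are not reproduced here. As a hedged illustration of the general ingredients (a LOS probability for builtup areas combined with freespace loss plus excess losses), the sketch below uses the wellknown sigmoid airtoground LOS model of AlHourani et al., which is often fitted to ITUR builtup parameters; the constants a, b, eta_los and eta_nlos are illustrative placeholders, not the paper's derived values:

```python
import numpy as np

def mean_path_loss_db(h_uav, ground_dist, f_hz=2.4e9,
                      a=9.6, b=0.28, eta_los=1.0, eta_nlos=20.0):
    """Mean air-to-ground path loss in dB (illustrative sigmoid LOS model)."""
    c = 3e8
    theta = np.degrees(np.arctan2(h_uav, ground_dist))      # elevation angle
    p_los = 1.0 / (1.0 + a * np.exp(-b * (theta - a)))      # LOS probability
    d = np.hypot(h_uav, ground_dist)                        # 3D distance
    fspl = 20 * np.log10(4 * np.pi * f_hz * d / c)          # free-space loss
    return fspl + p_los * eta_los + (1 - p_los) * eta_nlos  # mean excess loss

print(mean_path_loss_db(h_uav=100.0, ground_dist=500.0))
```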
The Price of Interpretability ; When quantitative models are used to support decisionmaking on complex and important topics, understanding a model's "reasoning" can increase trust in its predictions, expose hidden biases, or reduce vulnerability to adversarial attacks. However, the concept of interpretability remains loosely defined and applicationspecific. In this paper, we introduce a mathematical framework in which machine learning models are constructed in a sequence of interpretable steps. We show that for a variety of models, a natural choice of interpretable steps recovers standard interpretability proxies, e.g., sparsity in linear models. We then generalize these proxies to yield a parametrized family of consistent measures of model interpretability. This formal definition allows us to quantify the "price" of interpretability, i.e., the tradeoff with predictive accuracy. We demonstrate practical algorithms to apply our framework on real and synthetic datasets.
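For linear models with sparsity as the interpretability proxy, the tradeoff the paper formalizes can be visualized with a simple experiment: sweep an L1 regularization path and read off accuracy versus the number of active features. This sketch uses scikitlearn on synthetic data and illustrates the proxy only, not the paper's framework:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Accuracy vs. sparsity along an L1 path: the accuracy gap relative to the
# densest model is an (illustrative) "price" paid for interpretability.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for C in [0.001, 0.01, 0.1, 1.0]:
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xtr, ytr)
    n_active = np.sum(clf.coef_ != 0)
    print(f"C={C:<6} active features={n_active:>2}  accuracy={clf.score(Xte, yte):.3f}")
```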
A Compositional Framework for Scientific Model Augmentation ; Scientists construct and analyze computational models to understand the world. That understanding comes from efforts to augment, combine, and compare models of related phenomena. We propose SemanticModels.jl, a system that leverages techniques from static and dynamic program analysis to process executable versions of scientific models to perform such metamodeling tasks. By framing these metamodeling tasks as metaprogramming problems, SemanticModels.jl enables writing programs that generate and expand models. To this end, we present a category theorybased framework for defining metamodeling tasks and extracting semantic information from model implementations, and we show how this framework can be used to enhance scientific workflows in a working case study.
Evolution of KaluzaKlein Like Wet Dark Fluid in $f(R,T)$ Theory of Gravitation ; Here we study the essence of $f(R,T)$ gravitation theory in a five dimensional Universe and examine the role of dark energy in the form of a wet dark fluid in such a Universe. It is found that dark energy does not contribute excessively to the accelerating expansion of the Universe, since the expansion is inherent to the theory itself and to the geometric contribution of matter. Interestingly, in some models there is an era preceding the present one, and some of the model Universes turn out to be either oscillatory or cyclic. Some of the models approach $\Lambda$CDM models in the late future, as in Einstein's gravitation theory, starting their evolution with a big bang. Most of the models undergo early inflation as well as late time accelerating expansion, making them good models for real astrophysical situations, with dark energy playing a fundamental role in these Universes.
Dynamical equivalence between Kuramoto models with first and higherorder coupling ; The Kuramoto model with highorder coupling has recently attracted some attention in the field of coupled oscillators in order, for instance, to describe clustering phenomena in sets of coupled agents. Instead of considering interactions given directly by the sine of oscillators' angle differences, the interaction is given by the sum of sines of integer multiples of these angle differences. This can be interpreted as a Fourier decomposition of a general $2\pi$-periodic interaction function. We show that in the case where only one multiple of the angle differences is considered, which we refer to as the Kuramoto model with simple $q$th-order coupling, the system is dynamically equivalent to the original Kuramoto model. In other words, any property of the Kuramoto model with simple higherorder coupling can be recovered from the standard Kuramoto model.
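The equivalence is easy to check numerically: for simple $q$th-order coupling, the substitution $\varphi_i = q\theta_i$ turns the dynamics into a standard Kuramoto model with frequencies $q\omega_i$ and coupling $qK$. A minimal sketch (parameter values are illustrative):

```python
import numpy as np

# Simple qth-order coupling:  dtheta_i/dt = omega_i + (K/N) sum_j sin(q(theta_j - theta_i)).
# With phi = q*theta this becomes a standard Kuramoto model with frequencies
# q*omega and coupling q*K; both integrations below therefore track each other.
rng = np.random.default_rng(1)
N, K, q, dt, steps = 50, 1.5, 3, 0.01, 5000
omega = rng.standard_normal(N)
theta = rng.uniform(0, 2 * np.pi, N)
phi = q * theta            # the equivalent first-order-coupling system

for _ in range(steps):
    theta += dt * (omega + K / N * np.sin(q * (theta[None] - theta[:, None])).sum(1))
    phi += dt * (q * omega + q * K / N * np.sin(phi[None] - phi[:, None]).sum(1))

# Angular mismatch between q*theta and phi stays at the level of numerical error:
print(np.max(np.abs(np.angle(np.exp(1j * (q * theta - phi))))))
```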
Competing Models ; Different agents need to make a prediction. They observe identical data, but have different models: they predict using different explanatory variables. We study which agent believes they have the best predictive ability, as measured by the smallest subjective posterior mean squared prediction error, and show how it depends on the sample size. With small samples, we present results suggesting it is an agent using a lowdimensional model. With large samples, it is generally an agent with a highdimensional model, possibly including irrelevant variables, but never excluding relevant ones. We apply our results to characterize the winning model in an auction of productive assets, to argue that entrepreneurs and investors with simple models will be overrepresented in new sectors, and to understand the proliferation of factors that explain the crosssectional variation of expected stock returns in the assetpricing literature.
Property Graph Exchange Format ; Recently, a variety of database implementations adopting the property graph model have emerged. However, interoperable management of graph data on these implementations is challenging due to the differences in data models and formats. Here, we redefine the property graph model incorporating the differences in the existing models and propose interoperable serialization formats for property graphs. The model is independent of specific implementations and provides a basis for interoperable management of property graph data. The proposed serialization is not only general but also intuitive, and thus it is useful for creating and maintaining graph data. To demonstrate the practical use of our model and serialization, we implemented converters from our serialization into existing formats, which can then be loaded into various graph databases. This work provides a basis for an interoperable platform for creating, exchanging, and utilizing property graph data.
Undamped Bloch Oscillations in the $U \rightarrow \infty$ onedimensional Hubbard model ; The $U \rightarrow \infty$ onedimensional Hubbard model in an electric field is exactly solved, with an emphasis on the charge current. It is found that undamped Bloch oscillations exist extensively in the system. This conclusion is also discussed for more general cases, and we find that it is closely related to the temporal periodicity of the model Hamiltonian in an electric field, rather than to the integrability of the model. As a comparison, we also study a model of electrons with deltafunction interactions in continuous space, which is closely related to the Hubbard model but is nonintegrable; we find that the charge current, surprisingly, shows a dissipationless behavior comparable with the undamped Bloch oscillations.
Deep Lagrangian Networks Using Physics as Model Prior for Deep Learning ; Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important realworld applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for modelbased control requires robust extrapolation from fewer samples, often collected online in realtime, and model errors may lead to drastic damage to the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks DeLaN as a deep network structure upon which Lagrangian Mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system, i.e., the system dynamics, with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method not only outperforms previous model learning approaches in learning speed but also exhibits substantially improved and more robust extrapolation to novel trajectories, and it learns online in realtime.
Constrained affine Gaudin models and diagonal YangBaxter deformations ; We review and pursue further the study of constrained realisations of affine Gaudin models, which form a large class of twodimensional integrable field theories with gauge symmetries. In particular, we develop a systematic gauging procedure which allows us to reformulate the nonconstrained realisations of affine Gaudin models considered recently in JHEP 06 2019 017 as equivalent models with a gauge symmetry. This reformulation is then used to construct integrable deformations of these models breaking their diagonal symmetry. We then apply these general methods to the integrable coupled sigmamodel introduced recently, whose target space is the $N$-fold Cartesian product $G_0^N$ of a real semisimple Lie group $G_0$. We present its gauged formulation as a model on $G_0^{N+1}$ with a gauge symmetry acting as the right multiplication by the diagonal subgroup $G_0^{\text{diag}}$ and construct its diagonal homogeneous YangBaxter deformation.
Flavour anomalies and fundamental partial compositeness ; Several measurements of Bmeson decay observables show deviations from Standard Model SM predictions, some of them hinting at violation of lepton flavour universality LFU. I discuss how the anomalies in rare B decays can be explained by partial compositeness. Partial compositeness is a key ingredient of models with a composite Higgs boson and generically leads to violation of LFU. After presenting a simple model with partial compositeness that is able to explain the anomalies in rare B decays, the flavour phenomenology of a minimal UV completion of a composite Higgs model with partial compositeness is discussed: the minimal fundamental partial compositeness MFPC model. A virtue of the MFPC model is its capability of serving both as a solution to the naturalness problem of the SM and as an explanation of the flavour anomalies in rare Bmeson decays. In view of recent new measurements, the results on which this proceedings contribution is based are updated.
A Deep Neural Network for Finger Counting and Numerosity Estimation ; In this paper, we present neurorobotics models with a deep artificial neural network capable of generating finger counting positions and number estimation. We first train the model in an unsupervised manner, where each layer is treated as a Restricted Boltzmann Machine or an autoencoder. Such a model is further trained in a supervised way. This type of pretraining is tested on our baseline model and two methods of pretraining are compared. The network is extended to produce finger counting positions. The performance in number estimation of such an extended model is evaluated. We test the hypothesis that the subitizing process can be obtained by one single model that is also used for the estimation of higher numerosities. The results confirm the importance of unsupervised training in our enumeration task and show some similarities to human behaviour in the case of subitizing.
Correlations between azimuthal anisotropy Fourier harmonics in PbPb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV in the HYDJET and AMPT models ; Correlations between azimuthal anisotropy Fourier harmonics $v_n$ ($n = 2, 3, 4$) are studied using events from PbPb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV generated by the HYDJET and AMPT models, and compared to the corresponding experimental results obtained by the ATLAS Collaboration. The Fourier harmonics $v_n$ are measured over a wide centrality range using the twoparticle azimuthal correlation method. The slopes of the $v_2$-$v_3$ correlation from both models are in good agreement with the ATLAS data. The HYDJET model predicts a stronger slope for the $v_2$-$v_4$ and $v_3$-$v_4$ correlations than the ones experimentally measured, while the results from the AMPT model are in rather good agreement with the experimental results. In contrast to the HYDJET predictions, the AMPT model predicts a boomeranglike shape in the structure of the correlations, as found in the experimental data.
The Dynamic Embedded Topic Model ; Topic modeling analyzes documents to learn meaningful patterns of words. For documents collected in sequence, dynamic topic models capture how these patterns vary over time. We develop the dynamic embedded topic model DETM, a generative model of documents that combines dynamic latent Dirichlet allocation DLDA and word embeddings. The DETM models each word with a categorical distribution parameterized by the inner product between the word embedding and a pertimestep embedding representation of its assigned topic. The DETM learns smooth topic trajectories by defining a random walk prior over the embedding representations of the topics. We fit the DETM using structured amortized variational inference with a recurrent neural network. On three different corpora (a collection of United Nations debates, a set of ACL abstracts, and a dataset of Science Magazine articles), we found that the DETM outperforms DLDA on a document completion task. We further found that the DETM learns more diverse and coherent topics than DLDA while requiring significantly less time to fit.
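A minimal sketch of the DETM observation model described above: pertimestep topic embeddings and word embeddings define, via a softmax of inner products, a distribution over the vocabulary for each topic at each time step. Dimensions below are illustrative:

```python
import numpy as np

# beta[t, k] = softmax over the vocabulary of  rho @ alpha[t, k], i.e. the
# categorical word distribution of topic k at time step t. In the full model,
# alpha follows a random-walk prior over t; here it is just random for brevity.
rng = np.random.default_rng(0)
V, L, K, T = 2000, 100, 20, 5            # vocab size, embed dim, topics, time steps

rho = rng.standard_normal((V, L))        # word embeddings (shared across time)
alpha = rng.standard_normal((T, K, L))   # per-time-step topic embeddings

logits = np.einsum("vl,tkl->tkv", rho, alpha)
beta = np.exp(logits - logits.max(-1, keepdims=True))
beta /= beta.sum(-1, keepdims=True)      # normalise: each row sums to one
print(beta.shape, beta[0, 0].sum())      # (T, K, V), 1.0
```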
Intrinsically symmetric cosmological model in the presence of dissipative fluids ; Based upon the intrinsic symmetries approach to inhomogeneous cosmologies, we propose an exact solution to Einstein's field equations where the spatial sections are flat and the source is a nonperfect fluid such that the dissipative terms can be written in terms of spatial gradients of the energy density under a suitable choice of the coordinate system. It is shown through the calculation of the luminosity distance as a function of the redshift that the presence of such inhomogeneities may lead to an effective deceleration parameter compatible with either the standard $\Lambda$CDM model or LTB models, depending on the choice of boundary conditions, with no exotic matter. This fact is further evidence that different inhomogeneous models should be carefully investigated in order to verify which model may be compatible with observations and still be as close as possible to the standard model regarding the underlying assumptions, without necessarily resorting to exotic matter components.
The BregmanTweedie Classification Model ; This work proposes the BregmanTweedie classification model and analyzes the domain structure of the extended exponential function, an extension of the classic generalized exponential function with an additional scaling parameter, and related highlevel mathematical structures, such as the BregmanTweedie loss function and the BregmanTweedie divergence. The base function of this divergence is the convex function of Legendre type induced from the extended exponential function. The BregmanTweedie loss function of the proposed classification model is the regular Legendre transformation of the BregmanTweedie divergence. This loss function is a polynomially parameterized function between the unhinged loss and the logistic loss function. We obtain two submodels of the BregmanTweedie classification model: HBregman, with a hingelike loss function, and LBregman, with a logisticlike loss function. Although the proposed classification model is nonconvex and unbounded, empirically we have observed that HBregman and LBregman outperform logistic regression and SVM in terms of the Friedman ranking, and show reasonable classification accuracy for binary linear classification problems.
Lévy-Ito Models in Finance ; We present an overview of the broad class of financial models in which the prices of assets are Lévy-Ito processes driven by an $n$-dimensional Brownian motion and an independent Poisson random measure. The Poisson random measure is associated with an $n$-dimensional Lévy process. Each model consists of a pricing kernel, a money market account, and one or more risky assets. We show how the excess rate of return above the interest rate can be calculated for risky assets in such models, thus showing the relationship between risk and return when asset prices have jumps. The framework is applied to a variety of asset classes, allowing one to construct new models as well as interesting generalizations of familiar models.
A Stock Market Model Based on CAPM and Market Size ; We introduce a new system of stochastic differential equations which models the dependence of market beta and unsystematic risk upon size, measured by market capitalization. We fit our model using size decile data from Kenneth French's data library. This model is somewhat similar to the generalized volatilitystabilized models in (Pal, 2011; Pickova, 2013). The novelty of our work is twofold. First, we take into account the difference between price and total returns, in other words, between market size and wealth processes. Second, we work with actual market data. We study the longterm properties of this system of equations, and reproduce the observed linearity of the capital distribution curve. Our model has two modifications: for price returns and for equity premium. Somewhat surprisingly, they exhibit the same fit, with very similar coefficients. In the Appendix, we analyze sizebased realworld index funds.
Recurrent Neural Networks with Long Term Temporal Dependencies in Machine Tool Wear Diagnosis and Prognosis ; Datadriven approaches to automated machine condition monitoring are gaining popularity due to advancements made in sensing technologies and computing algorithms. This paper proposes the use of a deep learning model based on the Long ShortTerm Memory LSTM architecture for a recurrent neural network RNN, which captures long term dependencies for modeling sequential data. In the context of estimating cutting tool wear amounts, this LSTMbased RNN approach utilizes a system transition and system observation function based on a minimally intrusive vibration sensor signal located near the workpiece fixtures. By applying an LSTMbased RNN, the method helps to avoid building an analytic model for specific tool wear machine degradation, overcoming the assumptions made by Hidden Markov Model, Kalman filter, and Particle filter based approaches. The proposed approach is tested using experiments performed on a milling machine. We demonstrate onestep and twostep lookahead cutting tool state prediction using online indirect measurements obtained from vibration signals. The study also estimates the remaining useful life RUL of a machine cutting tool insert through a generative RNN. The experimental results show that our approach, applying the LSTM to model the system observation and transition functions, is able to outperform the functions modeled with a simple RNN.
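A hedged sketch of the modelling idea, not the paper's exact architecture: an LSTM maps a window of vibrationsignal features to a wear estimate, trained with a meansquared error loss. The layer sizes and the synthetic standin data are assumptions:

```python
import torch
import torch.nn as nn

# Illustrative LSTM regressor: (batch, time, features) -> scalar wear estimate.
class WearLSTM(nn.Module):
    def __init__(self, n_feat=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # wear estimate from the last time step

model = WearLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 100, 8)             # stand-in for vibration feature windows
y = torch.rand(32, 1)                   # stand-in for measured wear amounts

for _ in range(10):                     # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())
```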
Gradientbased adversarial attacks on categorical sequence models via traversing an embedded world ; Deep learning models suffer from a phenomenon called adversarial attacks: we can apply minor changes to the model input to fool a classifier for a particular example. The literature mostly considers adversarial attacks on models with images and other structured inputs. However, adversarial attacks on categorical sequences can also be harmful. Successful attacks for inputs in the form of categorical sequences should address the following challenges: 1 nondifferentiability of the target function, 2 constraints on transformations of initial sequences, and 3 diversity of possible problems. We handle these challenges using two blackbox adversarial attacks. The first approach adopts a MonteCarlo method and allows usage in any scenario; the second approach uses a continuous relaxation of models and target metrics, and thus allows usage of stateoftheart methods for adversarial attacks with little additional effort. Results for money transactions, medical fraud, and NLP datasets suggest that the proposed methods generate reasonable adversarial sequences that are close to the original ones but fool machine learning models.
No single unification theory of everything ; In light of Gödel's undecidability results (the incompleteness theorems) in mathematics, quantum indeterminism indicates that physics and the Universe may be indeterministic, incomplete, and open in nature, and therefore demand no single unification theory of everything. The Universe is dynamic, and so are the underlying physical models and spacetime. As the 4d spacetime evolves dimension by dimension in the early universe, consistent yet different models emerge one by one with different sets of particles and interactions. A new set of first principles is proposed for building such models, with a new understanding of supersymmetry, mirror symmetry, and the dynamic phase transition mechanism of spontaneous symmetry breaking. Under this framework, we demonstrate that different models with no theory of everything operate in a hierarchical yet consistent way at different phases or scenarios of the Universe. In particular, the arrow of time is naturally explained and the Standard Model of physics is elegantly extended to time zero of the Universe.
Dynamic transformation of prior knowledge into Bayesian models for data streams ; We consider how to effectively use prior knowledge when learning a Bayesian model from streaming environments where the data arrive endlessly and sequentially. This problem is highly important in the era of data explosion and rich sources of precious external knowledge such as pretrained models, ontologies, Wikipedia, etc. We show that some existing approaches can forget any knowledge very fast. We then propose a novel framework that enables the incorporation of prior knowledge of different forms into a base Bayesian model for data streams. Our framework subsumes some existing popular models for timeseries and dynamic data. Extensive experiments show that our framework outperforms existing methods by a large margin. In particular, our framework can help Bayesian models generalize well on extremely short text while other methods overfit. The implementation of our framework is available at https://github.com/bachtranxuan/TPS.git.
SemiModular Inference enhanced learning in multimodular models by tempering the influence of components ; Bayesian statistical inference loses predictive optimality when generative models are misspecified. Working within an existing coherent lossbased generalisation of Bayesian inference, we show that existing Modular/Cutmodel inference is coherent, and we write down a new family of SemiModular Inference SMI schemes, indexed by an influence parameter, with Bayesian inference and Cutmodels as special cases. We give a metalearning criterion and estimation procedure to choose the inference scheme. This returns Bayesian inference when there is no misspecification. The framework applies naturally to multimodular models. Cutmodel inference allows directed information flow from wellspecified modules to misspecified modules, but not vice versa. An existing alternative power posterior method gives tunable but undirected control of information flow, improving prediction in some settings. In contrast, SMI allows tunable and directed information flow between modules. We illustrate our methods on two standard test cases from the literature and a motivating archaeological data set.
Slow time scales in a dense vibrofluidized granular material ; Modeling collective motion in nonconservative systems, such as granular materials, is difficult since a general microscopictomacroscopic approach is not available: there is no Hamiltonian, no known stationary densities in phase space, and no known small set of relevant variables. Phenomenological coarsegrained models are a good alternative, provided that one has identified a few slow observables and collected a sufficient amount of data for their dynamics. Here we study the case of a vibrofluidized dense granular material. The experimental study of a tracer dispersed into the medium showed evidence of many time scales: fast ballistic, intermediate caged, slow superdiffusive, and very slow diffusive. A numerical investigation has demonstrated that the tracer's superdiffusion is related to slow rotating drifts of the granular medium. Here we offer a deeper insight into the slow scales of the granular medium, and propose a new phenomenological model for such secular dynamics. Based upon the model for the granular medium, we also introduce a model for the tracer's fast and slow dynamics, which consists of a stochastic system of equations for three coupled variables, and is therefore more refined and successful than previous models.
Continuous QoE Prediction Based on WaveNet ; Continuous QoE prediction is crucial for maximizing viewer satisfaction, through which video service providers can improve their revenue. Continuously predicting QoE is challenging since it requires QoE models that are capable of capturing the complex dependencies among QoE influence factors. Existing approaches that utilize LongShortTermMemory LSTM networks successfully model such longterm dependencies, providing superior QoE prediction performance. However, the inherently sequential computation of LSTM results in high computational cost in training and prediction tasks. Recently, WaveNet, a deep neural network for generating raw audio waveforms, has been introduced. It has immediately gained great attention since it successfully leverages the parallel computing of causal convolution and dilated convolution to deal with timeseries data, e.g., audio signals. Inspired by the success of WaveNet, in this paper we propose a WaveNetbased QoE model for continuous QoE prediction in video streaming services. The model is trained and tested on two publicly available databases, namely LFOVIA Video QoE and LIVE Mobile Stall Video II. The experimental results demonstrate that the proposed model outperforms the baseline models in terms of processing time, while maintaining sufficient accuracy.
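The building block borrowed from WaveNet is a stack of dilated causal 1D convolutions: each output depends only on past inputs, the receptive field grows exponentially with depth, and all time steps are computed in parallel. A minimal PyTorch sketch (sizes are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

# Causal convolution: pad, convolve, then trim the lookahead so each output
# at time t only sees inputs up to time t.
class CausalConv1d(nn.Conv1d):
    def __init__(self, c_in, c_out, kernel_size, dilation):
        pad = (kernel_size - 1) * dilation
        super().__init__(c_in, c_out, kernel_size, padding=pad, dilation=dilation)
        self.pad = pad

    def forward(self, x):
        return super().forward(x)[..., :-self.pad]

layers = []
for d in [1, 2, 4, 8, 16]:                           # exponentially dilated stack
    layers += [CausalConv1d(16 if layers else 4, 16, kernel_size=2, dilation=d),
               nn.ReLU()]
net = nn.Sequential(*layers, nn.Conv1d(16, 1, 1))    # per-time-step QoE score

x = torch.randn(8, 4, 120)   # (batch, QoE influence factors, time)
print(net(x).shape)          # -> (8, 1, 120), one prediction per time step
```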
Bayesian Nonparametric Density Autoregression with Lag Selection ; We develop a Bayesian nonparametric autoregressive model applied to flexibly estimate general transition densities exhibiting nonlinear lag dependence. Our approach is related to Bayesian density regression using Dirichlet process mixtures, with the Markovian likelihood defined through the conditional distribution obtained from the mixture. This results in a Bayesian nonparametric extension of a mixturesofexperts model formulation. We address computational challenges to posterior sampling that arise from the Markovian structure in the likelihood. The base model is illustrated with synthetic data from a classical model for population dynamics, as well as a series of waiting times between eruptions of Old Faithful Geyser. We study inferences available through the base model before extending the methodology to include automatic relevance detection among a prespecified set of lags. Inference for global and local lag selection is explored with additional simulation studies, and the methods are illustrated through analysis of an annual time series of pink salmon abundance in a stream in Alaska. We further explore and compare transition density estimation performance for alternative configurations of the proposed model.
A nearlyneutral biallelic Moran model with biased mutation and linear and quadratic selection ; In this article, a biallelic reversible mutation model with linear and quadratic selection is analyzed. The approach reconnects to the one proposed by Kimura (Possibility of extensive neutral evolution under stabilizing selection with special reference to nonrandom use of codons, PNAS, 1981), who starts from a diffusion model and derives its equilibrium distribution up to a constant. We use a boundarymutation Moran model, which approximates a general mutation model for small effective mutation rates, and derive its equilibrium distribution for polymorphic and monomorphic variants in small to moderately sized populations. Using this model, we show that biased mutation rates and linear selection alone can cause patterns of polymorphism rates within, and substitution rates between, populations that are usually ascribed to balancing or overdominant selection. We illustrate this using a data set of short introns and fourfold degenerate sites from Drosophila simulans and Drosophila melanogaster.
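A minimal forward simulation of a biallelic Moran model with reversible (biased) mutation and a linear selective advantage, to illustrate the class of models analyzed; the rates are illustrative, and the paper's boundarymutation approximation and quadratic selection term are not reproduced here:

```python
import numpy as np

# Moran step: one individual reproduces (allele 1 with linear advantage s,
# offspring may mutate), and one uniformly chosen individual dies.
rng = np.random.default_rng(2)
N, mu01, mu10, s, steps = 200, 1e-3, 2e-3, 0.01, 100_000

i = N // 2                               # count of allele 1
for _ in range(steps):
    x = i / N
    p_repro1 = x * (1 + s) / (x * (1 + s) + (1 - x))
    child = rng.random() < p_repro1      # offspring type before mutation
    if child:
        child = rng.random() >= mu10     # type-1 offspring may mutate to 0
    else:
        child = rng.random() < mu01      # type-0 offspring may mutate to 1
    i += int(child) - (rng.random() < x) # birth minus random death
    i = min(max(i, 0), N)

print(i / N)                             # final allele-1 frequency
```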
A Simple Fix for Convolutional Neural Network via Coordinate Embedding ; Convolutional Neural Networks CNNs have been widely applied in the realm of computer vision. However, because CNN models are translation invariant, they are not aware of the coordinate information of each pixel. Thus the generalization ability of CNNs is limited, since the coordinate information is crucial for a model to learn affine transformations, which directly operate on the coordinates of each pixel. In this project, we propose a simple approach to incorporate coordinate information into the CNN model through coordinate embedding. Our approach does not change the downstream model architecture and can be easily applied to pretrained models for tasks like object detection. Our experiments on the German Traffic Sign Detection Benchmark show that our approach not only significantly improves model performance but also yields better robustness with respect to affine transformations.
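The fix amounts to concatenating normalised coordinate channels to the feature map before convolution, in the spirit of CoordConv, so that filters can condition on position. A minimal sketch (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

# A conv layer that appends normalised y/x coordinate channels to its input.
class CoordEmbedConv(nn.Module):
    def __init__(self, c_in, c_out, **kw):
        super().__init__()
        self.conv = nn.Conv2d(c_in + 2, c_out, **kw)   # +2 coordinate channels

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

layer = CoordEmbedConv(3, 16, kernel_size=3, padding=1)
print(layer(torch.randn(2, 3, 32, 32)).shape)   # -> (2, 16, 32, 32)
```

Because only the first convolution's input channels change, the rest of a pretrained backbone can be reused unchanged, which is what makes the approach easy to retrofit.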
Felix Flexible Text Editing Through Tagging and Insertion ; We present Felix, a flexible textediting approach for generation, designed to derive the maximum benefit from the ideas of decoding with bidirectional contexts and selfsupervised pretraining. In contrast to conventional sequencetosequence seq2seq models, Felix is efficient in lowresource settings and fast at inference time, while being capable of modeling flexible inputoutput transformations. We achieve this by decomposing the textediting task into two subtasks: tagging, to decide on the subset of input tokens and their order in the output text, and insertion, to infill the missing tokens in the output that are not present in the input. The tagging model employs a novel Pointer mechanism, while the insertion model is based on a Masked Language Model. Both of these models are chosen to be nonautoregressive to guarantee faster inference. Felix performs favourably when compared to recent textediting methods and strong seq2seq baselines when evaluated on four NLG tasks: Sentence Fusion, Machine Translation Automatic PostEditing, Summarization, and Text Simplification.
Spatially homogeneous models of Stäckel spacetimes of type 2.1 ; All classes of spatially homogeneous spacetime models are found that allow the integration of the equations of motion of test particles and the eikonal equation by the method of complete separation of variables according to type 2.1. Four classes of model data are obtained. The resulting models can be applied in any modified metric theories of gravity. Two of the above models allow solutions of the Einstein equations with a cosmological constant and radiation. For the models of a spatially homogeneous Universe with a cosmological constant and radiation obtained in Einstein's theory of gravity, the HamiltonJacobi equations of motion of the test particles and the eikonal equation for radiation are integrated by the method of separation of variables.
Fold bifurcation entangled surfaces for onedimensional Kitaev lattice model ; We investigate feasible holography with the Kitaev model using dilatonic gravity in AdS2. We propose a generic dual theory of gravity in AdS2 and suggest that this bulk action is a suitable toy model for studying quantum mechanics in the Kitaev model using gaugegravity duality. This gives a possible equivalent description of the Kitaev model in the dual gravity bulk. Scalar and tensor perturbations are investigated in detail. In the case of near AdS perturbations, we show that the geometry remains AdS, while the dilaton perturbation decays safely at the AdS boundary. The timedependent part of the perturbation is oscillatory. We discover that the dual gravity induces an effective and renormalizable quantum action. The entanglement entropy for the bulk theory is computed using extremal surfaces. We prove that these surfaces have a fold bifurcation regime of criticality. Our approach shows directly that chaos in AdS2 can be understood via fold bifurcation minimal surfaces.
Primordial Black Holes as Dark Matter ; We investigate models in which a spectrum of black holes with Hawking temperature of order the radiation temperature at the beginning of the radiation dominated era can survive long enough to produce a matter dominated era at the observed crossover between matter and radiation in our universe. We find that a sufficiently dense population of such black holes can indeed do so. The stronger observational constraint, that the black holes have lifetimes at least as long as the current age of the universe, is harder to assess because of black hole mergers during the matter dominated era. We then investigate whether the required densities and masses are consistent with the Holographic Spacetime HST model of inflation. We find that they are, but this puts a mild constraint on the slow roll parameter $\epsilon = \frac{\dot{H}}{H^2}$ in that model, requiring it to be small. The bound is no stronger than the observational bound on the model's prediction for tensor fluctuations. The required black hole density, at the reheat temperature, in a model with a single species of black hole, must be viewed as a quantum mechanical accident. In such a model, our universe exists because of a low probability quantum fluctuation.
DataDriven Option Pricing using Single and MultiAsset Supervised Learning ; We propose three different datadriven approaches for pricing Europeanstyle call options using supervised machinelearning algorithms. These approaches yield models that give a range of fair prices instead of a single price point. The performance of the models is tested on two stock market indices, NIFTY50 and BANKNIFTY, from the Indian equity market. Although neither historical nor implied volatility is used as an input, the results show that the trained models have been able to capture the option pricing mechanism better than or similarly to the BlackScholes formula for all the experiments. Our choice of scalefree IO allows us to train models using combined data of multiple different assets from a financial market. This not only allows the models to achieve far better generalization and predictive capability, but also solves the problem of paucity of data, the primary limitation of using machine learning techniques. We also illustrate the performance of the trained models in the period leading up to the 2020 Stock Market Crash (Jan 2019 to April 2020).
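A hedged sketch of the scalefree input/output idea: train on moneyness S/K and time to maturity, and predict the pricetostrike ratio C/K, so that data from different assets live on a common scale. Since the paper's market data are not available here, BlackScholes prices merely stand in for observed prices to keep the sketch selfcontained:

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

# Stand-in "market": Black-Scholes call price divided by strike, as a function
# of moneyness m = S/K and maturity t. Rate and volatility are illustrative.
def bs_call_over_k(m, t, r=0.05, vol=0.2):
    d1 = (np.log(m) + (r + vol**2 / 2) * t) / (vol * np.sqrt(t))
    return m * norm.cdf(d1) - np.exp(-r * t) * norm.cdf(d1 - vol * np.sqrt(t))

rng = np.random.default_rng(0)
m = rng.uniform(0.7, 1.3, 20_000)                   # moneyness
t = rng.uniform(0.05, 1.0, 20_000)                  # maturity in years
X, y = np.column_stack([m, t]), bs_call_over_k(m, t)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300).fit(X, y)
print(model.predict([[1.0, 0.5]]))                  # C/K for an at-the-money option
```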
Modifying the NetworkBased Stochastic SEIR Model to Account for Quarantine ; In this article, we present a modification to the networkbased stochastic SEIR epidemic model which allows for modifications to the underlying contact network to account for the effects of quarantine. We also discuss the changes needed in the model to incorporate situations where some proportion of the individuals who are infected remain asymptomatic throughout the course of the disease. Using a generic network model where every potential contact exists with the same common probability, we conduct a simulation study in which we vary four key model parameters (the transmission rate, the probability of remaining asymptomatic, and the mean lengths of time spent in the Exposed and Infectious disease states) and examine the resulting impacts on various metrics of epidemic severity, including the effective reproduction number. We find that the mean length of time spent in the Infectious state and the transmission rate are the most important model parameters, while the mean length of time spent in the Exposed state and the probability of remaining asymptomatic are less important.
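A minimal sketch of one simulation step for a networkbased stochastic SEIR model with quarantine, assuming the generic setting described above (every contact exists with a common probability, and quarantining a detected infectious node removes its edges). All rates and probabilities are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, beta, sigma, gamma, steps = 500, 0.02, 0.3, 1 / 3, 1 / 7, 100

A = rng.random((N, N)) < p
A = np.triu(A, 1); A = A | A.T                      # symmetric contact network
state = np.zeros(N, dtype=int)                      # 0=S, 1=E, 2=I, 3=R
state[rng.choice(N, 5, replace=False)] = 2          # seed infections

for _ in range(steps):
    infectious = (state == 2)
    # quarantine: each infectious node is detected w.p. 0.2 and loses its edges
    detected = infectious & (rng.random(N) < 0.2)
    A[detected, :] = False; A[:, detected] = False
    # S -> E: infection pressure = number of infectious (non-removed) contacts
    pressure = (A & infectious[None, :]).sum(1)
    new_E = (state == 0) & (rng.random(N) < 1 - (1 - beta) ** pressure)
    new_I = (state == 1) & (rng.random(N) < sigma)  # E -> I
    new_R = infectious & (rng.random(N) < gamma)    # I -> R
    state[new_E], state[new_I], state[new_R] = 1, 2, 3

print(np.bincount(state, minlength=4))              # S, E, I, R counts
```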
Dynamics of the TysonHongThronNovak circadian oscillator model ; We study the dynamics of a circadian oscillator model which was proposed by Tyson, Hong, Thron and Novak. This model indicates a molecular mechanism for the circadian rhythm in Drosophila. After giving a detailed study of the equilibria, we further investigate the effects of the rates of mRNA degradation and synthesis. When the rate of mRNA degradation is rather fast, we prove that there are no periodic orbits in this model. When the rate of mRNA degradation is slow enough, this model is transformed into a slowfast system. Then, based on geometric singular perturbation theory, we prove the existence of canard explosions, relaxation oscillations, homoclinic and heteroclinic orbits, and saddlenode bifurcations as the rates of mRNA degradation and synthesis change. Finally, we give the biological interpretation of the obtained results and point out that this model can be transformed into a Liénard-like equation, which could be helpful for investigating the dynamics of the general case.
ConvBERT Improving BERT with Spanbased Dynamic Convolution ; Pretrained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global selfattention block and thus suffers from a large memory footprint and computation cost. Although all its attention heads query the whole input sequence to generate the attention map from a global perspective, we observe that some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel spanbased dynamic convolution to replace these selfattention heads to directly model local dependencies. The novel convolution heads, together with the remaining selfattention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, the ConvBERTbase model achieves an 86.4 GLUE score, 0.7 points higher than ELECTRAbase, while using less than 1/4 of the training cost. Code and pretrained models will be released.
Stationary solutions for dyadic mixed model of the Euler equation. A complete spectrum ; Dyadic models of the Euler equations were introduced as toy models to study the behaviour of an inviscid fluid in turbulence theory. In 1974 Novikov proposed a generalized mixed dyadic model that extends both the KatzPavlovic and Obukhov models, giving birth to a more complex structure; no results were found in the literature until 2015, when blow up in finite time for smooth solutions and the existence of selfsimilar solutions for particular values of the model parameters were shown by Jeong I.J. We extend such partial results by giving a complete spectrum of existence and uniqueness results for two cardinal classes of finite energy stationary solutions, namely constant and selfsimilar solutions.
Deep Filtering ; This paper develops a deep learning method for linear and nonlinear filtering. The idea is to start with a nominal dynamic model and generate Monte Carlo sample paths. Then these samples are used to train a deep neural network. A least squares error is used as the loss function for network training. Then the resulting weights are applied to Monte Carlo samples from an actual dynamic model. The deep filter obtained in such a way compares favorably to the traditional Kalman filter in linear cases and the extended Kalman filter in nonlinear cases. Moreover, a switching model with jumps is studied to show the adaptiveness and power of our deep filtering method. A main advantage of deep filtering is its robustness when the nominal model and actual model differ. Another advantage of deep filtering is that real data can be used directly to train the deep neural network. Therefore, one does not need to calibrate the model.
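A minimal sketch of the recipe's first half: simulate Monte Carlo paths from a nominal linear-Gaussian model and train a network to map a window of noisy observations to the current state (the robustness test against a perturbed actual model is omitted). Sizes and noise levels are illustrative:

```python
import numpy as np
import torch
import torch.nn as nn

# Nominal model: x_t = a*x_{t-1} + q*w_t, observed as y_t = x_t + r*v_t.
rng = np.random.default_rng(0)
a, q, r, T, n_paths, win = 0.95, 0.1, 0.5, 60, 2000, 20

x = np.zeros((n_paths, T))
for t in range(1, T):
    x[:, t] = a * x[:, t - 1] + q * rng.standard_normal(n_paths)
y = x + r * rng.standard_normal(x.shape)            # noisy observations

# training pairs: last `win` observations -> current state
Y = torch.tensor(y[:, T - win:T], dtype=torch.float32)
X = torch.tensor(x[:, T - 1:T], dtype=torch.float32)

net = nn.Sequential(nn.Linear(win, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):                                # least-squares training
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(Y), X)
    loss.backward()
    opt.step()
print(loss.item())                                  # training MSE of the learned filter
```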
Multimodal Deep Generative Models for Trajectory Prediction A Conditional Variational Autoencoder Approach ; Human behavior prediction models enable robots to anticipate how humans may react to their actions, and hence are instrumental to devising safe and proactive robot planning algorithms. However, modeling complex interaction dynamics and capturing the many possible outcomes in such interactive settings is very challenging, which has recently prompted the study of several different approaches. In this work, we provide a selfcontained tutorial on a conditional variational autoencoder CVAE approach to human behavior prediction which, at its core, can produce a multimodal probability distribution over future human trajectories conditioned on past interactions and candidate robot future actions. Specifically, the goals of this tutorial paper are to review and build a taxonomy of stateoftheart methods in human behavior prediction, from physicsbased to purely datadriven methods, provide a rigorous yet easily accessible description of a datadriven, CVAEbased approach, highlight important design characteristics that make this an attractive model to use in the context of modelbased planning for humanrobot interactions, and provide important design considerations when using this class of models.
Evaluating the Impact of Knowledge Graph Context on Entity Disambiguation Models ; Pretrained Transformer models have emerged as stateoftheart approaches that learn contextual information from text to improve the performance of several NLP tasks. These models, albeit powerful, still require specialized knowledge in specific scenarios. In this paper, we argue that context derived from a knowledge graph (in our case Wikidata) provides enough signals to inform pretrained transformer models and improve their performance for named entity disambiguation NED on the Wikidata KG. We further hypothesize that our proposed KG context can be standardized for Wikipedia, and we evaluate the impact of KG context on a stateoftheart NED model for the Wikipedia knowledge base. Our empirical results validate that the proposed KG context can be generalized for Wikipedia, and providing KG context in transformer architectures considerably outperforms the existing baselines, including the vanilla transformer models.
LSTM Acoustic Models Learn to Align and Pronounce with Graphemes ; Automated speech recognition coverage of the world's languages continues to expand. However, standard phoneme based systems require handcrafted lexicons that are difficult and expensive to obtain. To address this problem, we propose a training methodology for a graphemebased speech recognizer that can be trained in a purely datadriven fashion. Built with LSTM networks and trained with the crossentropy loss, the graphemeoutput acoustic models we study are also extremely practical for realworld applications as they can be decoded with conventional ASR stack components such as language models and FST decoders, and produce good quality audiotographeme alignments that are useful in many speech applications. We show that the grapheme models are competitive in WER with their phonemeoutput counterparts when trained on large datasets, with the advantage that grapheme models do not require explicit linguistic knowledge as an input. We further compare the alignments generated by the phoneme and grapheme models to demonstrate the quality of the pronunciations learnt by them using four Indian languages that vary linguistically in spoken and written forms.
Language Models as FewShot Learner for TaskOriented Dialogue Systems ; Taskoriented dialogue systems use four connected modules, namely Natural Language Understanding NLU, Dialogue State Tracking DST, Dialogue Policy DP and Natural Language Generation NLG. A research challenge is to learn each module with the least amount of samples, i.e., fewshots, given the high cost related to data collection. The most common and effective technique to solve this problem is transfer learning, where large language models, either pretrained on text or taskspecific data, are finetuned on the few samples. These methods require finetuning steps and a set of parameters for each task. In contrast, language models such as GPT2 (Radford et al., 2019) and GPT3 (Brown et al., 2020) allow fewshot learning by priming the model with a few examples. In this paper, we evaluate the priming fewshot ability of language models in the NLU, DST, DP and NLG tasks. Importantly, we highlight the current limitations of this approach, and we discuss the possible implications for future work.
A Deep Dive into Adversarial Robustness in ZeroShot Learning ; Machine learning ML systems have introduced significant advances in various fields, due to the introduction of highly complex models. Despite their success, it has been shown multiple times that machine learning models are prone to imperceptible perturbations that can severely degrade their accuracy. So far, existing studies have primarily focused on models where supervision across all classes is available. In contrast, Zeroshot Learning ZSL and Generalized Zeroshot Learning GZSL tasks inherently lack supervision across all classes. In this paper, we present a study aimed at evaluating the adversarial robustness of ZSL and GZSL models. We leverage the wellestablished label embedding model and subject it to a set of established adversarial attacks and defenses across multiple datasets. In addition to creating possibly the first benchmark on the adversarial robustness of ZSL models, we also present analyses of important points that require attention for a better interpretation of ZSL robustness results. We hope these points, along with the benchmark, will help researchers establish a better understanding of what challenges lie ahead and help guide their work.
Symmetry enhancement in a twologarithm matrix model and the canonical tensor model ; I study a onematrix model of a real symmetric matrix with a potential which is a sum of two logarithmic functions and a harmonic one. This twologarithm matrix model is the absolute square norm of a toy wave function which is obtained by replacing the tensor argument of the wave function of the canonical tensor model CTM with a matrix. I discuss a symmetry enhancement phenomenon in this matrix model and show that symmetries and dimensions of emergent spaces are stable only in a phase which exists exclusively for the positive cosmological constant case in the sense of CTM. This would imply the importance of the positivity of the cosmological constant in the emergence phenomena in CTM.
Shallow Water Moment models for bedload transport problems ; In this work a simple but accurate shallow model for bedload sediment transport is proposed. The model is based on applying the moment approach to the Shallow Water Exner model, making it possible to recover the vertical structure of the flow. This approach allows us to obtain a better approximation of the fluid velocity close to the bottom, which is the relevant velocity for sediment transport. A general Shallow Water Exner moment model allowing for polynomial velocity profiles of arbitrary order is obtained. A regularization ensures hyperbolicity and easy computation of the eigenvalues. The system is solved by means of an adapted IFCP scheme proposed here. The improvement of this IFCPtype scheme is based on the approximation of the eigenvalue associated with the sediment transport. Numerical tests dealing with long and short time scales are presented. The proposed model allows one to obtain the vertical structure of the fluid, which results in a better description of the bedload transport of the sediment layer.
Transformer based Multilingual document Embedding model ; One of the current stateoftheart multilingual document embedding models, LASER, is based on the bidirectional LSTM neural machine translation model. This paper presents a transformerbased sentencedocument embedding model, TLASER, which makes three significant improvements. Firstly, the BiLSTM layers are replaced by attentionbased transformer layers, which are more capable of learning sequential patterns in longer texts. Secondly, due to the absence of recurrence, TLASER enables faster parallel computations in the encoder to generate the text embedding. Thirdly, we augment the NMT translation loss function with an additional novel distance constraint loss. This distance constraint loss further brings the embeddings of parallel sentences close together in the vector space; we call the TLASER model trained with the distance constraint cTLASER. Our cTLASER model significantly outperforms both the BiLSTMbased LASER and the simpler transformerbased TLASER.
Comparison of Two Analytic Energy Balance Models Shows Stable Partial Ice Cover Possible for Any Obliquity ; In this study, we compare two analytic energy balance models with explicit dependence on obliquity to study the likelihood of different stable ice configurations. We compare the results of models with different methods of heat transport and different insolation distributions. We show that stable partial ice cover is possible for any obliquity, provided the insolation distribution is sufficiently accurate. Additionally, we quantify the severity of the transition to the Snowball state as different model parameters are varied. In accordance with an earlier study, transitions to the Snowball state are more severe for higher values of the albedo contrast and energy transport across latitudes in both models; however, we find that the Snowball transition is not equally likely across both models. This work is general enough to apply to any rapidly rotating planet and could be used to study the likelihood of Snowball transitions on planets within the habitable region of other stars.
Discovering Useful Sentence Representations from Large Pretrained Language Models ; Despite the extensive success of pretrained language models as encoders for building NLP systems, they have not seen prominence as decoders for sequence generation tasks. We explore the question of whether these models can be adapted to be used as universal decoders. To be considered universal, a decoder must have an implicit representation for any target sentence s, such that it can recover that sentence exactly when conditioned on its representation. For large transformerbased language models trained on vast amounts of English text, we investigate whether such representations can be easily discovered using standard optimization methods. We present and compare three representation injection techniques for transformerbased models and three accompanying methods which map sentences to and from this representation space. Experiments show that representations exist for sentences from a variety of genres. More importantly, without needing complex optimization algorithms, our methods recover these sentences almost perfectly without finetuning the underlying language model at all.
Perceptual underwater image enhancement with deep learning and physical priors ; Underwater image enhancement, as a preprocessing step to improve the accuracy of the following object detection task, has drawn considerable attention in the field of underwater navigation and ocean exploration. However, most of the existing underwater image enhancement strategies tend to consider enhancement and detection as two independent modules with no interaction, and the practice of separate optimization does not always help the underwater object detection task. In this paper, we propose two perceptual enhancement models, each of which uses a deep enhancement model with a detection perceptor. The detection perceptor provides coherent information in the form of gradients to the enhancement model, guiding the enhancement model to generate images that are visually pleasing at the patch level or favourable for detection. In addition, due to the lack of training data, a hybrid underwater image synthesis model, which fuses physical priors and datadriven cues, is proposed to synthesize training data and generalise our enhancement model for realworld underwater images. Experimental results show the superiority of our proposed method over several stateoftheart methods on both realworld and synthetic underwater datasets.