When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models ; Transfer learning based on pretraining language models on a large amount of raw data has become a new norm to reach state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied to unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high-resource languages, whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages. Transliterating those languages significantly improves the ability of large-scale multilingual language models on downstream tasks.
Differentially Private Gradient Expectation Maximization Algorithm with Statistical Guarantees ; Gradient Expectation Maximization (EM) is a widely used algorithm for estimating the maximum likelihood of mixture models or incomplete data problems. A major challenge facing this popular technique is how to effectively preserve the privacy of sensitive data. Previous research on this problem has already led to the discovery of some Differentially Private (DP) algorithms for Gradient EM. However, unlike in the non-private case, existing techniques are not yet able to provide finite-sample statistical guarantees. To address this issue, we propose in this paper the first DP version of the Gradient EM algorithm with statistical guarantees. Moreover, we apply our general framework to three canonical models: the Gaussian Mixture Model (GMM), the Mixture of Regressions Model (MRM), and Linear Regression with Missing Covariates (RMC). Specifically, for GMM in the DP model, our estimation error is near optimal in some cases. For the other two models, we provide the first finite-sample statistical guarantees. Our theory is supported by thorough numerical experiments.
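The core mechanism described above, clipping per-sample gradients in the M-step and adding Gaussian noise before updating the parameters, can be illustrated with a minimal sketch: one DP gradient-EM update for the means of a one-dimensional two-component GMM. The clipping norm, noise scale, and learning rate here are illustrative assumptions, not the paper's calibrated privacy budget; mixing weights and variance are held fixed for simplicity.

```python
import numpy as np

def dp_gradient_em_step(x, mu, sigma2, pi, lr=0.1, clip=1.0, noise_std=0.5, rng=None):
    """One differentially private gradient-EM update for the means of a
    1-D two-component GMM (variance and mixing weights held fixed)."""
    rng = np.random.default_rng(rng)
    # E-step: posterior responsibilities of each component for each sample
    p0 = pi[0] * np.exp(-(x - mu[0]) ** 2 / (2 * sigma2))
    p1 = pi[1] * np.exp(-(x - mu[1]) ** 2 / (2 * sigma2))
    r1 = p1 / (p0 + p1)
    r = np.stack([1.0 - r1, r1], axis=1)            # shape (n, 2)
    # Per-sample gradients of the Q-function w.r.t. each mean
    g = r * (x[:, None] - mu[None, :]) / sigma2     # shape (n, 2)
    # DP step: clip each per-sample gradient, then noise the averaged gradient
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noisy_grad = g.mean(axis=0) + rng.normal(0.0, noise_std * clip / len(x), size=2)
    return mu + lr * noisy_grad
```

With well-separated components, iterating this update recovers the component means up to sampling error; the added noise (scaled by the clipping norm over the sample size) is what buys the differential-privacy guarantee.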
f(R,T) Gravity Model Behaving as a Dark Energy Source ; Within the limits of the present cosmological observations in f(R,T) gravity theory, we have analyzed a spherically symmetric spacetime in a 5D setting. The field equations have been carefully studied under reasonable cosmological assumptions to obtain exact solutions. We have obtained an isotropic model universe undergoing super-exponential expansion. It is predicted that the model universe behaves like a dark energy (vacuum energy) model. In the present scenario, the model evolves with a slow and uniform change of shape. It is observed that the universe is close to or nearly flat. The model is free from an initial singularity and is predicted to approach the de Sitter phase, dominated by vacuum energy or a cosmological constant, in the finite-time future. A comprehensive discussion of the cosmological parameters obtained, in view of the recent studies, is presented in detail with graphs.
Learning Financial Asset-Specific Trading Rules via Deep Reinforcement Learning ; Generating asset-specific trading signals based on the financial conditions of the assets is one of the challenging problems in automated trading. Various asset trading rules have been proposed experimentally based on different technical analysis techniques. However, even when such trading strategies are profitable, extracting new asset-specific trading rules from vast historical data to increase total return and decrease the risk of portfolios is difficult for human experts. Recently, various deep reinforcement learning (DRL) methods have been employed to learn new trading rules for each asset. In this paper, a novel DRL model with various feature extraction modules is proposed. The effect of different input representations on the performance of the models is investigated, and the performance of DRL-based models in different markets and asset situations is studied. The proposed model outperformed the other state-of-the-art models in learning single asset-specific trading rules, obtaining a total return of almost 262% on a specific asset over two years, while the best state-of-the-art model achieved 78% on the same asset in the same period.
Analysis of high-order velocity moments in a strained channel flow ; In the current study, model expressions for fifth-order velocity moments obtained from the truncated Gram-Charlier series expansion model of a turbulent flow field probability density function are validated using data from direct numerical simulation (DNS) of a planar turbulent flow in a strained channel. The simplicity of the model expressions, the lack of unknown coefficients, and their applicability to non-Gaussian turbulent flows make this approach attractive for closing turbulence models based on the Reynolds-averaged Navier-Stokes equations. The study confirms the validity of the model expressions. It also shows that the imposed flow deformation improves the agreement between the model and DNS profiles for the fifth-order moments in the flow buffer zone, including when the flow reverses its direction. The study reveals the particular sensitivity of odd velocity moments to the grid resolution. A new length scale is proposed as a criterion for grid generation near the wall and in other flow areas dominated by high mean velocity gradients when higher-order statistics have to be collected from DNS.
Towards Lefschetz thimbles in Sigma models, I ; We study two-dimensional path integral Lefschetz thimbles, i.e., the possible path-integration contours. Specifically, in the examples of the O(N) and CP^(N-1) models, we find a large class of complex critical points of the sigma model actions which are relevant for the theory in finite volume at finite temperature, with various chemical potentials corresponding to the symmetries of the models. In this paper we discuss the case of the O(2m) and the CP^(N-1) models in the sector of zero instanton charge, as well as some solutions of the O(2m+1) model. The CP^(N-1) model for all instanton charges and a more general class of solutions of the O(N) model with odd N will be discussed in the forthcoming paper.
Bayes-Adaptive Deep Model-Based Policy Optimisation ; We introduce a Bayesian deep model-based reinforcement learning method (RoMBRL) that can capture model uncertainty to achieve sample-efficient policy optimisation. We propose to formulate the model-based policy optimisation problem as a Bayes-adaptive Markov decision process (BAMDP). RoMBRL maintains model uncertainty via belief distributions through a deep Bayesian neural network whose samples are generated via stochastic gradient Hamiltonian Monte Carlo. Uncertainty is propagated through simulations controlled by sampled models and history-based policies. As beliefs are encoded in visited histories, we propose a history-based policy network that can be trained end-to-end to generalise across the history space and is trained using recurrent Trust-Region Policy Optimisation. We show that RoMBRL outperforms existing approaches on many challenging control benchmark tasks in terms of sample complexity and task performance. The source code of this paper is publicly available at https://github.com/thobotics/RoMBRL.
A Continuous Variable Born Machine ; Generative modelling has become a promising use case for near-term quantum computers. In particular, due to the fundamentally probabilistic nature of quantum mechanics, quantum computers naturally model and learn probability distributions, perhaps more efficiently than can be achieved classically. The Born machine is an example of such a model, easily implemented on near-term quantum computers. However, in its original form, the Born machine only naturally represents discrete distributions. Since probability distributions of a continuous nature are commonplace in the world, it is essential to have a model which can efficiently represent them. Some proposals have been made in the literature to supplement the discrete Born machine with extra features to more easily learn continuous distributions; however, all invariably increase the resources required to some extent. In this work, we present the continuous variable Born machine, built on the alternative architecture of continuous variable quantum computing, which is much more suitable for modelling such distributions in a resource-minimal way. We provide numerical results indicating the model's ability to learn both quantum and classical continuous distributions, including in the presence of noise.
Nowcasting Growth using Google Trends Data: A Bayesian Structural Time Series Model ; This paper investigates the benefits of internet search data in the form of Google Trends for nowcasting real U.S. GDP growth in real time through the lens of mixed-frequency Bayesian Structural Time Series (BSTS) models. We augment and enhance both the model and the methodology to make them better suited to nowcasting with a large number of potential covariates. Specifically, we allow shrinking state variances towards zero to avoid overfitting, extend the SSVS (spike-and-slab variable selection) prior to the more flexible normal-inverse-gamma prior which stays agnostic about the underlying model size, and adapt the horseshoe prior to the BSTS. The application to nowcasting GDP growth as well as a simulation study demonstrate that the horseshoe prior BSTS improves markedly upon the SSVS and the original BSTS model, with the largest gains in dense data-generating processes. Our application also shows that a large-dimensional set of search terms is able to improve nowcasts early in a specific quarter, before other macroeconomic data become available. Search terms with high inclusion probability have good economic interpretation, reflecting leading signals of economic anxiety and wealth effects.
Helioseismic Modeling of Background Flows ; We present a 3-dimensional (3D) numerical solver of the linearized compressible Euler equations (GALE: Global Acoustic Linearized Euler), used to model acoustic oscillations throughout the solar interior. The governing equations are solved in conservation form on a fully global spherical mesh (0 ≤ φ ≤ 2π, 0 ≤ θ ≤ π, 0 ≤ r ≤ R_⊙) over a background state generated by the standard Solar Model S. We implement an efficient pseudospectral computational method to calculate the contribution of the compressible material derivative dyad to internal velocity perturbations, computing oscillations over arbitrary 3D background velocity fields. This model offers a foundation for a forward-modeling approach, using helioseismology techniques to explore various regimes of internal mass flows. We demonstrate the efficacy of the numerical method presented in this paper by reproducing observed solar power spectra, showing rotational splitting due to differential rotation, and applying local helioseismology techniques to measure travel times created by a simple model of single-cell meridional circulation.
Exploring the Predictability of Cryptocurrencies via Bayesian Hidden Markov Models ; In this paper, we consider a variety of multi-state Hidden Markov models for predicting and explaining the Bitcoin, Ether and Ripple returns in the presence of state (regime) dynamics. In addition, we examine the effects of several financial, economic and cryptocurrency-specific predictors on the cryptocurrency return series. Our results indicate that the Non-Homogeneous Hidden Markov (NHHM) model with four states has the best one-step-ahead forecasting performance among all competing models for all three series. The dominance of the predictive densities over the single-regime random walk model relies on the fact that the states capture alternating periods with distinct return characteristics. In particular, the four-state NHHM model distinguishes bull, bear and calm regimes for the Bitcoin series, and periods with different profit and risk magnitudes for the Ether and Ripple series. Also, conditionally on the hidden states, it identifies predictors with different linear and nonlinear effects on the cryptocurrency returns. These empirical findings provide important insights for portfolio management and policy implementation.
Learning Bayes Filter Models for Tactile Localization ; Localizing and tracking the pose of robotic grippers are necessary skills for manipulation tasks. However, manipulators with imprecise kinematic models (e.g., low-cost arms) or manipulators with unknown world coordinates (e.g., poor camera-arm calibration) cannot locate the gripper with respect to the world. In these circumstances, we can leverage tactile feedback between the gripper and the environment. In this paper, we present learnable Bayes filter models that can localize robotic grippers using tactile feedback. We propose a novel observation model that conditions the tactile feedback on visual maps of the environment, along with a motion model, to recursively estimate the gripper's location. Our models are trained in simulation with self-supervision and transferred to the real world. Our method is evaluated on a tabletop localization task in which the gripper interacts with objects. We report results in simulation and on a real robot, generalizing over different sizes, shapes, and configurations of the objects.
IGSQL: Database Schema Interaction Graph Based Neural Model for Context-Dependent Text-to-SQL Generation ; The context-dependent text-to-SQL task has drawn much attention in recent years. Previous models for this task concentrate only on utilizing historical user inputs. In this work, in addition to using encoders to capture the historical information of user inputs, we propose a database schema interaction graph encoder to utilize the historical information of database schema items. In the decoding phase, we introduce a gate mechanism to weigh the importance of different vocabularies and then make the prediction of SQL tokens. We evaluate our model on the benchmark SParC and CoSQL datasets, which are two large, complex, context-dependent, cross-domain text-to-SQL datasets. Our model outperforms the previous state-of-the-art model by a large margin and achieves new state-of-the-art results on the two datasets. The comparison and ablation results demonstrate the efficacy of our model and the usefulness of the database schema interaction graph encoder.
Weak Identification in Discrete Choice Models ; We study the impact of weak identification in discrete choice models, and provide insights into the determinants of identification strength in these models. Using these insights, we propose a novel test that can consistently detect weak identification in commonly applied discrete choice models, such as probit, logit, and many of their extensions. Furthermore, we demonstrate that when the null hypothesis of weak identification is rejected, Wald-based inference can be carried out using standard formulas and critical values. A Monte Carlo study compares our proposed testing approach against commonly applied weak identification tests. The results simultaneously demonstrate the good performance of our approach and the fundamental failure of using conventional weak identification tests for linear models in the discrete choice model context. Furthermore, we compare our approach against those commonly applied in the literature in two empirical examples: married women's labor force participation, and US food aid and civil conflicts.
The Role of Stochasticity in Noise-Induced Tipping Point Cascades: A Master Equation Approach ; Tipping points have been shown to be ubiquitous, both in models and empirically, in a range of physical and biological systems. The question of how tipping points cascade through systems has been less well studied and is an important one. A study of noise-induced tipping, in particular, could provide key insights into tipping cascades. Here, we consider a specific example of a simple model system that could have cascading tipping points. This model consists of two interacting populations with underlying Allee effects and stochastic dynamics, in separate patches connected by dispersal, which can generate bistability. From an ecological standpoint, we look for rescue effects, whereby one population can prevent the collapse of a second population. As a way to investigate the stochastic dynamics, we use an individual-based modeling approach rooted in chemical reaction network theory. Then, using continuous-time Markov chains and the theory of first passage times, we essentially approximate, or emulate, the original high-dimensional model by a Markov chain with just four states, where each state corresponds to a combination of population thresholds. Analysis of this reduced model shows when the system is likely to recover, as well as when tipping cascades through the whole system.
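The four-state emulation described above can be sketched concretely: encode each state as a pair of patch thresholds (up/down), write down a continuous-time Markov chain generator, and solve a linear system for the mean first-passage time to full collapse. The transition rates below (per-patch tipping, faster tipping once the neighbour has collapsed, dispersal-driven rescue) are illustrative placeholders, not values from the paper.

```python
import numpy as np

# States: 0 = both patches up, 1 = A up / B down, 2 = A down / B up,
# 3 = both down (absorbing). Rates are illustrative, not fitted.
tip, cascade_tip, rescue = 0.05, 0.15, 0.20

Q = np.array([
    [-2 * tip,                 tip,                       tip,                       0.0        ],
    [rescue,  -(rescue + cascade_tip),                    0.0,                       cascade_tip],
    [rescue,                   0.0,      -(rescue + cascade_tip),                    cascade_tip],
    [0.0,                      0.0,                       0.0,                       0.0        ],
])  # generator matrix: rows sum to zero, state 3 is absorbing

# Mean first-passage time to total collapse from each transient state:
# solve (-Q_TT) t = 1 over the 3x3 transient block.
mfpt = np.linalg.solve(-Q[:3, :3], np.ones(3))
```

By symmetry the two one-patch-down states share the same expected collapse time, and the both-up state survives longest; raising `rescue` relative to `cascade_tip` lengthens all three, which is the rescue effect the abstract looks for.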
condLSTM-Q: A novel deep learning model for predicting COVID-19 mortality at a fine geographical scale ; Predictive models with a focus on different spatial-temporal scales benefit governments and healthcare systems in combating the COVID-19 pandemic. Here we present the conditional Long Short-Term Memory network with Quantile output (condLSTM-Q), a well-performing model for making quantile predictions of COVID-19 death tolls at the county level with a two-week forecast window. This fine geographical scale is a rare but useful feature in publicly available predictive models, which would especially benefit state-level officials coordinating resources within a state. The quantile predictions from condLSTM-Q inform people about the distribution of the predicted death tolls, allowing better evaluation of the possible trajectories of severity. Given the scalability and generalizability of neural network models, this model could incorporate additional data sources with ease, and could be further developed to generate other useful predictions, such as new cases or hospitalizations, intuitively.
A New Paradigm for Water Level Regulation using a Three-Pond Model with a Fuzzy Inference System for Run-of-River Hydropower Plants ; The energy generation of a run-of-river hydropower plant depends upon the flow of the river, and variations in the water flow make the energy production unreliable. This problem is usually solved by constructing a small pond in front of the run-of-river hydropower plant. However, changes in the water level of the conventional single-pond model result in sags, surges, and unpredictable power fluctuations. This work proposes a three-pond model instead of the traditional single-pond model. The volume of water in the three ponds is volumetrically equivalent to the traditional single pond, but it reduces the dependency of the run-of-river power plant on the flow of the river. Moreover, the three-pond model absorbs water surges and disturbances more efficiently. The three-pond system, modeled as a nonlinear hydraulic three-tank system, is controlled with a fuzzy inference system and standard PID-based methods for smooth and efficient level regulation. The results of the fuzzy inference system show across-the-board improvements in regulation and disturbance handling compared to the conventional PID controller.
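The level-regulation problem above can be illustrated on its simplest building block: a single nonlinear tank with dh/dt = (q_in - a·sqrt(h)) / A, driven by a PI controller (a stripped-down stand-in for the PID and fuzzy controllers the paper compares). All gains and tank constants here are illustrative assumptions, not the paper's plant parameters.

```python
import math

# Single nonlinear tank: dh/dt = (q_in - a_out * sqrt(h)) / A_tank.
# A PI controller drives the inflow toward a level setpoint.
A_tank, a_out = 2.0, 0.5          # tank cross-section, outflow coefficient
kp, ki = 4.0, 0.8                 # PI gains (illustrative)
setpoint, h, integ, dt = 1.0, 0.2, 0.0, 0.05

for _ in range(2000):             # forward-Euler simulation, 100 time units
    err = setpoint - h
    integ += err * dt
    q_in = max(0.0, kp * err + ki * integ)   # pump cannot run backwards
    h = max(0.0, h + (q_in - a_out * math.sqrt(h)) * dt / A_tank)
```

At steady state the integral term alone must supply the outflow (ki·integ = a_out·sqrt(setpoint)), which is why the proportional term alone cannot hold the level; the three-tank plant couples three such equations through inter-pond flows.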
Input Convex Neural Networks for Building MPC ; Model Predictive Control (MPC) in buildings can significantly reduce their energy consumption. The cost and effort necessary for creating and maintaining first-principles models for buildings make data-driven modelling an attractive alternative in this domain. In MPC, the models form the basis for an optimization problem whose solution provides the control signals to be applied to the system. The fact that this optimization problem has to be solved repeatedly in real time implies restrictions on the learning architectures that can be used. Here, we adapt Input Convex Neural Networks, which are generally only convex for one-step predictions, for use in building MPC. We introduce additional constraints on their structure and weights to achieve a convex input-output relationship for multi-step-ahead predictions. We assess the consequences of the additional constraints for model accuracy and test the models in a real-life MPC experiment in an apartment in Switzerland. In two five-day cooling experiments, MPC with Input Convex Neural Networks is able to keep room temperatures within comfort constraints while minimizing cooling energy consumption.
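The structural constraint that makes such networks convex in their input can be shown in a few lines: every weight that multiplies a hidden activation is kept non-negative, and the activations are convex and non-decreasing, so the composition stays convex. The sketch below is a generic two-layer ICNN forward pass in NumPy (random, untrained weights), not the paper's building model or its multi-step extension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal input convex neural network (ICNN):
#   z1 = relu(W0 x + b0)
#   z2 = relu(Wz z1 + Wx x + b1)
#   y  = wz . z2 + wx . x
# Convexity in x holds because Wz and wz (the weights applied to hidden
# activations) are non-negative and relu is convex and non-decreasing.
d, hdim = 3, 16
W0, b0 = rng.normal(size=(hdim, d)), rng.normal(size=hdim)
Wz = np.abs(rng.normal(size=(hdim, hdim)))   # constrained non-negative
Wx, b1 = rng.normal(size=(hdim, d)), rng.normal(size=hdim)
wz = np.abs(rng.normal(size=hdim))           # constrained non-negative
wx = rng.normal(size=d)

def icnn(x):
    z1 = np.maximum(0.0, W0 @ x + b0)
    z2 = np.maximum(0.0, Wz @ z1 + Wx @ x + b1)
    return wz @ z2 + wx @ x
```

Because the output is convex in the input, the MPC optimization over control inputs becomes a convex problem, which is what makes repeated real-time solution tractable.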
Viable Curvaton Models from the f_NL Parameter ; We show how to build a curvaton inflationary model motivated by scale-dependent non-Gaussianities of cosmological perturbations. In particular, we study the change of sign in the f_NL parameter as a function of the curvaton field value at horizon crossing and identify it with the cosmic microwave background pivot scale. We devise a procedure to recover the curvaton model that provides the desired f_NL parameter. We then present a concrete example of f_NL and construct its parent model. We study the constraints applied to this model based on considerations taken on f_NL. We show that the hemispherical asymmetry can also be used to constrain the scale-dependence of f_NL and the model parameters.
StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling ; There are two major classes of natural language grammar: the dependency grammar, which models one-to-one correspondences between words, and the constituency grammar, which models the assembly of one or several corresponding words. While previous unsupervised parsing methods mostly focus on inducing only one class of grammars, we introduce a novel model, StructFormer, that can simultaneously induce dependency and constituency structure. To achieve this, we propose a new parsing framework that can jointly generate a constituency tree and a dependency graph. We then integrate the induced dependency relations into the transformer, in a differentiable manner, through a novel dependency-constrained self-attention mechanism. Experimental results show that our model can achieve strong results on unsupervised constituency parsing, unsupervised dependency parsing, and masked language modeling at the same time.
Investigating two super-resolution methods for downscaling precipitation: ESRGAN and CAR ; In an effort to provide optimal inputs to downstream modeling systems (e.g., a hydrodynamics model that simulates the water circulation of a lake), we hereby strive to enhance the resolution of precipitation fields from a weather model by up to 9x. We test two super-resolution models: the enhanced super-resolution generative adversarial networks (ESRGAN), proposed in 2017, and the content adaptive resampler (CAR), proposed in 2020. Both models outperform simple bicubic interpolation, with the ESRGAN exceeding expectations for accuracy. We make several proposals for extending the work to ensure it can be a useful tool for quantifying the impact of climate change on local ecosystems while removing reliance on energy-intensive, high-resolution weather model simulations.
Relational Learning for Skill Preconditions ; To determine if a skill can be executed in any given environment, a robot needs to learn the preconditions for the skill. As robots begin to operate in dynamic and unstructured environments, precondition models will need to generalize to a variable number of objects with different shapes and sizes. In this work, we focus on learning precondition models for manipulation skills in unconstrained environments. Our work is motivated by the intuition that many complex manipulation tasks, with multiple objects, can be simplified by focusing on less complex pairwise object relations. We propose an object-relation model that learns continuous representations for these pairwise object relations. Our object-relation model is trained completely in simulation, and once learned, is used by a separate precondition model to predict skill preconditions for real-world tasks. We evaluate our precondition model on three different manipulation tasks: sweeping, cutting, and unstacking. We show that our approach leads to significant improvements in predicting preconditions for all three tasks, across objects of different shapes and sizes.
Spatial-Temporal Alignment Network for Action Recognition and Detection ; This paper studies how to introduce viewpoint-invariant feature representations that can help action recognition and detection. Although we have witnessed great progress in action recognition in the past decade, it remains challenging yet interesting how to efficiently model the geometric variations in large-scale datasets. This paper proposes a novel Spatial-Temporal Alignment Network (STAN) that aims to learn geometric invariant representations for action recognition and action detection. The STAN model is very lightweight and generic, and it can be plugged into existing action recognition models like ResNet3D and SlowFast with a very low extra computational cost. We test our STAN model extensively on the AVA, Kinetics-400, AVA-Kinetics, Charades, and Charades-Ego datasets. The experimental results show that the STAN model can consistently improve the state of the art in both action detection and action recognition tasks. We will release our data, models and code.
Reciprocal Supervised Learning Improves Neural Machine Translation ; Despite the recent success in image classification, self-training has achieved only limited gains on structured prediction tasks such as neural machine translation (NMT). This is mainly due to the compositionality of the target space, where far-away prediction hypotheses lead to the notorious reinforced mistake problem. In this paper, we revisit the utilization of multiple diverse models and present a simple yet effective approach named Reciprocal-Supervised Learning (RSL). RSL first exploits individual models to generate pseudo-parallel data, and then cooperatively trains each model on the combined synthetic corpus. RSL leverages the fact that different parameterized models have different inductive biases, and better predictions can be made by jointly exploiting the agreement among them. Unlike previous knowledge distillation methods built upon a much stronger teacher, RSL is capable of boosting the accuracy of one model by introducing other comparable or even weaker models. RSL can also be viewed as a more efficient alternative to ensembling. Extensive experiments demonstrate the superior performance of RSL on several benchmarks with significant margins.
Concept Drift and Covariate Shift Detection Ensemble with Lagged Labels ; In model serving, keeping one fixed model during the entire (often lifelong) inference process is usually detrimental to model performance, as the data distribution evolves over time, making a model trained on historical data unreliable. It is important to detect changes and retrain the model in time. Existing methods generally have three weaknesses: (1) using only the classification error rate as a signal, (2) assuming ground truth labels are immediately available after features from samples are received, and (3) being unable to decide what data to use to retrain the model when a change occurs. We address the first problem by utilizing six different signals to capture a wide range of characteristics of the data, and we address the second problem by allowing a lag of labels, where the labels of corresponding features are received after a lag in time. For the third problem, our proposed method automatically decides what data to use for retraining based on the signals. Extensive experiments on structured and unstructured data for different types of data changes establish that our method consistently outperforms the state-of-the-art methods by a large margin.
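The idea of combining an error-rate signal with distributional signals can be sketched with a two-signal detector: an empirical two-sample Kolmogorov-Smirnov distance on a feature (covariate shift) plus a jump in error rate (concept drift). This is a generic illustration of the signal-ensemble idea, not the paper's six-signal method, and the thresholds are arbitrary assumptions.

```python
import numpy as np

def ks_distance(a, b):
    """Empirical two-sample Kolmogorov-Smirnov statistic (no SciPy needed)."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def drift_detected(ref_x, cur_x, ref_err, cur_err, ks_thresh=0.3, err_thresh=0.1):
    """Flag drift if either the feature distribution shifts (covariate shift)
    or the error rate jumps (concept drift). Thresholds are illustrative."""
    covariate_shift = ks_distance(ref_x, cur_x) > ks_thresh
    concept_drift = (cur_err - ref_err) > err_thresh
    return bool(covariate_shift or concept_drift)

rng = np.random.default_rng(0)
same = drift_detected(rng.normal(0, 1, 500), rng.normal(0, 1, 500), 0.10, 0.12)
shift = drift_detected(rng.normal(0, 1, 500), rng.normal(2, 1, 500), 0.10, 0.12)
```

Note that the KS signal needs no labels at all, which is exactly what makes distributional signals useful when labels arrive with a lag.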
Fusing Context Into Knowledge Graph for Commonsense Question Answering ; Commonsense question answering (QA) requires a model to grasp commonsense and factual knowledge to answer questions about world events. Many prior methods couple language modeling with knowledge graphs (KG). However, although a KG contains rich structural information, it lacks the context to provide a more precise understanding of the concepts. This creates a gap when fusing knowledge graphs into language modeling, especially when there is insufficient labeled data. Thus, we propose to employ external entity descriptions to provide contextual information for knowledge understanding. We retrieve descriptions of related concepts from Wiktionary and feed them as additional input to pre-trained language models. The resulting model achieves state-of-the-art results on the CommonsenseQA dataset and the best result among non-generative models on OpenBookQA.
A modified Susceptible-Infected-Recovered model for observed under-reported incidence data ; Fitting Susceptible-Infected-Recovered (SIR) models to incidence data is problematic when not all infected individuals are reported. Assuming an underlying SIR model with a general but known distribution for the time to recovery, this paper derives the implied differential-integral equations for observed incidence data when a fixed fraction of newly infected individuals is not observed. The parameters of the resulting system of differential equations are identifiable. Using these differential equations, we develop a stochastic model for the conditional distribution of current disease incidence given the entire past history of reported cases. We estimate the model parameters using Bayesian Markov chain Monte Carlo sampling of the posterior distribution. We use our model to estimate the transmission rate and fraction of asymptomatic individuals for the current Coronavirus 2019 outbreak in eight American countries (the United States of America, Brazil, Mexico, Argentina, Chile, Colombia, Peru, and Panama) from January 2020 to May 2021. Our analysis reveals that, consistently, about 40-60% of the infections were not observed in the American outbreaks. The two exceptions are Mexico and Peru, with acute under-reporting in Mexico.
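The basic under-reporting setup can be illustrated with a toy deterministic SIR simulation in which only a fixed fraction p of newly infected individuals is ever reported. This sketch uses exponentially distributed recovery (the paper allows a general recovery-time distribution) and illustrative parameter values, not the paper's estimates; it simply shows how reported incidence relates to true incidence.

```python
# Toy discrete-time (daily Euler) SIR with a fixed reporting fraction p.
# beta: transmission rate, gamma: recovery rate. Values are illustrative.
beta, gamma, p = 0.3, 0.1, 0.5
N = 1_000_000
S, I = N - 100.0, 100.0          # start with 100 infected

reported = []
for _ in range(300):
    new_inf = beta * S * I / N   # true new infections this day
    S -= new_inf
    I += new_inf - gamma * I
    reported.append(p * new_inf) # observed incidence undercounts by 1 - p

total_true = (N - 100.0) - S     # cumulative true infections (depletion of S)
total_reported = sum(reported)
```

By construction, cumulative reported cases equal p times cumulative true infections, which is why p is not identifiable from reported counts alone without the extra structure the paper derives.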
Empirical Analysis of the Unlabeled Entity Problem in Named Entity Recognition ; In many scenarios, named entity recognition (NER) models severely suffer from the unlabeled entity problem, where the entities of a sentence may not be fully annotated. Through empirical studies performed on synthetic datasets, we find two causes of performance degradation. One is the reduction of annotated entities, and the other is treating unlabeled entities as negative instances. The first cause has less impact than the second one and can be mitigated by adopting pre-trained language models. The second cause seriously misguides a model in training and greatly affects its performance. Based on the above observations, we propose a general approach which can almost eliminate the misguidance brought by unlabeled entities. The key idea is to use negative sampling, which, to a large extent, avoids training NER models with unlabeled entities. Experiments on synthetic and real-world datasets show that our model is robust to the unlabeled entity problem and surpasses prior baselines. On well-annotated datasets, our model is competitive with the state-of-the-art method.
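The negative-sampling idea above can be sketched at the span level: instead of treating every unannotated span of a sentence as a negative example, train on all annotated spans plus only a small random sample of the remaining spans, so an unannotated true entity is unlikely to be drawn as a negative. The sampling ratio below is a generic heuristic for illustration, not the paper's exact setting.

```python
import random

def sample_training_spans(n_tokens, annotated, neg_ratio=0.3, seed=0):
    """Return (positive spans, sampled negative spans) for one sentence.
    A span is a token-index pair (i, j) with i <= j. Only a random subset of
    unannotated spans is used as negatives, reducing the chance that an
    unlabeled true entity is treated as a negative instance."""
    annotated = set(annotated)
    all_spans = [(i, j) for i in range(n_tokens) for j in range(i, n_tokens)]
    unlabeled = [s for s in all_spans if s not in annotated]
    k = max(1, int(neg_ratio * n_tokens))   # sample ~neg_ratio * n negatives
    negatives = random.Random(seed).sample(unlabeled, min(k, len(unlabeled)))
    return sorted(annotated), negatives

pos, neg = sample_training_spans(10, [(0, 1), (4, 4)])
```

Since a sentence of n tokens has O(n^2) spans but only O(n) negatives are sampled, the probability that any particular unlabeled entity span ends up in the negative set shrinks as sentences grow, which is what makes the training robust to missing annotations.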
A Comparative Analysis of Ensemble Methods for Drug Design ; Quantitative structure-activity relationship (QSAR) is a computer modeling technique for identifying relationships between the structural properties of chemical compounds and biological activity. QSAR modeling is necessary for drug discovery, but it has many limitations. Ensemble-based machine learning approaches have been used to overcome these limitations and generate reliable predictions. Ensemble learning creates a set of diverse models and combines them. In our comparative analysis, each ensemble algorithm was paired with each of the basic algorithms, and the basic algorithms were also investigated separately. In this configuration, 57 algorithms were developed and compared on four different datasets. Thus, a technique for a complex ensemble method is proposed that builds diversified models and integrates them. Individually, the proposed models did not show impressive results, but they were considered the most important predictors when combined. We assessed whether ensembles always give better results than individual algorithms. The Python code written to obtain the experimental results in this article has been uploaded to GitHub: https://github.com/rifqat/ComparativeAnalysis.
Noise-Assisted Quantum Autoencoder ; The quantum autoencoder is an efficient variational quantum algorithm for quantum data compression. However, previous quantum autoencoders fail to compress and recover high-rank mixed states. In this work, we discuss the fundamental properties and limitations of the standard quantum autoencoder model in more depth, and provide an information-theoretic solution for its recovery fidelity. Based on this understanding, we present a noise-assisted quantum autoencoder algorithm that goes beyond these limitations; our model can achieve high recovery fidelity for general input states. Appropriate noise channels are used to make the input and output mixedness consistent, with the noise setup determined by the measurement results of the trash system. Compared with the original quantum autoencoder model, the measurement information is fully used in our algorithm. In addition to the circuit model, we design a noise-assisted adiabatic model of the quantum autoencoder that can be implemented on quantum annealers. We verified the validity of our methods by compressing the thermal states of the transverse-field Ising model and Werner states. For pure-state ensemble compression, we also introduce a projected quantum autoencoder algorithm.
PANTHER: Pathway Augmented Nonnegative Tensor factorization for HighER-order feature learning ; Genetic pathways usually encode molecular mechanisms that can inform targeted interventions. It is often challenging for existing machine learning approaches to jointly model genetic pathways (higher-order features) and variants (atomic features), and present to clinicians interpretable models. In order to build more accurate and better interpretable machine learning models for genetic medicine, we introduce Pathway Augmented Nonnegative Tensor factorization for HighER-order feature learning (PANTHER). PANTHER selects informative genetic pathways that directly encode molecular mechanisms. We apply genetically motivated constrained tensor factorization to group pathways in a way that reflects molecular mechanism interactions. We then train a softmax classifier for disease types using the identified pathway groups. We evaluated PANTHER against multiple state-of-the-art constrained tensor/matrix factorization models, as well as group-guided and Bayesian hierarchical models. PANTHER outperforms all state-of-the-art comparison models significantly (p < 0.05). Our experiments on large-scale Next Generation Sequencing (NGS) and whole-genome genotyping datasets also demonstrated wide applicability of PANTHER. We performed feature analysis in predicting disease types, which suggested insights and benefits of the identified pathway groups.
Learning Prediction Intervals for Model Performance ; Understanding model performance on unlabeled data is a fundamental challenge of developing, deploying, and maintaining AI systems. Model performance is typically evaluated using test sets or periodic manual quality assessments, both of which require laborious manual data labeling. Automated performance prediction techniques aim to mitigate this burden, but potential inaccuracy and a lack of trust in their predictions have prevented their widespread adoption. We address this core problem of performance prediction uncertainty with a method to compute prediction intervals for model performance. Our methodology uses transfer learning to train an uncertainty model to estimate the uncertainty of model performance predictions. We evaluate our approach across a wide range of drift conditions and show substantial improvement over competitive baselines. We believe this result makes prediction intervals, and performance prediction in general, significantly more practical for real-world use.
Duality between two generalized Aubry-Andre models with exact mobility edges ; A mobility edge (ME) in energy separating extended from localized states is a central concept in understanding various fundamental phenomena like the metal-insulator transition in disordered systems. In one-dimensional quasiperiodic systems, there exist a few models with exact MEs, and these models are beneficial for providing an exact understanding of ME physics. Here we investigate two widely studied models with exact MEs, one with an exponential hopping and one with a special form of incommensurate on-site potential. We analytically prove that the two models are mutually dual, and further give numerical verification by calculating the inverse participation ratio and Husimi function. The exact MEs of the two models are also obtained by calculating the localization lengths and using the duality relations. Our result may provide insight into realizing and observing exact MEs in both theory and experiment.
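The inverse participation ratio (IPR) used for numerical verification in the abstract is easy to compute for a quasiperiodic chain. As a sketch, the snippet below uses the standard Aubry-Andre model (the self-dual parent of such generalized models, with uniform hopping t and potential 2*lam*cos(2*pi*beta*n)), not the paper's two specific models; all parameter values are illustrative.

```python
import numpy as np

def aubry_andre_ipr(L=144, t=1.0, lam=2.0, beta=(np.sqrt(5) - 1) / 2, phi=0.0):
    """IPR of every eigenstate of the standard Aubry-Andre chain.

    H has nearest-neighbor hopping t and on-site potential
    2*lam*cos(2*pi*beta*n + phi). The IPR sum_n |psi_n|^4 scales as
    O(1/L) for extended states and stays O(1) for localized ones.
    """
    n = np.arange(L)
    H = np.diag(2 * lam * np.cos(2 * np.pi * beta * n + phi))
    H += t * (np.eye(L, k=1) + np.eye(L, k=-1))
    _, vecs = np.linalg.eigh(H)          # columns are normalized eigenvectors
    return np.sum(np.abs(vecs) ** 4, axis=0)

ipr_loc = aubry_andre_ipr(lam=2.0)   # localized phase (lam > t)
ipr_ext = aubry_andre_ipr(lam=0.2)   # extended phase (lam < t)
```

Comparing the mean IPR in the two phases reproduces the extended/localized distinction that, in models with an exact ME, occurs within a single spectrum.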
Probing Small-Scale Power Spectra with Pulsar Timing Arrays ; Models of Dark Matter (DM) can leave unique imprints on the Universe's small-scale structure by boosting density perturbations on small scales. We study the capability of Pulsar Timing Arrays (PTAs) to search for, and constrain, subhalos from such models. The models of DM we consider are ordinary adiabatic perturbations in ΛCDM, QCD axion miniclusters, models with early matter domination, and vector DM produced during inflation. We show that ΛCDM, largely due to tidal stripping effects in the Milky Way, is out of reach for PTAs, as well as for every other probe proposed to detect DM small-scale structure. Axion miniclusters may be within reach, although this depends crucially on whether the axion relic density is dominated by the misalignment or string contribution. Models where there is matter domination with a reheat temperature below 1 GeV may be observed with future PTAs. Lastly, vector DM produced during inflation can be detected if it is lighter than 10^{-16} GeV. We also make publicly available a Python Monte Carlo tool for generating the PTA time-delay signal from any model of DM substructure.
Three-dimensional modelling of accretion columns: spatial asymmetry and self-consistent simulations ; The paper presents the results of three-dimensional (3D) modelling of the structure and emission of accretion columns formed above the surface of accreting, strongly magnetized neutron stars, under circumstances when the pressure of the photons generated at the column base is sufficient to determine the dynamics of the plasma flow. On the basis of numerical radiation-hydrodynamic simulations, several 3D models of the accretion column are constructed. The first group of models contains spatially 3D columns. The corresponding calculations lead to distributions of the radiation flux over the sidewalls of the columns which are not characterized by axial symmetry. The second group includes self-consistent modelling of spectral radiative transfer and the two-dimensional spatial structure of the column, with both thermal and bulk Comptonization taken into account. The changes in the structure of the column and the shape of the X-ray continuum are investigated as functions of the physical parameters of the model.
Bambi: A simple interface for fitting Bayesian linear models in Python ; The popularity of Bayesian statistical methods has increased dramatically in recent years across many research areas and industrial applications. This is the result of a variety of methodological advances, faster and cheaper hardware, and the development of new software tools. Here we introduce an open-source Python package named Bambi (BAyesian Model-Building Interface) that is built on top of the PyMC probabilistic programming framework and the ArviZ package for exploratory analysis of Bayesian models. Bambi makes it easy to specify complex generalized linear hierarchical models using a formula notation similar to that found in R. We demonstrate Bambi's versatility and ease of use with a few examples spanning a range of common statistical models, including multiple regression, logistic regression, and mixed-effects modeling with crossed group-specific effects. Additionally, we discuss how automatic priors are constructed. Finally, we conclude with a discussion of our plans for the future development of Bambi.
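To make the underlying model class concrete: the simplest member of the family Bambi fits is Bayesian linear regression. Bambi/PyMC handle the general (non-conjugate, hierarchical) case by MCMC; the sketch below instead shows the closed-form conjugate posterior for the special case of a Gaussian likelihood with known noise variance and an isotropic normal prior, purely as an illustration of the Bayesian linear model itself, not of Bambi's API.

```python
import numpy as np

def bayes_linreg_posterior(X, y, sigma2=1.0, tau2=10.0):
    """Posterior over weights for y = X w + noise.

    Assumes w ~ N(0, tau2 * I) and known noise variance sigma2, so the
    posterior is Gaussian with the standard normal-normal conjugate update.
    """
    d = X.shape[1]
    precision = X.T @ X / sigma2 + np.eye(d) / tau2
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / sigma2
    return mean, cov

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])  # intercept + slope
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=1.0, size=200)
w_mean, w_cov = bayes_linreg_posterior(X, y)
```

With 200 observations the posterior mean lands close to the true coefficients (1.0, 2.0), and w_cov quantifies the remaining uncertainty that a point estimate would hide.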
Dynamical heterogeneities in non-entangled polystyrene and poly(ethylene oxide) star melts ; Star polymers can exhibit heterogeneous dynamical behavior due to their internal structure. In this work we employ atomistic molecular dynamics simulations to study translational motion in non-entangled polystyrene and poly(ethylene oxide) star-shaped melts. We focus on the local heterogeneous dynamics originating from the multi-arm star-like architecture and quantify the intramolecular dynamical gradient. By examining the translational motion at length scales of the order of the Kuhn length, we aim to find common features for both studied chemistries and to provide a critical and direct comparison with theoretical models of polymer dynamics. We discuss the observed tendencies with respect to the continuous Rouse model adjusted for star-like architectures. Two versions of the Rouse model are examined: one assuming uniform friction on every Rouse bead and another considering larger branch-point friction. Apart from chain connectivity between neighboring beads, both versions disregard interactions between the chains. Despite the tolerable description of the simulation data, neither model appears to reflect the mobility gradient accurately. The detailed quantitative atomistic models employed here bridge the gap between the theoretical and general, coarse-grained models of star-like polymers which lack the indispensable chemical details.
A transport approach to relate asymmetric protein segregation and population growth ; Many unicellular organisms allocate their key proteins asymmetrically between the mother and daughter cells, especially in a stressed environment. A recent theoretical model is able to predict when the asymmetry in segregation of key proteins enhances the population fitness, extrapolating the solution at two limits: where the segregation is perfectly asymmetric (asymmetry a = 1) and where the asymmetry is small (0 ≤ a ≪ 1). We generalize the model by introducing stochasticity and use a transport equation to obtain a self-consistent equation for the population growth rate and the distribution of the amount of key proteins. We provide two ways of solving the self-consistent equation: numerically, by updating the solution for the self-consistent equation iteratively, and analytically, by expanding moments of the distribution. With these more powerful tools, we can extend the previous model by Lin et al. to include stochasticity in the segregation asymmetry. We show that the stochastic model is equivalent to the deterministic one with a modified effective asymmetry parameter a_eff. We discuss the biological implications of our models and compare with other theoretical models.
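The numerical route described in the abstract, updating the solution of a self-consistent equation iteratively until it stops changing, is an instance of fixed-point iteration. A minimal sketch of that generic pattern follows, applied to the toy scalar equation x = cos(x) rather than the paper's actual growth-rate equation; the damping factor is an illustrative stabilization choice.

```python
import math

def solve_self_consistent(F, x0, tol=1e-10, max_iter=1000, damping=0.5):
    """Solve x = F(x) by damped fixed-point iteration.

    Update rule: x_{k+1} = (1 - damping) * x_k + damping * F(x_k).
    Damping < 1 trades speed for robustness when plain iteration oscillates.
    """
    x = x0
    for _ in range(max_iter):
        x_new = (1 - damping) * x + damping * F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

root = solve_self_consistent(math.cos, 1.0)  # the Dottie number
```

In the paper's setting the unknown would be the pair (growth rate, protein distribution) rather than a scalar, but the update-until-self-consistent loop has the same shape.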
A multi-field tachyon quintom model of dark energy and the fate of the universe ; We investigate a multi-field model of dark energy in this paper. We develop a model of dark energy with two multiple scalar fields: one is a multi-field tachyon and the other a multi-field phantom tachyon. We analyze the system in phase space by considering inverse-square potentials suitable for these models. Through the development of an autonomous dynamical system, the critical points and their stability analysis are performed. It is observed that these stable critical points are satisfied by power-law solutions. Moving on with the analysis, we can predict the fate of the universe. A special feature of this model is that it causes the equation-of-state parameter w to cross from values greater than -1 to values less than -1 during the evolutionary phase of the universe. Thus, the phantom divide turns out to be decisive in the evolution of the cosmos in these models.
Persistence in black hole lattice cosmological models ; Dynamical solutions for an evolving multiple network of black holes near a cosmological bounce dominated by a scalar field are investigated. In particular, we consider the class of black hole lattice models in a hyperspherical cosmology, and we focus on the special case of eight regularly spaced black holes with equal masses when the model parameter kappa 1. We first derive exact time-evolving solutions of instantaneously-static models, by utilizing perturbative solutions of the constraint equations that can then be used to develop exact 4D dynamical solutions of the Einstein field equations. We use the notion of a geometric horizon, which can be characterized by curvature invariants, to determine the black hole horizon. We explicitly compute the invariants for the exact dynamical models obtained. As an application, we discuss whether black holes can persist in such a universe that collapses and then subsequently bounces into a new expansionary phase. We find evidence that in the physical models under investigation, and particularly for kappa 1, the individual black holes do not merge before or at the bounce, so that consequently black holes can indeed persist through the bounce.
An analytical anisotropic compact stellar model of embedding class I ; A class of solutions of the Einstein field equations satisfying the Karmarkar embedding condition is presented which could describe static, spherical fluid configurations and could serve as models for compact stars. The fluid under consideration has unequal principal stresses, i.e., the fluid is locally anisotropic. A certain physically motivated geometry of the metric potential has been chosen, and the codependency of the metric potentials outlines the formation of the model. The exterior spacetime is assumed to be described by the exterior Schwarzschild solution. The smooth matching of the interior to the exterior Schwarzschild spacetime metric across the boundary, together with the condition that the radial pressure vanishes at the boundary, leads us to determine the model parameters. The physical requirements and stability criteria demanded of a physically realistic star are satisfied. The developed model has been investigated graphically by exploring data from some known compact objects. The mass-radius (M-R) relationship, which shows the maximum mass admissible for observed pulsars at a given surface density, has also been investigated. Moreover, the physical profile of the moment of inertia I obtained from the solutions is confirmed by the Bejger-Haensel concept.
Active-Passive Brownian Particle in Two Dimensions ; This paper presents a model for active particles in two dimensions with time-dependent self-propulsion speed undergoing both translational and rotational diffusion. Usually, for modeling the motion of active particles, the self-propulsion speed is assumed to be constant, as in the famous model of active Brownian motion. This assumption is far from what may happen in reality. Here, we generalize active Brownian motion by considering a stochastic self-propulsion speed v(t). In particular, we assume that v(t) is a two-state process with v = 0 (passive state) and v = v_s (active state). The transition between the two states is modeled using the random telegraph process. It is expected that the presented two-state model, which we call the active-passive Brownian particle, has the characteristics of both a pure active and a pure passive Brownian particle. The analytical results for the first two moments of displacement and the effective diffusion coefficient confirm this expectation. We also show that a run-and-tumble particle such as a motile bacterium can be mapped to our model so that their diffusivities at large scales are equal.
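The dynamics described above are straightforward to integrate numerically. The sketch below is a simple Euler-Maruyama simulation of such a particle: the speed is a telegraph process switching between 0 and v_s, the orientation undergoes rotational diffusion, and the position receives both the active drift and translational noise. All rates and coefficients are illustrative values, not taken from the paper.

```python
import numpy as np

def simulate_apbp(n_steps=20000, dt=1e-3, v_s=5.0, D_t=0.1, D_r=1.0,
                  k_on=2.0, k_off=2.0, seed=1):
    """Euler-Maruyama trajectory of a 2D active-passive Brownian particle.

    v(t) is a telegraph process: passive (v=0) -> active (v=v_s) with rate
    k_on, active -> passive with rate k_off. The orientation angle diffuses
    with D_r; the position diffuses with D_t plus the active drift.
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps + 1, 2))
    theta, active = 0.0, False
    for i in range(n_steps):
        if rng.random() < (k_off if active else k_on) * dt:  # telegraph switch
            active = not active
        v = v_s if active else 0.0
        theta += np.sqrt(2 * D_r * dt) * rng.normal()
        drift = v * dt * np.array([np.cos(theta), np.sin(theta)])
        pos[i + 1] = pos[i] + drift + np.sqrt(2 * D_t * dt) * rng.normal(size=2)
    return pos

pos = simulate_apbp()
msd = np.mean(np.sum((pos[1:] - pos[0]) ** 2, axis=1))
```

Averaging such trajectories over many realizations would give the displacement moments against which the paper's analytical results could be checked.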
CoCoLM: COmplex COmmonsense Enhanced Language Model with Discourse Relations ; Large-scale pretrained language models have demonstrated strong knowledge representation ability. However, recent studies suggest that even though these giant models contain rich simple commonsense knowledge (e.g., a bird can fly and a fish can swim), they often struggle with complex commonsense knowledge that involves multiple eventualities (verb-centric phrases, e.g., identifying the relationship between "Jim yells at Bob" and "Bob is upset"). To address this problem, in this paper, we propose to help pretrained language models better incorporate complex commonsense knowledge. Different from existing fine-tuning approaches, we do not focus on a specific task and instead propose a general language model named CoCoLM. Through careful training over a large-scale eventuality knowledge graph, ASER, we successfully teach pretrained language models (i.e., BERT and RoBERTa) rich complex commonsense knowledge among eventualities. Experiments on multiple downstream commonsense tasks that require a correct understanding of eventualities demonstrate the effectiveness of CoCoLM.
Robust Data-Driven Error Compensation for a Battery Model ; This work has been submitted to IFAC for possible publication. Models of traction batteries are an essential tool throughout the development of automotive drivetrains. Surprisingly, today's massively collected battery data is not yet used for more accurate and reliable simulations. Primarily, the non-uniform excitation during regular battery operation prevents a consequent utilization of such measurements. Hence, there is a need for methods that enable robust models based on large datasets. For that reason, a data-driven error model is introduced, enhancing an existing physically motivated model. A neural network compensates the existing dynamic error and is further limited based on a description of the underlying data. This paper tries to verify the effectiveness and robustness of the general setup and additionally evaluates a one-class support vector machine as the proposed model for the training data distribution. Based on five datasets, it is shown that gradually limiting the data-driven error compensation outside the boundary leads to a similar improvement and an increased overall robustness.
No-boundary Wave Function, Wheeler-DeWitt Equation and Path Integral Analysis of the Bouncing 'Quantum' Cosmology ; Bouncing models are alternatives to inflationary cosmology that replace the initial Big-Bang singularity with a 'bouncing' phase. A deeper understanding of the initial conditions of the universe in these scenarios requires knowledge of the quantum aspects of bouncing models. In this work, we propose two classes of bouncing models that can be studied with great analytical ease and hence provide a testbed for investigating more profound problems in the quantum cosmology of bouncing universes. Two key ingredients of our models enable straightforward analytical calculations: (i) a convenient parametrization of the minisuperspace of FLRW spacetimes and (ii) two distinct choices of the effective perfect fluids that source the background geometry of the bouncing universe. We study the quantum cosmology of these models using both the Wheeler-DeWitt equation and the path integral approach. In particular, we find a bouncing-model analogue of the no-boundary wave function and present a Lorentzian path integral representation for it. We also discuss the introduction of real scalar perturbations.
Observational Constraints on the Cosmology with Holographic Dark Fluid ; We consider the holographic Friedmann-Robertson-Walker (hFRW) universe on the 4-dimensional membrane embedded in the 5-dimensional bulk spacetime and fit the parameters with the observational data. In order to fully account for the phenomenology of this scenario, we consider the models with a brane cosmological constant and a negative bulk cosmological constant. The contribution from the bulk is represented as the holographic dark fluid on the membrane. We derive the universal modified Friedmann equation by including all of these effects in both braneworld and holographic cutoff approaches. For three specific models, namely, the pure hFRW model, the one with the brane cosmological constant, and the one with the negative bulk cosmological constant, we compare the model predictions with the observations. The parameters in the considered hFRW models are constrained with observational data. In particular, it is shown that the model with the brane cosmological constant can fit data as well as the standard ΛCDM universe. We also find that the σ_8 tension observed in different large-scale structure experiments can be effectively relaxed in this holographic scenario.
A lattice Boltzmann model for the coupled cross-diffusion-fluid system ; In this paper, we propose a lattice Boltzmann (LB) model for the generalized coupled cross-diffusion-fluid system. Through the direct Taylor expansion method, the proposed LB model can correctly recover the macroscopic equations. The cross-diffusion terms in the coupled system are modeled by introducing additional collision operators, which can be used to avoid special treatments for the gradient terms. In addition, the auxiliary source terms are constructed properly such that the numerical diffusion caused by the convection can be eliminated. We adopt the developed LB model to study two important systems, i.e., the coupled chemotaxis-fluid system and the double-diffusive convection system with Soret and Dufour effects. We first test the present LB model by considering a steady-state case of the coupled chemotaxis-fluid system, then we analyze the influences of some physical parameters on the formation of sinking plumes. Finally, the double-diffusive natural convection system with Soret and Dufour effects is also studied, and the numerical results agree well with some previous works.
How to Train Your Energy-Based Models ; Energy-Based Models (EBMs), also known as non-normalized probabilistic models, specify probability density or mass functions up to an unknown normalizing constant. Unlike most other probabilistic models, EBMs do not place a restriction on the tractability of the normalizing constant, and are thus more flexible to parameterize and can model a more expressive family of probability distributions. However, the unknown normalizing constant of EBMs makes training particularly difficult. Our goal is to provide a friendly introduction to modern approaches for EBM training. We start by explaining maximum likelihood training with Markov chain Monte Carlo (MCMC), and proceed to elaborate on MCMC-free approaches, including Score Matching (SM) and Noise Contrastive Estimation (NCE). We highlight theoretical connections among these three approaches, and end with a brief survey on alternative training methods, which are still under active research. Our tutorial is targeted at an audience with a basic understanding of generative models who want to apply EBMs or start a research project in this direction.
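Maximum likelihood training of an EBM needs samples from the model distribution p(x) ∝ exp(-E(x)) to estimate the negative phase of the log-likelihood gradient, and unadjusted Langevin dynamics is a standard MCMC workhorse for this. The sketch below samples from a toy quadratic energy (so the target is a standard normal) purely to illustrate the sampler; step size and chain length are illustrative choices.

```python
import numpy as np

def langevin_samples(grad_E, n_samples=5000, n_steps=200, step=0.01, seed=0):
    """Unadjusted Langevin dynamics for p(x) proportional to exp(-E(x)).

    Update: x <- x - step * grad_E(x) + sqrt(2 * step) * noise.
    Many chains are run in parallel; after enough steps their states
    approximate model samples for the MCMC gradient estimate.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_samples)  # parallel chains, arbitrary initialization
    for _ in range(n_steps):
        x = x - step * grad_E(x) + np.sqrt(2 * step) * rng.normal(size=n_samples)
    return x

# Toy energy E(x) = x^2 / 2, so grad_E(x) = x and the target is N(0, 1).
samples = langevin_samples(lambda x: x)
```

In a real EBM, grad_E would be the gradient of a learned energy network with respect to its input, and the small step size keeps the discretization bias of the unadjusted scheme acceptable.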
Reachability Analysis for Attributes in ABAC with Group Hierarchy ; Attribute-based access control (ABAC) models are widely used to provide fine-grained and adaptable authorization based on the attributes of users, resources, and other relevant entities. The hierarchical group and attribute-based access control (HGABAC) model was recently proposed, which introduces the novel notion of attribute inheritance through group membership. GURA_G was subsequently proposed to provide an administrative model for user attributes in HGABAC, building upon the ARBAC97 and GURA administrative models. The GURA model uses administrative roles to manage user attributes. The reachability problem for the GURA model is to determine what attributes a particular user can acquire, given a predefined set of administrative rules. This problem has been previously analyzed in the literature. In this paper, we study the user attribute reachability problem based on directly assigned attributes of the user and attributes inherited via group memberships. We first define a restricted form of GURA_G, called the rGURA_G scheme, as a state transition system with multiple instances having different preconditions, and provide reachability analysis for each of these schemes. In general, we show PSPACE-complete complexity for all rGURA_G schemes. We further present polynomial time algorithms to solve special instances of rGURA_G schemes under restricted conditions.
Automatic Polyp Segmentation using Fully Convolutional Neural Network ; Colorectal cancer is one of the most fatal cancers worldwide. Colonoscopy is the standard procedure for examination, localization, and removal of colorectal polyps. However, it has been shown that the miss rate of colorectal polyps during colonoscopy is between 6% and 27%. The use of automated, accurate, and real-time polyp segmentation during colonoscopy examinations can help clinicians eliminate missed lesions and prevent further progression of colorectal cancer. The "Medico automatic polyp segmentation challenge" provides an opportunity to study polyp segmentation and build a fast segmentation model. The challenge organizers provide the Kvasir-SEG dataset to train the model, which is then tested on a separate unseen dataset to validate the efficiency and speed of the segmentation model. The experiments demonstrate that the model trained on the Kvasir-SEG dataset and tested on an unseen dataset achieves a dice coefficient of 0.7801, mIoU of 0.6847, recall of 0.8077, and precision of 0.8126, demonstrating the generalization ability of our model. The model has achieved 80.60 FPS on the unseen dataset with an image resolution of 512 × 512.
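For reference, the two overlap metrics reported above (dice coefficient and IoU) are simple functions of binary masks. A minimal implementation, with a small epsilon added for empty-mask stability (an implementation convenience, not specified by the challenge):

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU (Jaccard index) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_and_iou(pred, target)  # dice = 2/3, iou = 1/2
```

The two are monotonically related (dice = 2*iou / (1 + iou)), which is why a dice of 0.7801 and an mIoU of 0.6847 are mutually consistent orders of magnitude.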
Three-Dimensional MR Image Synthesis with Progressive Generative Adversarial Networks ; Mainstream deep models for three-dimensional MRI synthesis are either cross-sectional or volumetric depending on the input. Cross-sectional models can decrease the model complexity, but they may lead to discontinuity artifacts. On the other hand, volumetric models can alleviate the discontinuity artifacts, but they might suffer from loss of spatial resolution due to increased model complexity coupled with scarce training data. To mitigate the limitations of both approaches, we propose a novel model that progressively recovers the target volume via simpler synthesis tasks across individual orientations.
Testing Models of Strategic Uncertainty: Equilibrium Selection in Repeated Games ; In repeated-game applications where both the collusive and non-collusive outcomes can be supported as equilibria, researchers must resolve underlying selection questions if theory will be used to understand counterfactual policies. One guide to selection, based on clear theoretical underpinnings, has shown promise in predicting when collusive outcomes will emerge in controlled repeated-game experiments. In this paper we both expand upon and experimentally test this model of selection, and its underlying mechanism: strategic uncertainty. Adding an additional source of strategic uncertainty (the number of players) to the more-standard payoff sources, we stress test the model. Our results affirm the model as a tool for predicting when tacit collusion is likely or unlikely to be successful. Extending the analysis, we corroborate the mechanism of the model: when we remove strategic uncertainty through an explicit coordination device, the model no longer predicts the selected equilibrium.
Membership Inference Attack on Graph Neural Networks ; Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks. We focus on how trained GNN models could leak information about the member nodes that they were trained on. We introduce two realistic settings for performing a membership inference (MI) attack on GNNs. While choosing the simplest possible attack model that utilizes the posteriors of the trained model (black-box access), we thoroughly analyze the properties of GNNs and the datasets which dictate the differences in their robustness towards MI attacks. While in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor. We support our findings by extensive experiments on four representative GNN models. To prevent MI attacks on GNNs, we propose two effective defenses that significantly decrease the attacker's inference accuracy, by up to 60%, without degradation of the target model's performance. Our code is available at https://github.com/iyempissy/rebMIGraph.
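To make the black-box threat model concrete, one common instantiation of a posterior-based MI attack (an illustrative baseline, not necessarily the exact attack of the paper) simply flags a node as a training member when the model's top posterior probability exceeds a threshold, exploiting the overconfidence of overfit predictions:

```python
import numpy as np

def mi_attack_accuracy(member_posteriors, nonmember_posteriors, threshold=0.9):
    """Balanced accuracy of a confidence-threshold membership inference attack.

    Each row of the inputs is a posterior (class-probability) vector returned
    by the target model for one node. A node is flagged as a member when its
    maximum posterior exceeds the threshold (a hypothetical choice here).
    """
    mem_conf = member_posteriors.max(axis=1)
    non_conf = nonmember_posteriors.max(axis=1)
    tp = (mem_conf > threshold).mean()   # members correctly flagged
    tn = (non_conf <= threshold).mean()  # non-members correctly rejected
    return 0.5 * (tp + tn)               # balanced attack accuracy

members = np.array([[0.97, 0.02, 0.01], [0.99, 0.005, 0.005], [0.6, 0.3, 0.1]])
nonmembers = np.array([[0.5, 0.3, 0.2], [0.95, 0.03, 0.02], [0.4, 0.4, 0.2]])
acc = mi_attack_accuracy(members, nonmembers)  # 2/3 on this toy data
```

An attack accuracy near 0.5 means the posteriors leak little membership signal; defenses of the kind the paper proposes aim to push the attacker back toward that chance level.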
alpha-decay half-lives of superheavy nuclei with Z = 122-125 ; For alpha-decay half-life calculations in this work, the Coulomb and proximity potential model with a new semi-empirical formula for the diffuseness parameter, developed in previous work [Phys. Rev. C 100, 024601 (2019)], is used. The present model is compared with the generalized liquid-drop model (GLDM), the universal decay law (UDL), and experimental half-lives in the region Z = 104-118. Next, the half-lives of 51 superheavy nuclei (SHN) with Z = 122-125 predicted by the present model are compared with those of the GLDM and UDL. The present model is revealed to be more accurate in reproducing experimental half-lives compared to the GLDM and UDL. Moreover, it is found that the predictions of the present model and the UDL are highly consistent, while the GLDM largely deviates from the other two. A study of the competition between alpha decay and spontaneous fission (SF) shows that alpha decay is the dominant mode. Among the studied SHN with Z = 122-125, the isotopes with A = 295-307 for Z = 122 and A = 314-320 for Z = 125 are identified as potential candidates whose half-lives are relatively long enough to be experimentally detected in the future through their alpha-decay chains. The identified candidates are in good agreement with other recent work.
Two-phase approaches to optimal model-based design of experiments: how many experiments and which ones? ; Model-based experimental design is attracting increasing attention in chemical process engineering. Typically, an iterative procedure is pursued: an approximate model is devised, prescribed experiments are then performed, and the resulting data is exploited to refine the model. To help reduce the cost of trial-and-error approaches, strategies for model-based design of experiments suggest experimental points where the expected gain in information for the model is the largest. This requires the solution of a large nonlinear, generally nonconvex, optimization problem, whose solution may greatly depend on the starting point. We present two discretization strategies that can assist the experimenter in setting the number of relevant experiments and performing an optimal selection, and we compare them against two pattern-based strategies that are independent of the problem. The validity of the approaches is demonstrated on an academic example and two test problems from chemical engineering, including a vapor-liquid equilibrium and reaction kinetics.
Sound Speed in Extended Chaplygin Fluid ; We consider an extended Chaplygin gas equation of state, which is derived from the D-brane action, and construct a cosmological model based on this equation of state. In this regard, we compute the scale factor of the model under a certain approximation. The conservation equation in this case is a nonlinear differential equation which must be solved under special conditions. We also analyze the stability of the model by using the sound speed as well as the adiabatic index, and discuss certain special cases of the model. We find a special equation of state in this model which yields dynamical and thermodynamical stability. Furthermore, we study the cosmological consequences of this model under certain conditions.
Named Entity Recognition in the Style of Object Detection ; In this work, we propose a two-stage method for named entity recognition (NER), especially for nested NER. We borrow the idea from two-stage object detection in computer vision, including the way the loss function is constructed. First, a region proposal network generates region candidates; then a second-stage model discriminates and classifies the entities and makes the final prediction. We also design a special loss function for the second-stage training that predicts the entityness and entity type at the same time. The model is built on top of pretrained BERT encoders, and we tried both BERT-base and BERT-large models. For experiments, we first applied it to flat NER tasks such as CoNLL2003 and OntoNotes 5.0 and obtained results comparable with traditional NER models using the sequence labeling methodology. We then tested the model on the nested named entity recognition tasks ACE2005 and Genia, obtaining F1 scores of 85.6 and 76.8, respectively. In terms of second-stage training, we found that adding extra randomly selected regions plays an important role in improving precision. We also performed error profiling to better evaluate the performance of the model in different circumstances, for potential improvements in the future.
Role of Water Model on Ion Dissociation at Ambient Conditions ; We study ion pair dissociation in water at ambient conditions using a combination of classical and ab initio approaches. The goal of this study is to disentangle the sources of discrepancy observed in computed potentials of mean force. In particular we aim to understand why some models favor the stability of solventseparated ion pairs versus contact ion pairs. We found that some observed differences can be explained by nonconverged simulation parameters. However, we also unveil that for some models, small changes in the solution density can have significant effects on modifying the equilibrium balance between the two configurations. We conclude that the thermodynamic stability of contact and solventseparated ion pairs is very sensitive to the dielectric properties of the underlying simulation model. In general, classical models are very robust in providing a similar estimation of the contact ion pair stability, while this is much more variable in density functional theorybased models. The barrier to transition from solventseparated to contact ion pair is fundamentally dependent on the balance between electrostatic potential energy and entropy. This reflects the importance of water intra and intermolecular polarizability in obtaining an accurate description of the screened ionion interactions.
Learning Reasoning Paths over Semantic Graphs for Videogrounded Dialogues ; Compared to traditional visual question answering, videogrounded dialogues require additional reasoning over dialogue context to answer questions in a multiturn setting. Previous approaches to videogrounded dialogues mostly use dialogue context as a simple text input without modelling the inherent information flows at the turn level. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context PDC. The PDC model discovers information flows among dialogue turns through a semantic graph constructed based on lexical components in each question and answer. The model then learns to predict reasoning paths over this semantic graph. Our path prediction model predicts a path from the current turn through past dialogue turns that contain additional visual cues to answer the current question. Our reasoning model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer. Our experimental results demonstrate the effectiveness of our method and provide additional insights on how models use semantic dependencies in a dialogue context to retrieve visual cues.
Structural models for policymaking Coping with parametric uncertainty ; The exante evaluation of policies using structural econometric models is based on estimated parameters as a standin for the true parameters. This practice ignores uncertainty in the counterfactual policy predictions of the model. We develop a generic approach that deals with parametric uncertainty using uncertainty sets and frames modelinformed policymaking as a decision problem under uncertainty. The seminal human capital investment model by Keane and Wolpin 1997 provides a wellknown, influential, and empiricallygrounded test case. We document considerable uncertainty in the model's policy predictions and highlight the resulting policy recommendations obtained from using different formal rules of decisionmaking under uncertainty.
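The decision-theoretic idea in the abstract above can be illustrated with a minimal sketch: the maximin rule picks the policy whose worst-case payoff over an uncertainty set of parameter values is largest. The payoff function and all numbers below are hypothetical and are not taken from the Keane and Wolpin model.

```python
# Illustrative sketch (not from the paper): choosing a policy under
# parametric uncertainty with the maximin (worst-case) decision rule.
# The uncertainty set is a list of plausible parameter values.

def maximin_policy(policies, uncertainty_set, payoff):
    """Pick the policy whose worst-case payoff over the set is largest."""
    def worst_case(p):
        return min(payoff(p, theta) for theta in uncertainty_set)
    return max(policies, key=worst_case)

# Hypothetical payoff of a subsidy level s when the true parameter is theta.
def payoff(subsidy, theta):
    return theta * subsidy - 0.5 * subsidy ** 2

uncertainty_set = [0.6, 0.8, 1.0]   # plausible parameter values
policies = [0.0, 0.5, 1.0, 1.5]     # candidate subsidy levels

best = maximin_policy(policies, uncertainty_set, payoff)
```

Other formal rules (e.g. minimax regret) slot in by replacing the `worst_case` criterion, which is the sense in which the decision rule, not the point estimate, drives the recommendation.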
Assessment of Soot Formation Models in Lifted EthyleneAir Turbulent Diffusion Flame ; In the present study, soot formation in a turbulent lifted diffusion flame of ethylene and air is numerically investigated using three different soot modeling approaches and is comprehensively reported. For turbulencechemistry interaction, the flamelet generated manifold FGM model is used. Detailed kinetics are used, represented through the POLIMI mechanism Ranzi et al. 2012. Soot formation is modeled using two different approaches, a semiempirical twoequation approach and a quadrature method of moments approach, where both approaches consider various subprocesses such as nucleation, coagulation, surface growth and oxidation. Radiation heat transfer is taken into account considering four fictitious gases in conjunction with the weightedsumofgraygases WSGGM approach for modeling the absorption coefficient. The experimental and earlier published numerical data from Kohler et al. 2012 and Blacha et al. 2011 are used for the assessment of the different soot modeling approaches. Discrepancies between numerical and experimental data are observed due to underprediction of the OH radical concentration and poor fuelair mixing ratios in the vicinity of the fuel jet region, leading to early soot formation; the trends are unaffected after invoking radiation.
YangBaxter deformations of WZW model on the Heisenberg Lie group ; The YangBaxter YB deformations of WessZuminoWitten WZW model on the Heisenberg Lie group H4 are examined. We proceed to obtain the nonequivalent solutions of modified classical YangBaxter equation mCYBE for the h4 Lie algebra by using its corresponding automorphism transformation. Then we show that YB deformations of H4 WZW model are split into ten nonequivalent backgrounds including metric and Bfield such that some of the metrics of these backgrounds can be transformed to the metric of H4 WZW model while the antisymmetric Bfields are changed. The rest of the deformed metrics have a different isometric group structure than the H4 WZW model metric. As an interesting result, it is shown that all new integrable backgrounds of the YB deformed H4 WZW model are conformally invariant up to twoloop order. In this way, we obtain the general form of the dilaton fields satisfying the vanishing betafunction equations of the corresponding sigmamodels.
Modeling of accelerating Universe with bulk viscous fluid in Bianchi V spacetime ; In this paper, we have investigated a bulk viscous anisotropic Universe and constrained its model parameters with recent Hz and Pantheon compilation data. Using the cosmic chronometric technique, we estimate the present value of the Hubble constant as H0 = 69.39 pm 1.54 km s^-1 Mpc^-1, 70.016 pm 1.65 km s^-1 Mpc^-1 and 69.36 pm 1.42 km s^-1 Mpc^-1 by bounding our derived model with recent Hz data, Pantheon data, and joint Hz and Pantheon data, respectively. The present age of the Universe is specified as t0 = 0.9796 H0^-1 sim 13.79 Gyr. The model favours a transitioning Universe with the transition redshift zt = 0.73. We have reconstructed the jerk parameter using the observational data sets. From the analysis of the jerk parameter, it is observed that our derived model shows a marginal departure from the concordance LambdaCDM model.
Importance Sampling with the Integrated Nested Laplace Approximation ; The Integrated Nested Laplace Approximation INLA is a deterministic approach to Bayesian inference on latent Gaussian models LGMs and focuses on fast and accurate approximation of posterior marginals for the parameters in the models. Recently, methods have been developed to extend this class of models to those that can be expressed as conditional LGMs by fixing some of the parameters in the models to descriptive values. These methods differ in the manner in which descriptive values are chosen. This paper proposes to combine importance sampling with INLA ISINLA, and extends this approach with the more robust adaptive multiple importance sampling algorithm combined with INLA AMISINLA. This paper gives a comparison between these approaches and existing methods on a series of applications with simulated and observed datasets and evaluates their performance based on accuracy, efficiency, and robustness. The approaches are validated by exact posteriors in a simple bivariate linear model; then, they are applied to a Bayesian lasso model, a Bayesian imputation of missing covariate values, and lastly, in parametric Bayesian quantile regression. The applications show that the AMISINLA approach, in general, outperforms the other methods, but the ISINLA algorithm could be considered for faster inference when good proposals are available.
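The core ingredient named in the abstract above, importance sampling, can be sketched independently of INLA. The toy below uses a self-normalized estimator with a standard normal target known only up to a constant and a deliberately wide normal proposal; both densities are stand-ins, not the conditional LGM posteriors the paper works with.

```python
import math
import random

# Self-normalized importance sampling: estimate an expectation under an
# unnormalized target by reweighting draws from a tractable proposal.

def target_unnorm(x):                 # unnormalized target: exp(-x^2/2)
    return math.exp(-0.5 * x * x)

def proposal_sample(rng):             # proposal: N(0, 2^2), wider than the target
    return rng.gauss(0.0, 2.0)

def proposal_pdf(x):
    return math.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * math.sqrt(2 * math.pi))

def snis_mean(f, n, seed=0):
    """Estimate E_target[f(X)] with self-normalized importance weights."""
    rng = random.Random(seed)
    xs = [proposal_sample(rng) for _ in range(n)]
    ws = [target_unnorm(x) / proposal_pdf(x) for x in xs]
    total = sum(ws)
    return sum(w * f(x) for w, x in zip(ws, xs)) / total

est = snis_mean(lambda x: x * x, 20000)   # E[X^2] = 1 under the N(0,1) target
```

Self-normalization makes the unknown normalizing constant cancel, which is exactly what allows such weights to be attached to conditional approximations whose marginal likelihood is only known up to a constant.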
The rigidbeam model for simulating plasmas generated by intense electron beams ; We introduce a simplified model of the electronbeamplasma system to model the electrical breakdown caused by the inductive electric field created by a rapidly rising electron beam current. The rigidbeam model reduces the problem geometry to cylindrical coordinates and simplifies Maxwell's equations, with the system driven by a prescribed electron beam current density. The model is very convenient for comparing various reductions of the plasma dynamics and air chemistry equations while maintaining a good approximation to the overall magnitude of the beamcreated electric field. The usefulness of this model is demonstrated by comparing results for two different fluid reductions of the plasma dynamics: one where the collision rates are computed from the local reduced electric field Ep and another where the collision rates are determined from the mean energy per particle. We find that the two methods give similar results at higher pressures where the energy relaxation rate is large but differ significantly at lower pressures where the characteristic inelastic energy loss time scale is comparable to or greater than the rise time of the electron beam current.
Efficiency gains of a multiscale integration method applied to a scaleseparated model for rapidly rotating dynamos ; Numerical geodynamo simulations with parameters close to an Earthlike regime would be of great interest for understanding the dynamics of the Earth's liquid outer core and the associated geomagnetic field. Such simulations are far too computationally demanding owing to the large range in spatiotemporal scales. This paper explores the application of a multiscale timestepping method to an asymptotic model for the generation of magnetic field in the fluid outer core of the Earth. The method is based on the heterogeneous multiscale modelling HMM strategy, which incorporates scale separation and utilizes several integrating models for the fast and slow fields. Quantitative comparisons between the multiscale simulations and direct solution of the asymptotic model in the limit of rapid rotation and low Ekman number are performed. The multiscale method accurately captures the varying temporal and spatial dynamics of the mean magnetic field at lower computational costs compared to the direct solution of the asymptotic model.
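A minimal caricature of the heterogeneous multiscale (HMM) strategy described above, on a toy fast-slow ODE pair rather than the dynamo equations: at each macro step, the fast variable is relaxed with only a short burst of micro steps, and the resulting estimate of the effective force advances the slow variable. All equations and step sizes below are illustrative.

```python
# Toy HMM integrator for the fast-slow pair
#   slow:  x' = -y
#   fast:  eps * y' = -(y - x),   eps << 1
# On the slow manifold y ~ x, so the effective dynamics is x' = -x.

def hmm_integrate(x0, t_end, dt_macro, eps=1e-3, n_micro=50):
    dt_micro = eps / 5.0          # micro step resolves the fast time scale
    x, y, t = x0, x0, 0.0
    while t < t_end - 1e-12:
        # micro-solver burst: relax the fast variable toward x
        for _ in range(n_micro):
            y += dt_micro * (-(y - x) / eps)
        # macro step for the slow variable using the relaxed y
        x += dt_macro * (-y)
        t += dt_macro
    return x

# With x0 = 1 the slow solution is approximately exp(-t).
x1 = hmm_integrate(1.0, 1.0, 0.01)
```

The efficiency gain is that the fast scale is only sampled in short bursts (here 50 micro steps per macro step) instead of being resolved over the whole integration interval, mirroring the cost savings reported for the asymptotic dynamo model.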
Modeling Web Browsing Behavior across Tabs and Websites with Tracking and Prediction on the Client Side ; Clickstreams on individual websites have been studied for decades to gain insights into user interests and to improve website experiences. This paper proposes and examines a novel sequence modeling approach for web clickstreams that also considers multitab branching and backtracking actions across websites to capture the full action sequence of a user while browsing. All of this is done using machine learning on the client side to obtain a more comprehensive view and at the same time preserve privacy. We evaluate our formalism with a model trained on data collected in a user study with three different browsing tasks based on different human information seeking strategies from psychological literature. Our results show that the model can successfully distinguish between browsing behaviors and correctly predict future actions. A subsequent qualitative analysis identified five common web browsing patterns from our collected behavior data, which help to interpret the model. More generally, this illustrates the power of overparameterization in ML and offers a new way of modeling, reasoning about, and predicting observable sequential human interaction behaviors.
Adversarial Machine Learning Security Problems for 6G mmWave Beam Prediction UseCase ; 6G is the next generation of communication systems. In recent years, machine learning algorithms have been applied widely in various fields such as health, transportation, and autonomous cars. Predictive algorithms will be used in 6G problems. With the rapid development of deep learning techniques, it is critical to take security concerns into account when applying these algorithms. While machine learning offers significant advantages for 6G, the security of AI models is often ignored. Since these models have many applications in the real world, security is a vital part of the algorithms. This paper proposes a mitigation method for adversarial attacks against proposed 6G machine learning models for millimeterwave mmWave beam prediction using adversarial learning. The main idea behind adversarial attacks against machine learning models is to produce faulty results by manipulating trained deep learning models for 6G applications for the mmWave beam prediction use case. We also present the adversarial learning mitigation method's performance for 6G security in the millimeterwave beam prediction application with the fast gradient sign method attack. The mean square errors of the defended model and the undefended model are very close.
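The fast gradient sign method mentioned above can be sketched on a toy linear regressor instead of a deep beam-prediction network: the input is nudged by eps in the direction of the sign of the loss gradient with respect to the input. The weights, inputs, label, and eps below are all hypothetical.

```python
# Fast gradient sign method (FGSM) on a toy linear model with squared loss.

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, y, eps):
    """Perturb x to *increase* the squared error (y_hat - y)^2.

    The gradient of the loss w.r.t. x is 2*(y_hat - y)*w, and FGSM moves
    each input feature by eps in the sign of that gradient.
    """
    err = predict(w, x) - y
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(2 * err * wi) for xi, wi in zip(x, w)]

w = [1.0, -2.0, 0.5]     # hypothetical trained weights
x = [0.2, 0.1, 0.4]      # clean input
y = 0.0                  # ground-truth label (illustrative)

x_adv = fgsm_perturb(w, x, y, eps=0.1)
clean_err = abs(predict(w, x) - y)
adv_err = abs(predict(w, x_adv) - y)
```

Adversarial training, the mitigation the paper evaluates, amounts to generating such perturbed inputs during training and fitting the model on them alongside the clean data.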
Skeletal Model Reduction with Forced Optimally Time Dependent Modes ; Sensitivity analysis with forced optimally time dependent fOTD modes is introduced and its application to skeletal model reduction is demonstrated. fOTD expands the sensitivity coefficient matrix into a lowdimensional, time dependent, orthonormal basis which captures the directions of the phase space associated with the most dominant sensitivities. These directions highlight the instantaneously active species and reaction paths. Evolution equations for the orthonormal basis and the projections of the sensitivity matrix onto the basis are derived, and the application of fOTD to skeletal reduction is described. In this framework, the sensitivity matrix is modeled, stored in a factorized manner, and never reconstructed at any time during the calculations. For demonstration purposes, sensitivity analysis of constant pressure ethyleneair combustion in a zerodimensional reactor is conducted and new skeletal models are generated. The flame speed, ignition delay, and extinction curve of the resulting models are compared against some of the existing skeletal models. The results demonstrate the ability of the fOTD approach to eliminate unimportant reactions and species in a systematic, efficient and accurate manner.
Towards a question answering assistant for software development using a transformerbased language model ; Question answering platforms, such as Stack Overflow, have impacted substantially how developers search for solutions for their programming problems. The crowd knowledge content available from such platforms has also been used to leverage software development tools. The recent advances in Natural Language Processing, specifically more powerful language models, have demonstrated the ability to enhance text understanding and generation. In this context, we aim to investigate the factors that can influence the application of such models for understanding source code related data and for producing more interactive and intelligent assistants for software development. In this preliminary study, we particularly investigate whether a howto question filter and the level of context in the question may impact the results of a question answering transformerbased model. We suggest that finetuning models with a corpus based on howto questions can positively impact the model, and that more contextualized questions also induce more objective answers.
Learning Descriptor of Constrained Task from Demonstration ; Constrained objects, such as doors and drawers, are often complex and share a similar structure in the human environment. A robot needs to interact accurately with constrained objects to safely and successfully complete a task. Learning from Demonstration offers an appropriate path to learn the structure of unknown constrained objects for unknown tasks from demonstrations. Prior work extracts the kinematic model from motion; however, a gap remains when the robot faces a new object with a similar model but a different context, e.g. size, appearance, etc. In this paper, we propose a framework that integrates all the information needed to learn a constrained motion from a depth camera into a descriptor of the constrained task. The descriptor consists of object information, a grasping point model, a constraint model, and a reference frame model. By associating constraint learning and the reference frame with the constrained object, we demonstrate that the robot can learn the book opening model and the parameters of the constraints from demonstration and generalize to novel books.
Investigating Monolingual and Multilingual BERTModels for Vietnamese Aspect Category Detection ; Aspect category detection ACD is one of the challenging tasks in the Aspectbased sentiment Analysis problem. The purpose of this task is to identify the aspect categories mentioned in usergenerated reviews from a set of predefined categories. In this paper, we investigate the performance of various monolingual pretrained language models compared with multilingual models on the Vietnamese aspect category detection problem. We conduct the experiments on two benchmark datasets for the restaurant and hotel domain. The experimental results demonstrate the effectiveness of the monolingual PhoBERT model compared with the others on both datasets. We also evaluate the performance of a multilingual model based on the combination of the whole SemEval2016 datasets in other languages with the Vietnamese dataset. To the best of our knowledge, our research study is the first attempt to evaluate various available pretrained language models on the aspect category detection task and to utilize datasets from other languages based on multilingual models.
About subordinated generalizations of 3 classical models of option pricing ; In this paper, we investigate the relation between the Bachelier and BlackScholes models driven by infinitely divisible inverse subordinators. Such models, in contrast to their classical equivalents, can be used in markets where periods of stagnation are observed. We introduce the subordinated CoxRossRubinstein model and prove that the price of the underlying in that model converges in distribution and in Skorokhod space to the price of the underlying in the subordinated BlackScholes model defined in 31. Motivated by this fact we price the selected option contracts using binomial trees. The results are compared to other numerical methods.
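For context, the classical (non-subordinated) Cox-Ross-Rubinstein tree that the abstract above generalizes can be sketched in a few lines: a European call is valued by backward induction on a recombining binomial lattice. The parameters below are illustrative.

```python
import math

# Classical CRR binomial tree for a European call option.

def crr_call(S0, K, r, sigma, T, n):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))       # up factor
    d = 1.0 / u                               # down factor
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the n+1 leaves, then backward induction
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

price = crr_call(S0=100.0, K=100.0, r=0.0, sigma=0.2, T=1.0, n=200)
# converges toward the BlackScholes value (about 7.97 for these inputs)
```

The subordinated version replaces calendar time with an inverse-subordinator time change, so the tree's steps occur at random times; the classical tree above is the limit without stagnation periods.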
Smoothing and Shrinking the Sparse Seq2Seq Search Space ; Current sequencetosequence models are trained to minimize crossentropy and use softmax to compute the locally normalized probabilities over target sequences. While this setup has led to strong results in a variety of tasks, one unsatisfying aspect is its length bias models give high scores to short, inadequate hypotheses and often make the empty string the argmax the socalled cat got your tongue problem. Recently proposed entmaxbased sparse sequencetosequence models present a possible solution, since they can shrink the search space by assigning zero probability to bad hypotheses, but their ability to handle wordlevel tasks with transformers has never been tested. In this work, we show that entmaxbased models effectively solve the cat got your tongue problem, removing a major source of model error for neural machine translation. In addition, we generalize label smoothing, a critical regularization technique, to the broader family of FenchelYoung losses, which includes both crossentropy and the entmax losses. Our resulting labelsmoothed entmax loss models set a new state of the art on multilingual graphemetophoneme conversion and deliver improvements and better calibration properties on crosslingual morphological inflection and machine translation for 6 language pairs.
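The sparsity property central to the abstract above can be illustrated with sparsemax, the alpha = 2 member of the entmax family: unlike softmax, it can assign exactly zero probability to low-scoring options, which is what lets these models prune hypotheses from the search space. A pure-Python sketch with illustrative scores:

```python
# Sparsemax: Euclidean projection of a score vector onto the simplex.
# Scores below a data-dependent threshold tau receive exactly zero mass.

def sparsemax(z):
    zs = sorted(z, reverse=True)
    cumsum, k = 0.0, 0
    for j, zj in enumerate(zs, start=1):
        cumsum += zj
        if zj > (cumsum - 1.0) / j:   # zj stays in the support
            k = j
    tau = (sum(zs[:k]) - 1.0) / k     # threshold for the selected support
    return [max(zi - tau, 0.0) for zi in z]

p = sparsemax([1.0, 0.8, -1.0])
# the lowest-scoring option gets probability exactly 0, not merely small
```

Softmax over the same scores would give every option strictly positive probability; the hard zero here is the mechanism behind shrinking the beam search space.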
Transfer Learning of Memory Kernels in Coarsegrained Modeling ; The present work concerns the transferability of coarsegrained CG modeling in reproducing the dynamic properties of the reference atomistic systems across a range of parameters. In particular, we focus on implicitsolvent CG modeling of polymer solutions. The CG model is based on the generalized Langevin equation, where the memory kernel plays the critical role in determining the dynamics in all time scales. Thus, we propose methods for transfer learning of memory kernels. The key ingredient of our methods is Gaussian process regression. By integration with the model order reduction via proper orthogonal decomposition and the active learning technique, the transfer learning can be practically efficient and requires minimum training data. Through two example polymer solution systems, we demonstrate the accuracy and efficiency of the proposed transfer learning methods in the construction of transferable memory kernels. The transferability allows for outofsample predictions, even in the extrapolated domain of parameters. Built on the transferable memory kernels, the CG models can reproduce the dynamic properties of polymers in all time scales at different thermodynamic conditions such as temperature and solvent viscosity and for different systems with varying concentrations and lengths of polymers.
Musical Mix Clarity Prediction using Decomposition and Perceptual Masking Thresholds ; Objective measurement of perceptually motivated music attributes has application in both target driven mixing and mastering methodologies and music information retrieval. This work proposes a perceptual model of mix clarity which decomposes a mixed input signal into transient, steadystate, and residual components. Masking thresholds are calculated for each component and their relative relationship is used to determine an overall masking score as the model's output. Three variants of the model were tested against subjective mix clarity scores gathered from a controlled listening test. The best performing variant achieved a Spearman's rank correlation of rho = 0.8382 (p < 0.01). Furthermore, the model output was analysed using an independent dataset generated by progressively applying degradation effects to the test stimuli. Analysis of the model suggested a close relationship between the proposed model and the subjective mix clarity scores, particularly when masking was measured using linearly spaced analysis bands. Moreover, the presence of noiselike residual signals was shown to have a negative effect on the perceived mix clarity.
OpenCV2X Modelling of the V2X Cellular Sidelink and Performance Evaluation for Aperiodic Traffic ; This paper presents OpenCV2X, the first publicly available, opensource simulation model of the Third Generation Partnership Project 3GPP Release 14 Cellular Vehicle to Everything CV2X sidelink, which forms the basis for 5G NR Mode 2 under later releases. This model is fully compliant with the existing vehicular service and application layers, including messaging sets as defined by the automotive and standards communities providing a fully standardised, crosslayer communication model. Using this model, we show how the current sidelink scheduling mechanism performs poorly when scheduling applications with highly aperiodic communication characteristics, such as ETSI Cooperative Awareness Messages CAMs. We then provide the first indepth evaluation of dedicated perpacket aperiodic scheduling mechanisms, in contrast to schemes that parameterise the existing algorithm. This paper highlights that the level of aperiodicity exhibited by the application model greatly impacts scheduling performance. Finally, we analyse how such scheduling mechanisms might coexist.
Finding Geometric Models by Clustering in the Consensus Space ; We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies. The problem is formalized as finding dominant model instances progressively without forming crisp pointtomodel assignments. Dominant instances are found via a RANSAClike sampling and a consolidation process driven by a model quality function considering previously proposed instances. New ones are found by clustering in the consensus space. This new formulation leads to a simple iterative algorithm with stateoftheart accuracy while running in realtime on a number of vision problems at least two orders of magnitude faster than the competitors on twoview motion estimation. Also, we propose a deterministic sampler reflecting the fact that realworld data tend to form spatially coherent structures. The sampler returns connected components in a progressively densified neighborhoodgraph. We present a number of applications where the use of multiple geometric models improves accuracy. These include pose estimation from multiple generalized homographies; trajectory estimation of fastmoving objects; and we also propose a way of using multiple homographies in global SfM algorithms. Source code httpsgithub.comdaniniclusteringinconsensusspace.
AVATAR Blender addon for fast creation of 3D human models ; Creating an articulated and realistic human 3D model is a complicated task, not only to get a model with the right body proportions but also to carry out the whole process of rigging the model with correct articulation points and vertex weights. Having a tool that can create such a model with just a few clicks will be very advantageous for amateur developers to use in their projects, for researchers to easily generate datasets to train neural networks, and for industry in game development. We present software, integrated in Blender in the form of an addon, that allows us to design and animate dressed 3D human models based on Makehuman with just a few clicks. Moreover, as it is already integrated in Blender, python scripts can be created to animate, render and further customize the currently available options.
Correcting Automated and Manual Speech Transcription Errors using Warped Language Models ; Masked language models have revolutionized natural language processing systems in the past few years. A recently introduced generalization of masked language models called warped language models are trained to be more robust to the types of errors that appear in automatic or manual transcriptions of spoken language by exposing the language model to the same types of errors during training. In this work we propose a novel approach that takes advantage of the robustness of warped language models to transcription noise for correcting transcriptions of spoken language. We show that our proposed approach is able to achieve up to 10% reduction in the word error rates of both automatic and manual transcriptions of spoken language.
Constraining viscous dark energy models with the latest cosmological data ; Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy VDE models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they just deviate very slightly from the standard cosmological model and can alleviate effectively the current H0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and the scale invariant primordial power spectrum is strongly excluded at least at the 5.5 sigma confidence level in all three VDE models, as in the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter eta in the three VDE scenarios.
Local Collaborative Autoencoders ; TopN recommendation is a challenging problem because complex and sparse useritem interactions should be adequately addressed to achieve highquality recommendation results. The local latent factor approach has been successfully used with multiple local models to capture diverse user preferences with different subcommunities. However, previous studies have not fully explored the potential of local models, and failed to identify many small and coherent subcommunities. In this paper, we present Local Collaborative Autoencoders LOCA, a generalized local latent factor framework. Specifically, LOCA adopts different neighborhood ranges at the training and inference stages. In addition, LOCA uses a novel subcommunity discovery method, maximizing the coverage of a union of local models and employing a large number of diverse local models. By adopting autoencoders as the base model, LOCA captures latent nonlinear patterns representing meaningful useritem interactions within subcommunities. Our experimental results demonstrate that LOCA is scalable and outperforms stateoftheart models on several public benchmarks, by 2.99-4.70% in Recall and 1.02-7.95% in NDCG, respectively.
Automated Cleanup of the ImageNet Dataset by Model Consensus, Explainability and Confident Learning ; The convolutional neural networks CNNs trained on ILSVRC12 ImageNet were the backbone of various applications as a generic classifier, a feature extractor or a base model for transfer learning. This paper describes automated heuristics based on model consensus, explainability and confident learning to correct labeling mistakes and remove ambiguous images from this dataset. After making these changes on the training and validation sets, the ImageNetClean improves the model performance by 22.4 for SqueezeNet and EfficientNetB0 models. The results support the importance of larger image corpora and semisupervised learning, but the original datasets must be fixed to avoid transmitting their mistakes and biases to the student learner. Further contributions describe the training impacts of widescreen input resolutions in portrait and landscape orientations. The trained models and scripts are published on Github httpsgithub.comkecsapimagenetclean to clean up ImageNet and ImageNetV2 datasets for reproducible research.
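The model-consensus heuristic described above can be sketched as a simple quorum filter: an example is kept only if enough ensemble members agree with its label, and is flagged for relabeling or removal otherwise. The ensemble here is simulated by hard-coded prediction lists; the labels and quorum threshold are hypothetical.

```python
# Consensus-based label cleanup: keep example i only if at least
# `quorum` of the ensemble's predictions match its recorded label.

def consensus_filter(labels, predictions, quorum):
    """predictions[i][m] = label predicted by model m for example i."""
    keep, flagged = [], []
    for i, (label, preds) in enumerate(zip(labels, predictions)):
        votes = sum(1 for p in preds if p == label)
        (keep if votes >= quorum else flagged).append(i)
    return keep, flagged

labels = ["cat", "dog", "cat", "fox"]
predictions = [
    ["cat", "cat", "cat"],   # all models agree with the label
    ["dog", "cat", "dog"],   # 2 of 3 agree
    ["dog", "dog", "fox"],   # none agree: likely a labeling mistake
    ["fox", "dog", "fox"],   # 2 of 3 agree
]
keep, flagged = consensus_filter(labels, predictions, quorum=2)
```

In the paper's pipeline, disagreement flags would then be combined with explainability and confident-learning signals rather than acted on alone.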
A Sublattice PhaseField Model for Direct CALPHAD Database Coupling ; The phasefield method has been established as a de facto standard for simulating the microstructural evolution of materials. In quantitative modeling the assessment and compilation of thermodynamickinetic data is largely dominated by the CALPHAD approach, which has produced a large set of experimentally and computationally generated Gibbs free energy and atomic mobility data in a standardized format the thermodynamic database TDB file format. Harnessing this data for the purpose of phasefield modeling is an ongoing effort encompassing a wide variety of approaches. In this paper, we aim to directly link CALPHAD data to the phasefield method, without intermediate fitting or interpolation steps. We introduce a model based on the KimKimSuzuki KKS approach. This model includes sublattice site fractions and can directly utilize data from TDB files. Using this approach, we demonstrate the model on the UZr and MoNiRe systems.
On ComputationallyScalable SpatioTemporal Regression Clustering of Precipitation Threshold Excesses ; Focusing on regression based analysis of extremes in the presence of systematically missing covariates, this work presents a datadriven spatiotemporal regression based clustering of threshold excesses. It is shown that in the presence of systematically missing covariates the behavior of threshold excesses becomes nonstationary and nonhomogeneous. The presented approach describes this complex behavior by a set of local stationary Generalized Pareto Distribution GPD models, where the parameters are expressed as regression models, and a latent spatiotemporal switching process. The spatiotemporal switching process is resolved by the nonparametric Finite Element Methodology for time series analysis with Bounded Variation of the model parameters FEMBV. The presented FEMBVGPD approach goes beyond the strong a priori assumptions made in standard latent class models like Mixture Models and Hidden Markov Models. In addition, it provides a pragmatic description of the underlying dependency structure. The performance of the framework is demonstrated on historical precipitation data for Switzerland and compared with the results obtained by standard methods on the same data.
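The local building block of the FEMBV-GPD framework above is the Generalized Pareto Distribution for threshold excesses. A minimal sketch of its survival (exceedance-probability) function follows; the scale and shape values are illustrative, not fitted to the Swiss precipitation data.

```python
import math

# Survival function of a Generalized Pareto excess X over a threshold:
# P(X > x) = (1 + xi*x/sigma)^(-1/xi), with the exponential tail as the
# xi -> 0 limit and a finite upper endpoint when xi < 0.

def gpd_survival(x, sigma, xi):
    """P(X > x) for a GPD excess X with scale sigma and shape xi."""
    if x < 0:
        return 1.0
    if abs(xi) < 1e-12:                 # xi -> 0 limit: exponential tail
        return math.exp(-x / sigma)
    arg = 1.0 + xi * x / sigma
    if arg <= 0:                        # beyond the upper endpoint (xi < 0)
        return 0.0
    return arg ** (-1.0 / xi)

# A heavier shape parameter gives a heavier tail, hence a larger
# exceedance probability at the same level.
p_light = gpd_survival(10.0, sigma=5.0, xi=0.0)
p_heavy = gpd_survival(10.0, sigma=5.0, xi=0.5)
```

In the paper's setting, sigma and xi are themselves regression models in covariates, and the latent FEMBV switching process selects which local GPD model is active at each location and time.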
Probabilistic Analogical Mapping with Semantic Relation Networks ; The human ability to flexibly reason using analogies with domain-general content depends on mechanisms for identifying relations between concepts, and for mapping concepts and their relations across analogs. Building on a recent model of how semantic relations can be learned from non-relational word embeddings, we present a new computational model of mapping between two analogs. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distributed representations of individual concepts and of relations between concepts. Through comparisons of model predictions with human performance in a novel mapping task requiring integration of multiple relations, as well as in several classic studies, we demonstrate that the model accounts for a broad range of phenomena involving analogical mapping by both adults and children. We also show the potential for extending the model to deal with analog retrieval. Our approach demonstrates that human-like analogical mapping can emerge from comparison mechanisms applied to rich semantic representations of individual concepts and relations.
TAPAS at SemEval-2021 Task 9: Reasoning over tables with intermediate pretraining ; We present the TAPAS contribution to the Shared Task on Statement Verification and Evidence Finding with Tables (SemEval 2021 Task 9, Wang et al., 2021). SEM TAB FACT Task A is a classification task of recognizing whether a statement is entailed, neutral, or refuted by the content of a given table. We adapt the binary TAPAS model of Eisenschlos et al. (2020) to this task. We learn two binary classification models: a first model to predict whether a statement is neutral or non-neutral, and a second one to predict whether it is entailed or refuted. As the shared task training set contains only entailed or refuted examples, we generate artificial neutral examples to train the first model. Both models are pretrained using a masked language modeling (Mask-LM) objective, intermediate counterfactual and synthetic data (Eisenschlos et al., 2020) and TABFACT (Chen et al., 2020), a large table entailment dataset. We find that the artificial neutral examples are somewhat effective at training the first model, achieving 68.03 test F1 versus the 60.47 of a majority baseline. For the second stage, we find that the pretraining on the intermediate data and TABFACT improves the results over Mask-LM pretraining (68.03 vs. 57.01).
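The two-model cascade described above (neutral vs. non-neutral, then entailed vs. refuted) reduces to simple decision logic once each model emits a probability. A minimal sketch, assuming placeholder probabilities standing in for the two TAPAS classifiers and a 0.5 decision threshold (both assumptions of mine):

```python
# Two-stage cascade for three-way table-entailment labels: model 1 decides
# neutral vs. non-neutral, model 2 decides entailed vs. refuted. The input
# probabilities stand in for the paper's two TAPAS models; the threshold
# value is an assumption, not taken from the paper.

def cascade_predict(p_nonneutral, p_entailed, threshold=0.5):
    """Combine two binary probabilities into one of three labels."""
    if p_nonneutral < threshold:
        return "neutral"
    return "entailed" if p_entailed >= threshold else "refuted"
```

The cascade keeps the second model's training set free of the artificially generated neutral examples, which only the first stage needs.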
Speaker-conditioned acoustic modeling for multi-speaker conversational ASR ; In this paper, we propose a novel approach for the transcription of speech conversations with natural speaker overlap from single-channel speech recordings. The proposed model is a combination of a speaker diarization system and a hybrid automatic speech recognition (ASR) system. The speaker-conditioned acoustic model (SCAM) in the ASR system consists of a series of embedding layers which use the speaker activity inputs from the diarization system to derive speaker-specific embeddings. The outputs of the SCAM are speaker-specific senones that are used for decoding the transcripts for each speaker in the conversation. In this work, we experiment with the automatic speaker activity decisions generated using an end-to-end speaker diarization system. A joint learning approach is also proposed, where the diarization model and the ASR acoustic model are jointly optimized. The experiments are performed on the mixed-channel two-speaker recordings from the Switchboard corpus of telephone conversations. In these experiments, we show that the proposed acoustic model, incorporating speaker activity decisions and joint optimization, improves significantly over the ASR system with explicit source filtering: relative improvements of 12% in word error rate (WER) over the baseline system.
LT-LM: a novel non-autoregressive language model for single-shot lattice rescoring ; Neural network-based language models are commonly used in rescoring approaches to improve the quality of modern automatic speech recognition (ASR) systems. Most of the existing methods are computationally expensive since they use autoregressive language models. We propose a novel rescoring approach, which processes the entire lattice in a single call to the model. The key feature of our rescoring policy is a novel non-autoregressive Lattice Transformer Language Model (LT-LM). This model takes the whole lattice as an input and predicts a new language score for each arc. Additionally, we propose an artificial lattice generation approach to incorporate a large amount of text data in the LT-LM training process. Our single-shot rescoring performs orders of magnitude faster than other rescoring methods in our experiments. It is more than 300 times faster than pruned RNNLM lattice rescoring and N-best rescoring, while slightly inferior in terms of WER.
Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks ; Explaining neural network models is important for increasing their trustworthiness in real-world applications. Most existing methods generate post-hoc explanations for neural network models by identifying individual feature attributions or detecting interactions between adjacent features. However, for models with text pairs as inputs (e.g., paraphrase identification), existing methods are not sufficient to capture feature interactions between two texts, and their simple extension of computing all word-pair interactions between two texts is computationally inefficient. In this work, we propose the Group Mask (GMASK) method to implicitly detect word correlations by grouping correlated words from the input text pair together and measuring their contribution to the corresponding NLP task as a whole. The proposed method is evaluated with two different model architectures (decomposable attention model and BERT) across four datasets, including natural language inference and paraphrase identification tasks. Experiments show the effectiveness of GMASK in providing faithful explanations to these models.
Taxonomy of Dark Energy Models ; The accelerated expansion of the Universe is one of the main discoveries of the past decades, indicating the presence of an unknown component: the dark energy. Evidence of its presence is being gathered by a succession of observational experiments with increasing precision in their measurements. However, the most accepted model for explaining the dynamics of our Universe, the so-called Lambda cold dark matter model, faces several problems related to the nature of such an energy component. This has led to a growing exploration of alternative models attempting to solve those drawbacks. In this review, we briefly summarize the characteristics of a non-exhaustive list of dark energy models, as well as some of the most used cosmological samples. Next, we discuss how to constrain each model's parameters using observational data. Finally, we summarize the status of dark energy modeling.
Learning dynamic and hierarchical traffic spatiotemporal features with Transformer ; Traffic forecasting is an indispensable part of intelligent transportation systems (ITS), and long-term network-wide accurate traffic speed forecasting is one of the most challenging tasks. Recently, deep learning methods have become popular in this domain. As traffic data are physically associated with road networks, most proposed models treat the problem as spatiotemporal graph modeling and use Graph Convolution Network (GCN) based methods. These GCN-based models depend heavily on a predefined and fixed adjacency matrix to reflect the spatial dependency. However, the predefined fixed adjacency matrix is limited in reflecting the actual dependence of traffic flow. This paper proposes a novel model, Traffic Transformer, for spatiotemporal graph modeling and long-term traffic forecasting to overcome these limitations. The Transformer is the most popular framework in Natural Language Processing (NLP). By adapting it to the spatiotemporal problem, Traffic Transformer hierarchically extracts spatiotemporal features from the data dynamically through multi-head attention and masked multi-head attention mechanisms, and fuses these features for traffic forecasting. Furthermore, analyzing the attention weight matrices can identify the influential parts of road networks, allowing us to understand traffic networks better. Experimental results on public traffic network datasets and real-world traffic network datasets that we generated demonstrate that our proposed model achieves better performance than the state-of-the-art ones.
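The attention weight matrices mentioned above are what let the model learn dependencies without a fixed adjacency matrix: each row gives a data-driven weighting over all other nodes. A generic single-head scaled dot-product attention sketch (not the Traffic Transformer implementation; the node features are illustrative):

```python
# Minimal single-head scaled dot-product attention over node feature vectors,
# illustrating how a learned, row-stochastic attention matrix can replace a
# fixed adjacency matrix and expose influential nodes. A generic sketch, not
# the paper's Traffic Transformer.
import math

def attention_weights(queries, keys):
    """A[i][j] = softmax_j(q_i . k_j / sqrt(d)); each row sums to 1."""
    d = len(queries[0])
    weights = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                      # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights
```

Inspecting column sums of such a matrix indicates which nodes the rest of the network attends to most, which is the kind of analysis the abstract describes for finding influential parts of the road network.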
Dynamics of tachyon dark energy on large scales and its imprint on the observed galaxy power spectrum ; In the present work, we study the large-scale matter power spectrum as well as the observed galaxy power spectrum for a non-canonical tachyon-field dark energy model, considering the full general relativistic perturbation equations. We form a set of coupled autonomous equations including both the background and linearly perturbed quantities and obtain their solutions numerically with a proper set of initial conditions. We consider different scalar field potentials for our study. Deviations from the concordance Lambda-CDM model are studied for different relevant quantities. Our study shows that the non-canonical tachyon dark energy model produces enhanced gravitational potentials, comoving density contrast, as well as linear growth factor for matter perturbations compared to Lambda-CDM. It is also observed that for tachyon dark energy models there is suppression of power on large scales compared to both the Lambda-CDM model and previously studied canonical scalar field models.
Real-time Forecast Models for TBM Load Parameters Based on Machine Learning Methods ; Because of their fast advance rate and improved personnel safety, tunnel boring machines (TBMs) have been widely used in a variety of tunnel construction projects. The dynamic modeling of TBM load parameters (including torque, advance rate and thrust) plays an essential part in the design, safe operation and fault prognostics of this complex engineering system. In this paper, based on in-situ TBM operational data, we use machine-learning (ML) methods to build real-time forecast models for TBM load parameters, which can instantaneously provide the future values of the TBM load parameters as soon as the current data are collected. To decrease the model complexity and improve the generalization, we also apply the least absolute shrinkage and selection operator (Lasso) method to extract the essential features of the forecast task. The experimental results show that the forecast models based on deep-learning methods, e.g., the recurrent neural network and its variants, outperform the ones based on shallow-learning methods, e.g., support vector regression and random forest. Moreover, the Lasso-based feature extraction significantly improves the performance of the resultant models.
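The Lasso-based feature extraction mentioned above works by driving the coefficients of uninformative inputs exactly to zero. A minimal coordinate-descent sketch of that mechanism (plain Python for illustration; a real pipeline would use an optimized solver, and the toy data below are mine, not the paper's TBM measurements):

```python
# Coordinate-descent Lasso sketch showing how the L1 penalty zeroes out
# irrelevant features, the property exploited for feature extraction in
# the abstract above. Illustrative only; not the paper's implementation.

def soft_threshold(rho, lam):
    """Shrink rho toward zero by lam; return 0 inside the dead zone."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w
```

On data where only the first feature drives the target, the second coefficient is shrunk exactly to zero while the first stays close to its least-squares value.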
Fermion mass hierarchy and (g-2) anomalies in an extended 3HDM model ; We propose an extension of the three-Higgs-doublet model (3HDM), where the Standard Model (SM) particle content is enlarged by the inclusion of two inert SU(2)_L scalar doublets, three inert and two active electrically neutral gauge-singlet scalars, charged vector-like fermions, and Majorana neutrinos. These additional particles are introduced to generate the SM fermion mass hierarchy from a sequential loop suppression mechanism. In our model the top and exotic fermion masses appear at tree level, whereas the remaining fermions get their masses radiatively. Specifically, the bottom, charm, tau and muon masses appear at one loop; the masses for the light up, down and strange quarks, as well as for the electron, at two loops; and the masses for the light active neutrinos at three loops. Our model successfully accounts for the SM fermion masses and mixings and accommodates the observed Dark Matter relic density, the electron and muon anomalous magnetic moments, as well as the constraints arising from charged Lepton Flavor Violating (LFV) processes. The proposed model predicts charged LFV decays within the reach of forthcoming experiments.
A Multiscale Model for El Niño Complexity ; The El Niño-Southern Oscillation (ENSO) exhibits diverse characteristics in spatial pattern, peak intensity, and temporal evolution. Here we develop a three-region multiscale stochastic model to show that the observed ENSO complexity can be explained by combining intraseasonal, interannual, and decadal processes. The model starts with a deterministic three-region system for the interannual variabilities. Then two stochastic processes of the intraseasonal and decadal variation are incorporated. The model can reproduce not only the general properties of the observed ENSO events, but also the complexity in patterns (e.g., Central Pacific vs. Eastern Pacific events), intensity (e.g., the 10-20-year recurrence of extreme El Niños), and temporal evolution (e.g., more multi-year La Niñas than multi-year El Niños). While conventional conceptual models have typically been used to understand the dynamics behind the common properties of ENSO, this model offers a powerful tool to understand and predict the ENSO complexity that challenges our understanding of 21st-century ENSO.
Towards Variable-Length Textual Adversarial Attacks ; Adversarial attacks have shown the vulnerability of machine learning models; however, it is non-trivial to conduct textual adversarial attacks on natural language processing tasks due to the discreteness of data. Most previous approaches conduct attacks with the atomic replacement operation, which usually leads to fixed-length adversarial examples and therefore limits exploration of the decision space. In this paper, we propose variable-length textual adversarial attacks (VL-Attack) and integrate three atomic operations, namely insertion, deletion and replacement, into a unified framework, by introducing and manipulating a special blank token while attacking. In this way, our approach is able to more comprehensively find adversarial examples around the decision boundary and effectively conduct adversarial attacks. Specifically, our method drops the accuracy of IMDB classification by 96% while editing only 1.3% of the tokens when attacking a pretrained BERT model. In addition, fine-tuning the victim model with generated adversarial samples can improve the robustness of the model without hurting its performance, especially for length-sensitive models. On the task of non-autoregressive machine translation, our method achieves a 33.18 BLEU score on IWSLT14 German-English translation, an improvement of 1.47 over the baseline model.
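The three atomic operations above can be sketched as plain list edits over a token sequence; the `[BLANK]` token name and this API are my assumptions for illustration, not the paper's implementation:

```python
# Sketch of the three atomic edit operations unified via a special blank
# token, in the spirit of the variable-length attack described above. The
# "[BLANK]" token name and the function signatures are assumptions made
# for this illustration.

BLANK = "[BLANK]"

def insert_blank(tokens, pos):
    """Insertion: open a slot the attack can later fill with a real word."""
    return tokens[:pos] + [BLANK] + tokens[pos:]

def delete_token(tokens, pos):
    """Deletion: remove the token at pos, shortening the sequence."""
    return tokens[:pos] + tokens[pos + 1:]

def replace_token(tokens, pos, new_token):
    """Replacement: substitute a token (fills a blank or swaps a word)."""
    return tokens[:pos] + [new_token] + tokens[pos + 1:]
```

Because insertion first materializes a blank slot and replacement later fills it, all three operations reduce to position-wise substitutions over a sequence whose length is allowed to change.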
Manifold Model for High-Resolution fMRI Joint Reconstruction and Dynamic Quantification ; Oscillating Steady-State Imaging (OSSI) is a recent fMRI acquisition method that exploits a large and oscillating signal, and can provide high-SNR fMRI. However, the oscillatory nature of the signal leads to an increased number of acquisitions. To improve temporal resolution and accurately model the nonlinearity of OSSI signals, we build the MR physics for OSSI signal generation into a regularizer for the undersampled reconstruction, rather than using subspace models that are not well suited to the data. Our proposed physics-based manifold model turns the disadvantages of OSSI acquisition into advantages and enables joint reconstruction and quantification. The OSSI manifold model (OSSIMM) outperforms subspace models and reconstructs high-resolution fMRI images with a factor of 12 acceleration and without spatial or temporal resolution smoothing. Furthermore, OSSIMM can dynamically quantify important physics parameters, including R2 maps, with a temporal resolution of 150 ms.
Multi-source Neural Topic Modeling in Multi-view Embedding Spaces ; Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in neural topic modeling to address data sparsity in short texts or small collections of documents. This work presents a novel neural topic modeling framework using multi-view embedding spaces: (1) pretrained topic embeddings, and (2) pretrained word embeddings (context-insensitive from GloVe and context-sensitive from BERT models), jointly from one or many sources, to improve topic quality and better deal with polysemy. In doing so, we first build respective pools of pretrained topic embeddings (i.e., TopicPool) and word embeddings (i.e., WordPool). We then identify one or more relevant source domains and transfer knowledge to guide meaningful learning in the sparse target domain. Within neural topic modeling, we quantify the quality of topics and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from the news and medical domains. Introducing the multi-source multi-view embedding spaces, we have shown state-of-the-art neural topic modeling using 6 source (high-resource) and 5 target (low-resource) corpora.
Modeling Newsworthiness for Lead Generation Across Corpora ; Journalists obtain leads, or story ideas, by reading large corpora of government records (court cases, proposed bills, etc.). However, only a small percentage of such records are interesting documents. We propose a model of newsworthiness aimed at surfacing interesting documents. We train models on automatically labeled corpora (published newspaper articles) to predict whether each article was a front-page article (i.e., newsworthy) or not (i.e., less newsworthy). We transfer these models to unlabeled corpora (court cases, bills, city-council meeting minutes) to rank documents in these corpora on newsworthiness. A fine-tuned RoBERTa model achieves .93 AUC performance on held-out labeled documents, and .88 AUC on expert-validated unlabeled corpora. We provide interpretation and visualization for our models.
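The AUC figures reported above measure ranking quality: the probability that a randomly chosen newsworthy document is scored above a randomly chosen non-newsworthy one. A minimal Mann-Whitney computation of that metric (generic evaluation code, not the paper's implementation; the scores below are illustrative):

```python
# Mann-Whitney computation of AUC from model scores and binary labels,
# the evaluation metric used in the abstract above (1 = front-page /
# newsworthy). Ties between a positive and a negative count as half a win.

def auc(scores, labels):
    """Probability a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every newsworthy document outranks every non-newsworthy one; 0.5 is chance-level ranking.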