Dynamical generalization of Yetter's model based on a crossed module of discrete groups ; We construct a dynamical lattice model based on a crossed module of possibly nonabelian finite groups. Its degrees of freedom are defined on links and plaquettes, while gauge transformations are based on vertices and links of the underlying lattice. We specify the Hilbert space, define basic observables, including the Hamiltonian, and initiate a discussion on the model's phase diagram. The constructed model generalizes, and in appropriate limits reduces to, topological theories with symmetries described by groups and crossed modules, lattice Yang-Mills theory, and 2-form electrodynamics. We conclude by reviewing classifying spaces of crossed modules, with an emphasis on the direct relation between their geometry and properties of the gauge theories under consideration.
Loop-Generated Neutrino Masses in Composite Higgs Models ; We present a composite scotogenic model for neutrino masses, which are generated via loops of Z2-odd composite scalars. We consider three different approaches to the couplings between the neutrinos (including three right-handed singlets) and the composite sector: ETC-like four-fermion interactions, fundamental partial compositeness, and fermion partial compositeness. In all cases, the model can feature sizeable couplings and remain viable with respect to various experimental constraints if the three Z2-odd right-handed neutrinos have masses between the TeV and the Planck scales. Additionally, the lightest Z2-odd composite scalar may play the role of dark matter, either via thermal freeze-out or as an asymmetric relic. This mechanism can be featured in a variety of models based on vacuum misalignment. For concreteness, we demonstrate it in a composite two-Higgs scheme based on the coset SU(6)/Sp(6).
Fracton phases via exotic higher-form symmetry breaking ; We study p-string condensation mechanisms for fracton phases from the viewpoint of higher-form symmetry, focusing on the examples of the X-cube model and the rank-two symmetric-tensor U(1) scalar charge theory. This work is motivated by questions about the relationship between fracton phases and continuum quantum field theories, and also provides general principles to describe p-string condensation independent of specific lattice model constructions. We give a perspective on higher-form symmetry in lattice models in terms of cellular homology. Applying this perspective to the coupled-layer construction of the X-cube model, we identify a foliated 1-form symmetry that is broken in the X-cube phase but preserved in the phase of decoupled toric code layers. Similar considerations for the scalar charge theory lead to a framed 1-form symmetry. These symmetries are distinct from the standard 1-form symmetries that arise, for instance, in relativistic quantum field theory. We also give a general discussion of interpreting p-string condensation, and related constructions involving gauging of symmetry, in terms of higher-form symmetry.
Denoising Diffusion Implicit Models ; Denoising diffusion probabilistic models (DDPMs) have achieved high-quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high-quality samples 10x to 50x faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
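As a concrete illustration of the deterministic reverse process described in this abstract, here is a minimal NumPy sketch of the eta = 0 DDIM update. The `eps_model` callable stands in for a trained noise-prediction network, and the schedule, function names, and shapes are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update (eta = 0).

    x_t:          current noisy sample
    eps_pred:     the model's noise prediction at step t
    alpha_bar_*:  cumulative noise-schedule products
    """
    # Clean sample x0 implied by the current noise estimate
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
    # Deterministically move to the previous (less noisy) marginal
    return np.sqrt(alpha_bar_prev) * x0_pred + np.sqrt(1.0 - alpha_bar_prev) * eps_pred

def ddim_sample(eps_model, shape, alpha_bars, rng):
    """Sample by iterating the update; in practice DDIM runs over a
    strided subsequence of the training steps, which is what makes it fast."""
    x = rng.standard_normal(shape)
    for t in range(len(alpha_bars) - 1, 0, -1):
        eps = eps_model(x, t)
        x = ddim_step(x, eps, alpha_bars[t], alpha_bars[t - 1])
    return x
```

Because the update is deterministic given the initial noise, the same latent always maps to the same sample, which is what enables the latent-space interpolation mentioned above.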
Tackling the Low-resource Challenge for Canonical Segmentation ; Canonical morphological segmentation consists of dividing words into their standardized morphemes. Here, we are interested in approaches for the task when training data is limited. We compare model performance in a simulated low-resource setting for the high-resource languages German, English, and Indonesian to experiments on new datasets for the truly low-resource languages Popoluca and Tepehua. We explore two new models for the task, borrowing from the closely related area of morphological generation: an LSTM pointer-generator and a sequence-to-sequence model with hard monotonic attention trained with imitation learning. We find that, in the low-resource setting, the novel approaches outperform existing ones on all languages by up to 11.4% accuracy. However, while accuracy in emulated low-resource scenarios is over 50% for all languages, for the truly low-resource languages Popoluca and Tepehua, our best model only obtains 37.4% and 28.4% accuracy, respectively. Thus, we conclude that canonical segmentation remains a challenging task for low-resource languages.
Transformer-GCRF: Recovering Chinese Dropped Pronouns with General Conditional Random Fields ; Pronouns are often dropped in Chinese conversations, and recovering the dropped pronouns is important for NLP applications such as machine translation. Existing approaches usually formulate this as a sequence labeling task of predicting whether there is a dropped pronoun before each token, and its type. Each utterance is treated as a sequence and labeled independently. Although these approaches have shown promise, labeling each utterance independently ignores the dependencies between pronouns in neighboring utterances. Modeling these dependencies is critical to improving the performance of dropped pronoun recovery. In this paper, we present a novel framework that combines the strength of Transformer networks with General Conditional Random Fields (GCRF) to model the dependencies between pronouns in neighboring utterances. Results on three Chinese conversation datasets show that the Transformer-GCRF model outperforms state-of-the-art dropped pronoun recovery models. Exploratory analysis also demonstrates that the GCRF does help capture the dependencies between pronouns in neighboring utterances, thus contributing to the performance improvements.
Slow-roll inflation in f(R,T) gravity and a modified Starobinsky-like inflationary model ; In this work, we studied the slow-roll approximation of cosmic inflation within the context of f(R,T) gravity, where R is the scalar curvature and T is the trace of the energy-momentum tensor. By choosing a minimal coupling between matter and gravity, we obtained the modified slow-roll parameters, the scalar spectral index n_s, the tensor spectral index n_T, and the tensor-to-scalar ratio r. We computed these quantities for a general power-law potential, Natural and Quartic Hilltop inflation, and the Starobinsky model, plotting the trajectories on the (n_s, r) plane. We found that one of the parameters of the Natural and Hilltop models is nontrivially modified. Moreover, if the coupling is in the interval 0.5 < alpha < 5.54, we concluded that the Starobinsky-like model's predictions are in good agreement with the latest Planck measurements, but with the advantage of allowing a wider range of admissible values for r and n_T.
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks ; We study the problem of generating adversarial examples in a black-box setting, in which we only have access to a zeroth-order oracle that provides loss function evaluations. Although this setting has been investigated in previous work, most past approaches using zeroth-order optimization implicitly assume that the gradients of the loss function with respect to the input images are unstructured. In this work, we show that in fact substantial correlations exist within these gradients, and we propose to capture these correlations via a Gaussian Markov random field (GMRF). Given the intractability of the explicit covariance structure of the MRF, we show that the covariance structure can be efficiently represented using the Fast Fourier Transform (FFT), along with low-rank updates, to perform exact posterior estimation under this model. We use this modeling technique to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM), and show that the method uses fewer queries and achieves higher attack success rates than the current state of the art. We also highlight the general applicability of this gradient modeling setup.
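The FFT representation mentioned above rests on the fact that a stationary (circulant) covariance is diagonalized by the discrete Fourier basis, so sampling or applying the covariance costs O(n log n) instead of O(n^2). A minimal sketch, assuming a 1-D periodic Gaussian field rather than the paper's image-sized GMRF:

```python
import numpy as np

def sample_stationary_field(kernel_row, n_samples, rng):
    """Draw samples from a zero-mean stationary periodic Gaussian field
    whose circulant covariance matrix has first row `kernel_row`.

    The eigenvalues of a circulant covariance are the DFT of its first
    row, so filtering white noise in the Fourier domain with the square
    root of those eigenvalues yields exactly-correlated samples.
    """
    spectrum = np.fft.fft(kernel_row).real
    spectrum = np.clip(spectrum, 0.0, None)  # guard tiny negative round-off
    scale = np.sqrt(spectrum)
    z = rng.standard_normal((n_samples, len(kernel_row)))
    return np.fft.ifft(np.fft.fft(z, axis=1) * scale, axis=1).real
```

With NumPy's unnormalized FFT conventions, the resulting samples have covariance exactly equal to the circulant matrix built from `kernel_row`; the paper's posterior estimation additionally layers low-rank updates on top of such a structured prior.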
Dual Inference for Improving Language Understanding and Generation ; Natural language understanding (NLU) and natural language generation (NLG) tasks hold a strong dual relationship, where NLU aims at predicting semantic labels based on natural language utterances and NLG does the opposite. Prior work mainly focused on exploiting the duality in model training in order to obtain models with better performance. However, given the fast-growing scale of models in the current NLP area, it can be difficult to retrain whole NLU and NLG models. To better address this issue, this paper proposes to leverage the duality in the inference stage, without the need for retraining. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed method in both NLU and NLG, showing its great potential for practical use.
Evaluation of memory effects at phase transitions and during relaxation processes ; We propose to describe the dynamics of phase transitions in terms of a nonstationary generalized Langevin equation for the order parameter. By construction, this equation is nonlocal in time, i.e., it involves memory effects whose intensity is governed by a memory kernel. In general, it is a hard task to determine the physical origin and the extent of the memory effects based on the underlying microscopic equations of motion. Therefore, we propose to relate the extent of the memory kernel to quantities that are experimentally observable, such as the induction time and the duration of the phase transformation process. Using a simple kinematic model, we show that the extent of the memory kernel is positively correlated with the duration of the transition and of the same order of magnitude, while the distribution of induction times has no effect. This observation is tested on several model systems for which we have run molecular dynamics simulations: a modified Potts model, a dipole gas, an anharmonic spring in a bath, and a nucleation problem. All these cases are shown to be consistent with the simple theoretical model.
Dynamic Walking: Toward Agile and Efficient Bipedal Robots ; Dynamic walking on bipedal robots has evolved from an idea in science fiction to a practical reality. This is due to continued progress in three key areas: a mathematical understanding of locomotion, the computational ability to encode this mathematics through optimization, and hardware capable of realizing this understanding in practice. In this context, this review article outlines the end-to-end process of methods that have proven effective in the literature for achieving dynamic walking on bipedal robots. We begin by introducing mathematical models of locomotion, from reduced-order models that capture essential walking behaviors to hybrid dynamical systems that encode the full-order continuous dynamics along with discrete foot-strike dynamics. These models form the basis for gait generation via nonlinear optimization problems. Finally, models and their generated gaits merge in the context of real-time control, wherein walking behaviors are translated to hardware. The concepts presented are illustrated throughout in simulation, and experimental instantiations on multiple walking platforms are highlighted to demonstrate the ability to realize dynamic walking on bipedal robots that is agile and efficient.
Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs ; Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just the explicit content at the pixel level, but the contextual content at the semantic and pragmatic levels. We present Rationale-VT Transformer, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that the base pretrained language model benefits from visual adaptation and that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks.
Short-term Wind Speed Forecasting based on LSSVM Optimized by Elitist QPSO ; Nowadays, wind power is considered one of the most widely used renewable energy applications due to its efficient energy use and low pollution. To maintain high integration of wind power into the electricity market, efficient models for wind speed forecasting are in high demand. The nonstationary and nonlinear characteristics of wind speed, however, make the task of wind speed forecasting challenging. The least-squares support vector machine (LSSVM) has proven to be a good forecasting algorithm, mainly for time-series applications such as wind data. To boost the learning performance and generalization capability of the algorithm, LSSVM has two hyperparameters, known as the regularization and kernel parameters, that require careful tuning. In this paper, a modified quantum-behaved particle swarm optimization (QPSO) algorithm is proposed that uses the principle of transposon operators to breed the personal-best and global-best particles of QPSO and improve its global searching capability. The optimization algorithm is then used to generate optimal values for the LSSVM hyperparameters. Finally, the performance of the proposed model is compared with previously known PSO- and QPSO-optimized LSSVM models. Empirical results show that the proposed model displays improved performance compared to the competing methods.
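The tuning loop described above can be sketched as follows. This is a rough illustration under stated assumptions: an RBF-kernel LSSVM regressor (bias term omitted) and plain PSO in log-hyperparameter space, where the paper's elitist QPSO with transposon operators would replace the velocity update; all names, ranges, and constants are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit_predict(Xtr, ytr, Xte, gamma, sigma):
    """LSSVM regression: solve (K + I/gamma) alpha = y, then predict on Xte.
    (The bias term of the full LSSVM formulation is omitted for brevity.)"""
    K = rbf_kernel(Xtr, Xtr, sigma)
    alpha = np.linalg.solve(K + np.eye(len(Xtr)) / gamma, ytr)
    return rbf_kernel(Xte, Xtr, sigma) @ alpha

def pso_tune(Xtr, ytr, Xval, yval, n_particles=8, iters=15, seed=0):
    """Plain PSO over (log10 gamma, log10 sigma), minimizing validation MSE."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-1.0, -1.0], [3.0, 1.0], size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def loss(p):
        pred = lssvm_fit_predict(Xtr, ytr, Xval, 10.0 ** p[0], 10.0 ** p[1])
        return ((pred - yval) ** 2).mean()

    pbest = pos.copy()
    pbest_loss = np.array([loss(p) for p in pos])
    gbest = pbest[pbest_loss.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -2.0, 3.5)  # keep hyperparameters in a sane range
        for i, p in enumerate(pos):
            l = loss(p)
            if l < pbest_loss[i]:
                pbest[i], pbest_loss[i] = p.copy(), l
        gbest = pbest[pbest_loss.argmin()].copy()
    return 10.0 ** gbest[0], 10.0 ** gbest[1]
```

Searching in log space reflects the common observation that both the regularization and kernel parameters vary over orders of magnitude.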
Averaging Principles for Markovian Models of Plasticity ; Mathematical models of biological neural networks are associated with a rich and complex class of stochastic processes. In this paper, we consider a simple plastic neural network whose connectivity (synaptic strength) W(t) depends on a set of activity-dependent processes, to model synaptic plasticity, a well-studied mechanism from neuroscience. A general class of stochastic models has been introduced in earlier work (Robert et al., 2020) to study the stochastic process W(t). It has been observed experimentally that its dynamics occur on a much slower timescale than that of the main cellular processes. The purpose of this paper is to establish limit theorems for the distribution of W(t) with respect to the fast timescale of neuronal processes. The central result of the paper is an averaging principle for the stochastic process W(t). Mathematically, the key variable is the point process whose jumps occur at the instants of neuronal spikes. A thorough analysis of several of its unbounded additive functionals is achieved in the slow-fast limit. Additionally, technical results on interacting shot-noise processes are developed and used in the general proof of the averaging principle.
Semi-supervised Learning by Latent Space Energy-Based Model of Symbol-Vector Coupling ; This paper proposes a latent space energy-based prior model for semi-supervised learning. The model stands on a generator network that maps a latent vector to the observed example. The energy term of the prior model couples the latent vector and a symbolic one-hot vector, so that classification can be based on the latent vector inferred from the observed example. In our learning method, the symbol-vector coupling, the generator network, and the inference network are learned jointly. Our method is applicable to semi-supervised learning in various data domains, such as image, text, and tabular data. Our experiments demonstrate that our method performs well on semi-supervised learning tasks.
Model-independent energy budget for LISA ; We provide an easy method to obtain the kinetic energy fraction in gravitational waves, generated during a cosmological first-order phase transition, as a function of only the wall velocity and quantities that can be determined from the particle physics model at the nucleation temperature. This generalizes recent work that achieved this goal for detonations; here we present the corresponding results for deflagrations and hybrids. Unlike for detonations, the sound speed in the symmetric phase also enters the analysis. We perform a detailed comparison between our model-independent approach and other approaches in the literature. We provide a Python code snippet to determine the kinetic energy fraction K as a function of the wall velocity, the two speeds of sound, and the strength parameter of the phase transition. We also assess how realistic sizable deviations in the speed of sound are close to the phase transition temperature in a specific model.
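The paper ships its own code snippet; the fragment below only illustrates the final bookkeeping step, using the standard relation K = kappa * alpha / (1 + alpha) between the kinetic energy fraction, the transition strength alpha, and an efficiency factor kappa. Computing kappa from the wall velocity and the two sound speeds is the substance of the paper, so here it is simply an input.

```python
def kinetic_energy_fraction(alpha_n, kappa):
    """Kinetic energy fraction K of the fluid.

    alpha_n: transition strength parameter at the nucleation temperature
    kappa:   efficiency factor; in the model-independent approach it is
             determined by the wall velocity and the two speeds of sound,
             here it is passed in as a placeholder.
    """
    return kappa * alpha_n / (1.0 + alpha_n)
```

For a fixed kappa, K grows monotonically with alpha_n and saturates at kappa for very strong transitions.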
Local Knowledge Powered Conversational Agents ; State-of-the-art conversational agents have advanced significantly in conjunction with the use of large transformer-based language models. However, even with these advancements, conversational agents still lack the ability to produce responses that are informative and coherent with the local context. In this work, we propose a dialog framework that incorporates both local knowledge and users' past dialogues to generate high-quality conversations. We introduce an approach to build a dataset based on Reddit conversations, where outbound URL links are widely available in the conversations and the hyperlinked documents can be naturally included as local external knowledge. Using our framework and dataset, we demonstrate that incorporating local knowledge can largely improve informativeness, coherency, and realisticness measures in human evaluations. In particular, our approach consistently outperforms the state-of-the-art conversational model on the Reddit dataset across all three measures. We also find that scaling the size of our models from 117M to 8.3B parameters yields consistent improvements in validation perplexity as well as human-evaluated metrics. Our model with 8.3B parameters can generate human-like responses as rated by various human evaluations in a single-turn dialog setting.
Variational Dynamic Mixtures ; Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic, since many real-world sequences are highly multimodal and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multimodality, we develop variational dynamic mixtures (VDM), a new variational family for inferring sequential latent variables. The VDM approximate posterior at each time step is a mixture density network whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multimodal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multimodal datasets from different domains.
PBoS: Probabilistic Bag-of-Subwords for Generalizing Word Embeddings ; We look into the task of generalizing word embeddings: given a set of pretrained word vectors over a finite vocabulary, the goal is to predict embedding vectors for out-of-vocabulary words, without extra contextual information. We rely solely on the spellings of words and propose a model, along with an efficient algorithm, that simultaneously models subword segmentation and computes subword-based compositional word embeddings. We call the model probabilistic bag-of-subwords (PBoS), as it applies bag-of-subwords over all possible segmentations, weighted by their likelihood. Inspections and affix prediction experiments show that PBoS is able to produce meaningful subword segmentations and subword rankings without any source of explicit morphological knowledge. Word similarity and POS tagging experiments show clear advantages of PBoS over previous subword-level models in the quality of the generated word embeddings across languages.
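One plausible reading of "bag-of-subwords over all possible segmentations, weighted by their likelihood" can be sketched as follows. This is a naive illustration, not the paper's efficient algorithm: it enumerates segmentations recursively and scores each by a unigram subword model, and all names and probabilities are hypothetical.

```python
import numpy as np

def segmentations(word, vocab):
    """All ways to split `word` into subwords drawn from `vocab`."""
    if not word:
        return [[]]
    out = []
    for i in range(1, len(word) + 1):
        if word[:i] in vocab:
            out += [[word[:i]] + rest for rest in segmentations(word[i:], vocab)]
    return out

def pbos_embedding(word, sub_prob, sub_vec, dim):
    """Embedding for an out-of-vocabulary word: average the bag-of-subwords
    vector of every segmentation, weighted by the segmentation's likelihood
    under a unigram subword model."""
    segs = segmentations(word, sub_prob)
    if not segs:
        return np.zeros(dim)
    likes = np.array([np.prod([sub_prob[s] for s in seg]) for seg in segs])
    weights = likes / likes.sum()
    emb = np.zeros(dim)
    for w, seg in zip(weights, segs):
        emb += w * np.mean([sub_vec[s] for s in seg], axis=0)
    return emb
```

The recursive enumeration is exponential in the worst case; the paper's contribution includes an efficient algorithm that avoids exactly this blow-up.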
One Model to Reconstruct Them All: A Novel Way to Use the Stochastic Noise in StyleGAN ; Generative adversarial networks (GANs) have achieved state-of-the-art performance for several image generation and manipulation tasks. Different works have improved the limited understanding of the latent space of GANs by embedding images into specific GAN architectures to reconstruct the original images. We present a novel StyleGAN-based autoencoder architecture, which can reconstruct images with very high quality across several data domains. We demonstrate a previously unknown degree of generalizability by training the encoder and decoder independently and on different datasets. Furthermore, we provide new insights into the significance and capabilities of the noise inputs of the well-known StyleGAN architecture. Our proposed architecture can handle up to 40 images per second on a single GPU, which is approximately 28x faster than previous approaches. Finally, our model also shows promising results when compared to the state of the art on the image denoising task, although it was not explicitly designed for this task.
Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pretraining ; Prior work on data-to-text generation, the task of converting knowledge graph (KG) triples into natural text, focused on domain-specific benchmark datasets. In this paper, by contrast, we verbalize the entire English Wikidata KG and discuss the unique challenges associated with a broad, open-domain, large-scale verbalization. We further show that verbalizing a comprehensive, encyclopedic KG like Wikidata can be used to integrate structured KGs and natural language corpora. In contrast to the many architectures that have been developed to integrate these two sources, our approach converts the KG into natural text, allowing it to be seamlessly integrated into existing language models. It carries the further advantages of improved factual accuracy and reduced toxicity in the resulting language model. We evaluate this approach by augmenting the retrieval corpus in a retrieval language model and showing significant improvements on the knowledge-intensive tasks of open-domain QA and the LAMA knowledge probe.
Improving Multilingual Models with Language-Clustered Vocabularies ; State-of-the-art multilingual models depend on vocabularies that cover all of the languages the model will expect to see at inference time, but the standard methods for generating those vocabularies are not ideal for massively multilingual applications. In this work, we introduce a novel procedure for multilingual vocabulary generation that combines the separately trained vocabularies of several automatically derived language clusters, thus balancing the trade-off between cross-lingual subword sharing and language-specific vocabularies. Our experiments show improvements across languages on key multilingual benchmark tasks (TyDi QA +2.9 F1, XNLI +2.1, WikiAnn NER +2.8 F1) and a factor-of-8 reduction in out-of-vocabulary rate, all without increasing the size of the model or data.
Graph Information Bottleneck ; Representation learning on graph-structured data is challenging because both graph structure and node features carry important information. Graph neural networks (GNNs) provide an expressive way to fuse information from network structure and node features. However, GNNs are prone to adversarial attacks. Here we introduce Graph Information Bottleneck (GIB), an information-theoretic principle that optimally balances expressiveness and robustness of the learned representation of graph-structured data. Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target, while simultaneously constraining the mutual information between the representation and the input data. Different from the general IB, GIB regularizes the structural as well as the feature information. We design two sampling algorithms for structural regularization and instantiate the GIB principle with two new models, GIB-Cat and GIB-Bern, and demonstrate the benefits by evaluating their resilience to adversarial attacks. We show that our proposed models are more robust than state-of-the-art graph defense models. GIB-based models empirically achieve up to a 31% improvement with adversarial perturbation of the graph structure as well as node features.
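The trade-off described in this abstract is the usual IB objective in Lagrangian form; writing D for the input graph data, Y for the target, and Z for the learned representation (standard IB notation, assumed here rather than quoted from the paper), it reads:

```latex
\max_{p(Z \mid D)} \; I(Z; Y) \;-\; \beta \, I(Z; D), \qquad \beta > 0,
```

where the first term rewards predictive sufficiency and the second penalizes information retained from the input, which is what yields robustness to perturbations of structure and features.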
XLVIN: eXecuted Latent Value Iteration Nets ; Value Iteration Networks (VINs) have emerged as a popular method to incorporate planning algorithms within deep reinforcement learning, enabling performance improvements on tasks requiring long-range reasoning and understanding of environment dynamics. This came with several limitations, however: the model is not incentivised in any way to perform meaningful planning computations, the underlying state space is assumed to be discrete, and the Markov decision process (MDP) is assumed fixed and known. We propose eXecuted Latent Value Iteration Networks (XLVINs), which combine recent developments across contrastive self-supervised learning, graph representation learning, and neural algorithmic reasoning to alleviate all of the above limitations, successfully deploying VIN-style models on generic environments. XLVINs match the performance of VIN-like models when the underlying MDP is discrete, fixed, and known, and provide significant improvements over model-free baselines across three general MDP setups.
A Real Triplet-Singlet Extended Standard Model: Dark Matter and Collider Phenomenology ; We examine the collider and dark matter phenomenology of the Standard Model extended by a hypercharge-zero SU(2) triplet scalar and a gauge singlet scalar. In particular, we study the scenario where the singlet and triplet are both charged under a single Z2 symmetry. We find that such an extension is capable of generating the observed dark matter density, while also modifying the collider phenomenology such that the lower bound on the mass of the triplet is smaller than in minimal triplet scalar extensions of the Standard Model. A high triplet mass is in tension with the parameter space that leads to novel electroweak phase transitions in the early universe. Therefore, the lower triplet masses permitted in this extended model are of particular importance for the prospects of successful electroweak baryogenesis and the generation of gravitational waves from early-universe phase transitions.
Spiking Neural Networks Part I: Detecting Spatial Patterns ; Spiking neural networks (SNNs) are biologically inspired machine learning models that build on dynamic neuronal models processing binary and sparse spiking signals in an event-driven, online fashion. SNNs can be implemented on neuromorphic computing platforms that are emerging as energy-efficient co-processors for learning and inference. This is the first of a series of three papers that introduce SNNs to an audience of engineers by focusing on models, algorithms, and applications. In this first paper, we first cover neural models used for conventional artificial neural networks (ANNs) and SNNs. Then, we review learning algorithms and applications for SNNs that aim at mimicking the functionality of ANNs by detecting or generating spatial patterns in rate-encoded spiking signals. We specifically discuss ANN-to-SNN conversion and neural sampling. Finally, we validate the capabilities of SNNs for detecting and generating spatial patterns through experiments.
Field theory generalizations of two-body Calogero-Moser models in the form of Landau-Lifshitz equations ; We give a detailed description of the continuous version of the classical IRF-Vertex relation, where on the IRF side we deal with the Calogero-Moser-Sutherland models. Our study is based on constructing modifications of Higgs bundles of infinite rank over an elliptic curve and its degenerations. In this way, the previously predicted gauge equivalence between L-A pairs of the Landau-Lifshitz type equations and 1+1 field theory generalizations of the Calogero-Moser-Sutherland models is described. In this paper the sl(2) case is studied. Explicit changes of variables are obtained between the rational, trigonometric, and elliptic models.
Exploring Generative Adversarial Networks for Image-to-Image Translation in STEM Simulation ; The use of accurate scanning transmission electron microscopy (STEM) image simulation methods requires large computation times that can make their use infeasible for the simulation of many images. Other simulation methods based on linear imaging models, such as the convolution method, are much faster but are too inaccurate to be used in application. In this paper, we explore deep learning models that attempt to translate a STEM image produced by the convolution method into a prediction of the high-accuracy multislice image. We then compare our results to those of regression methods. We find that using the deep learning model generative adversarial network (GAN) provides us with the best results and performs at a similar accuracy level to previous regression models on the same dataset. Codes and data for this project can be found in this GitHub repository, httpsgithub.comuwcmgGANSTEMConv2MultiSlice.
Cortex: A Compiler for Recursive Deep Learning Models ; Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations, such as kernel fusion, and (ii) low-level kernel optimizations, such as those found in vendor libraries. This approach often leaves significant performance on the table, especially for the case of recursive deep learning models. In this paper, we present Cortex, a compiler-based approach to generate highly efficient code for recursive models for low-latency inference. Our compiler approach and low reliance on vendor libraries enable us to perform end-to-end optimizations, leading to up to 14X lower inference latencies over past work, across different backends.
Extensions of the noncommutative Standard Model and the weak order-one condition ; In the derivation of the Standard Model from the axioms of noncommutative geometry, the scalar sector is given by a finite Dirac operator that has to satisfy the so-called first-order condition. However, the general solution to this constraint still has unphysical terms, which must be fine-tuned to zero. Moreover, the first-order condition generally does not survive in extensions to models with gauge groups larger than U(1) x SU(2) x SU(3). In this paper we show that in the U(1)_{B-L} extension one can implement a weaker form of the first-order condition, which we argue is necessary in order for noncommutative gauge theory to make sense at all, and that this condition reduces the amount of fine-tuning to the off-diagonal terms in the Yukawa mass matrices for the leptons and quarks. We also show that this condition eliminates the Majorana mass terms for right-handed neutrinos when it is applied to the Pati-Salam model.
Primordial black hole dark matter in dilaton-extended two-field Starobinsky inflation ; We investigate the production of primordial black holes and their contribution to the presently observed dark matter in a dilaton two-field extension of Starobinsky's quadratic f(R) model of inflation. The model features a multi-field amplification mechanism which leads to the generation of a sharp peak in the inflationary power spectrum at small wavelengths, responsible for the production of primordial black holes. This mechanism is significantly different from single-field models and requires a stochastic treatment during an intermediate phase of the inflationary dynamics. We find that the model leads to a successful phase of effective single-field Starobinsky inflation for wavelengths probed by the cosmic microwave background radiation, and explains the observed cold dark matter content in the Universe by the formation of primordial black holes.
Exploring the Value of Personalized Word Embeddings ; In this paper, we introduce personalized word embeddings, and examine their value for language modeling. We compare the performance of our proposed prediction model when using personalized versus generic word representations, and study how these representations can be leveraged for improved performance. We provide insight into what types of words can be more accurately predicted when building personalized models. Our results show that a subset of words belonging to specific psycholinguistic categories tend to vary more in their representations across users, and that combining generic and personalized word embeddings yields the best performance, with a 4.7% relative reduction in perplexity. Additionally, we show that a language model using personalized word embeddings can be effectively used for authorship attribution.
Grasping with Chopsticks: Combating Covariate Shift in Model-free Imitation Learning for Fine Manipulation ; Billions of people use chopsticks, a simple yet versatile tool, for fine manipulation of everyday objects. The small, curved, and slippery tips of chopsticks pose a challenge for picking up small objects, making them a suitably complex test case. This paper leverages human demonstrations to develop an autonomous chopstick-equipped robotic manipulator. Due to the lack of accurate models for fine manipulation, we explore model-free imitation learning, which traditionally suffers from the covariate shift phenomenon that causes poor generalization. We propose two approaches to reduce covariate shift, neither of which requires access to an interactive expert or a model, unlike previous approaches. First, we alleviate single-step prediction errors by applying an invariant operator to increase the data support at critical steps for grasping. Second, we generate synthetic corrective labels by adding bounded noise and combining parametric and non-parametric methods to prevent error accumulation. We demonstrate our methods on a real chopstick-equipped robot that we built, and observe the agent's success rate increase from 37.3% to 80%, which is comparable to the human expert performance of 82.6%.
Dynamics of nonminimally coupled scalar field models with generic potentials in FLRW background ; We study the phase space of a non-minimally coupled scalar field model with different potentials, namely KKLT, Higgs, inverse and inverse square. Our investigation brings out new asymptotic regimes and obtains stable de Sitter solutions. In the case of KKLT, we do not find a stable de Sitter solution, whereas the Higgs model satisfies the de Sitter condition but does not provide a stable de Sitter solution in the usual sense, as one of the eigenvalues is zero. We obtain a vanishing time derivative of the Hubble parameter (Ḣ → 0), equation of state w_φ ≃ −1, scalar field φ → constant, and a positive effective gravitational constant (G_eff > 0), which were missed in our earlier work. Therefore, in the case of F(φ)R coupling with F(φ) = 1 + ξφ² and the models of inverse and inverse square potentials, a true stable de Sitter solution is trivially satisfied.
Subdiffusive-Brownian crossover in membrane proteins: a Generalized Langevin Equation-based approach ; In this paper, we propose a Generalized Langevin Equation (GLE) based model to describe the lateral diffusion of a protein in a lipid bilayer. The memory kernel is represented in terms of a viscous (instantaneous) and an elastic (non-instantaneous) component, modeled respectively through a Dirac delta function and a three-parameter Mittag-Leffler type function. By imposing a specific relationship between the parameters of the three-parameter Mittag-Leffler function, the different dynamical regimes, namely ballistic, subdiffusive and Brownian, as well as the crossovers from one regime to another, are retrieved. Within this approach, the transition time from the ballistic to the subdiffusive regime and the distribution of relaxation times underlying the transition from the subdiffusive to the Brownian regime are given. The reliability of the model is tested by comparing the Mean Squared Displacement (MSD) derived in the framework of this model with the MSD of a protein diffusing in a membrane calculated through molecular dynamics (MD) simulations.
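The elastic component of the memory kernel above rests on the three-parameter (Prabhakar) Mittag-Leffler function. As an illustration only (not code from the paper), a minimal evaluation by direct series summation can be sketched; it reduces to the exponential for α = β = γ = 1 and to cos(√·) for α = 2, β = γ = 1:

```python
import math

def ml3(z, alpha, beta, gamma=1.0, kmax=80):
    """Three-parameter (Prabhakar) Mittag-Leffler function
    E_{alpha,beta}^{gamma}(z) = sum_{k>=0} (gamma)_k z^k / (k! Gamma(alpha*k + beta)),
    where (gamma)_k is the Pochhammer symbol. Direct series summation,
    adequate for moderate |z| and alpha*kmax + beta below ~170
    (the overflow threshold of math.gamma)."""
    s, poch, fact, zk = 0.0, 1.0, 1.0, 1.0
    for k in range(kmax):
        s += poch * zk / (fact * math.gamma(alpha * k + beta))
        poch *= gamma + k   # (gamma)_k -> (gamma)_{k+1}
        fact *= k + 1       # k!       -> (k+1)!
        zk *= z             # z^k      -> z^{k+1}
    return s
```

For production use, dedicated algorithms (e.g. contour-integral evaluations) are preferable; the series is only a sanity-check sketch of the special function the kernel relies on.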
Matrix Moments in a Real, Doubly Correlated Algebraic Generalization of the Wishart Model ; The Wishart model of random covariance or correlation matrices continues to find ever more applications as the wealth of data on complex systems of all types grows. The heavy tails often encountered prompt generalizations of the Wishart model involving algebraic distributions instead of a Gaussian. The mathematical properties pose new challenges, particularly for the doubly correlated versions. Here we investigate such a doubly correlated algebraic model for real covariance or correlation matrices. We focus on the matrix moments and explicitly calculate the first and the second one; the computation of the latter is non-trivial. We solve the problem by relating it to the Aomoto integral and by extending the recursive technique to calculate Ingham-Siegel integrals. We compare our results with the Gaussian case.
Generalized Posteriors in Approximate Bayesian Computation ; Complex simulators have become a ubiquitous tool in many scientific disciplines, providing high-fidelity, implicit probabilistic models of natural and social phenomena. Unfortunately, they typically lack the tractability required for conventional statistical analysis. Approximate Bayesian computation (ABC) has emerged as a key method in simulation-based inference, wherein the true model likelihood and posterior are approximated using samples from the simulator. In this paper, we draw connections between ABC and generalized Bayesian inference (GBI). First, we reinterpret the accept-reject step in ABC as an implicitly defined error model. We then argue that these implicit error models will invariably be misspecified. While ABC posteriors are often treated as a necessary evil for approximating the standard Bayesian posterior, this allows us to reinterpret ABC as a potential robustification strategy. This leads us to suggest the use of GBI within ABC, a use case we explore empirically.
An application of Zero-One Inflated Beta regression models for predicting health insurance reimbursement ; In actuarial practice the dependency between contract limitations (deductibles, copayments) and health care expenditures is measured by the application of the Monte Carlo simulation technique. We propose, for the same goal, an alternative approach based on Generalized Additive Models for Location, Scale and Shape (GAMLSS). We focus on the estimate of the ratio between the one-year reimbursement amount (after the effect of limitations) and the one-year expenditure (before the effect of limitations). We suggest a regression model to investigate the relation between this response variable and a set of covariates, such as limitations and other rating factors related to health risk. In this way a dependency structure between reimbursement and limitations is provided. The density function of the ratio is a mixture distribution: the variable has probability mass at 0 and 1, in addition to a continuous probability density within (0, 1). This random variable does not belong to the exponential family, so an ordinary Generalized Linear Model is not suitable. GAMLSS introduces a probability structure compliant with the density of the response variable; in particular, a zero-one inflated beta density is assumed. The latter is a mixture of a Bernoulli distribution and a Beta distribution.
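The zero-one inflated beta density described above can be sketched directly. The following is an illustrative implementation (parameter names are ours, not the paper's): point masses p0 and p1 at 0 and 1, and a Beta(a, b) component carrying the remaining mass 1 − p0 − p1 on (0, 1):

```python
import numpy as np
from scipy.stats import beta

def zoib_density(y, p0, p1, a, b):
    """Zero-one inflated beta density: point masses p0 at 0 and p1 at 1,
    plus a Beta(a, b) density scaled by (1 - p0 - p1) on (0, 1)."""
    y = np.asarray(y, dtype=float)
    return np.where(y == 0.0, p0,
           np.where(y == 1.0, p1,
                    (1.0 - p0 - p1) * beta.pdf(y, a, b)))

def zoib_sample(n, p0, p1, a, b, rng=None):
    """Draw n samples: first choose the component (0, 1, or continuous),
    then draw from Beta(a, b) for the continuous part."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    y = beta.rvs(a, b, size=n, random_state=rng)
    y[u < p0] = 0.0
    y[(u >= p0) & (u < p0 + p1)] = 1.0
    return y
```

In a GAMLSS fit, each of p0, p1, a and b would in turn be linked to the covariates (limitations and rating factors); the sketch only shows the response distribution itself.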
Disorder solutions for the partition functions of the two-dimensional Ising-like models ; For generalized Ising models with all possible interactions within a face of the square lattice, formulas for the partition function and the free energy per lattice site in the thermodynamic limit were derived on a certain (in the general case, 8-dimensional) subset of exact disorder solutions of the 10-dimensional set of the Hamiltonian's parameters. When some of the parameters are set to zero, disorder solutions were obtained as a consequence for the models with nearest-neighbor, next-nearest-neighbor and four-spin interactions, with and without an external field, and for triangular and checkerboard-triangular Ising models with triple interactions in an external magnetic field.
Does BERT Understand Sentiment? Leveraging Comparisons Between Contextual and Non-Contextual Embeddings to Improve Aspect-Based Sentiment Models ; When performing Polarity Detection for different words in a sentence, we need to look at the surrounding words to understand the sentiment. Massively pre-trained language models like BERT can encode not just the words in a document but also the context around them. This raises the questions: does a pre-trained language model also automatically encode sentiment information about each word, and can it be used to infer polarity towards different aspects? In this work we answer these questions by showing that a comparison between the contextual embedding from BERT and a generic word embedding can be trained to infer sentiment. We also show that if we fine-tune a subset of the weights of the model built on this comparison, it achieves state-of-the-art results for Polarity Detection on Aspect-Based Sentiment Classification datasets.
Distributed Additive Encryption and Quantization for Privacy Preserving Federated Deep Learning ; Homomorphic encryption is a very useful gradient protection technique used in privacy-preserving federated learning. However, existing encrypted federated learning systems need a trusted third party to generate and distribute key pairs to connected participants, making them unsuited for federated learning and vulnerable to security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. In order to mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning, where the key pairs are collaboratively generated without the help of a third party. By quantization of the model parameters on the clients and an approximated aggregation on the server, the proposed method avoids encryption and decryption of the entire model. In addition, a threshold-based secret sharing technique is designed so that no one can hold the global private key for decryption, while aggregated ciphertexts can be successfully decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces the communication costs and computational complexity compared to existing encrypted federated learning, without compromising the performance and security.
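The additive structure that makes such aggregation possible can be illustrated with a stripped-down sketch. This is illustrative only: the paper's protocol uses homomorphic encryption and a t-of-n threshold scheme, whereas this toy uses plain n-of-n additive secret sharing over a prime field, combined with fixed-point quantization of the updates; all names and constants are ours:

```python
import numpy as np

P = 2**31 - 1   # prime modulus for share arithmetic (illustrative choice)
SCALE = 2**16   # fixed-point quantization scale (illustrative choice)

def quantize(w):
    # Fixed-point encoding of float weights as integers mod P.
    return np.round(np.asarray(w) * SCALE).astype(np.int64) % P

def dequantize(q):
    # Invert the encoding; values above P/2 represent negatives.
    q = np.where(q > P // 2, q - P, q)
    return q / SCALE

def make_shares(q, n_shares, rng):
    # Additive secret sharing: n_shares random-looking vectors summing to q mod P.
    shares = [rng.integers(0, P, size=q.shape, dtype=np.int64)
              for _ in range(n_shares - 1)]
    shares.append((q - sum(shares)) % P)
    return shares

def aggregate(all_client_shares):
    # Server side: summing every share of every client yields the sum of
    # the quantized updates; no individual update is revealed.
    total = np.zeros_like(all_client_shares[0][0])
    for shares in all_client_shares:
        for s in shares:
            total = (total + s) % P
    return total
```

Each individual share is uniformly random, so only the aggregate is recoverable; the threshold variant in the paper additionally tolerates dropped-out clients.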
Handling Initial Conditions in Vector Fitting for Real-Time Modeling of Power System Dynamics ; This paper develops a predictive modeling algorithm, denoted Real-Time Vector Fitting (RTVF), which is capable of approximating the real-time linearized dynamics of multi-input multi-output (MIMO) dynamical systems via rational transfer function matrices. Based on a generalization of the well-known Time-Domain Vector Fitting (TDVF) algorithm, RTVF is suitable for online modeling of dynamical systems which experience both initial-state decay contributions in the measured output signals and concurrently active input signals. These adaptations were specifically contrived to meet needs currently present in the electrical power systems community, where real-time modeling of low-frequency power system dynamics is becoming an increasingly coveted tool by power system operators. After introducing and validating the RTVF scheme on synthetic test cases, this paper presents a series of numerical tests on high-order closed-loop generator systems in the IEEE 39-bus test system.
Simultaneous inference for time-varying models ; A general class of time-varying regression models is considered in this paper. We estimate the regression coefficients by using local linear M-estimation. For these estimators, weak Bahadur representations are obtained and are used to construct simultaneous confidence bands. For practical implementation, we propose a bootstrap-based method to circumvent the slow logarithmic convergence of the theoretical simultaneous bands. Our results substantially generalize and unify the treatments for several time-varying regression and autoregression models. The performance for ARCH and GARCH models is studied in simulations, and a few real-life applications of our study are presented through the analysis of some popular financial datasets.
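The local linear estimation step can be sketched as follows, as an illustration only: this uses squared loss (the simplest M-estimation criterion, whereas the paper allows general losses), and all names are ours. At each time point t0, the coefficient vector β(t0) is the first block of a kernel-weighted least-squares fit on a locally linear design:

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel, a common choice for local smoothing.
    return 0.75 * np.clip(1.0 - u**2, 0.0, None)

def local_linear_fit(t_grid, ts, X, y, h):
    """Local linear estimation of time-varying coefficients beta(t):
    at each t0, solve a kernel-weighted least-squares problem on the
    design [X, X*(t - t0)]; the first block of the solution is beta(t0)."""
    n, p = X.shape
    betas = np.empty((len(t_grid), p))
    for i, t0 in enumerate(t_grid):
        w = epanechnikov((ts - t0) / h)                # kernel weights
        Z = np.hstack([X, X * (ts - t0)[:, None]])     # local linear design
        A = Z.T @ (w[:, None] * Z)
        b = Z.T @ (w * y)
        betas[i] = np.linalg.lstsq(A, b, rcond=None)[0][:p]
    return betas
```

The simultaneous confidence bands in the paper are then built around such pointwise estimates, with the bootstrap replacing the slowly converging asymptotic calibration.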
Common origin of radiative neutrino mass, dark matter and leptogenesis in the scotogenic Georgi-Machacek model ; We explore the phenomenology of the Georgi-Machacek model extended with two Higgs doublets and vector fermion doublets, invariant under SU(2)_L × U(1)_Y × Z_4 × Z_2. The Z_4 symmetry is broken spontaneously, while the imposed Z_2 symmetry forbids the triplet fields from acquiring any vacuum expectation value, leading to an inert dark sector that provides a viable candidate for dark matter and generates neutrino mass radiatively. Another interesting feature of the model is leptogenesis arising from the decay of vector-like fermions. A detailed study of the model is pursued in search of the available parameter space consistent with theoretical and experimental constraints on dark matter, neutrino physics, flavor physics, and the matter-antimatter asymmetry of the Universe.
Multi-Modal Detection of Alzheimer's Disease from Speech and Text ; Reliable detection of the prodromal stages of Alzheimer's disease (AD) remains difficult even today because, unlike other neurocognitive impairments, there is no definitive diagnosis of AD in vivo. In this context, existing research has shown that patients often develop language impairment even in mild AD conditions. We propose a multi-modal deep learning method that utilizes speech and the corresponding transcript simultaneously to detect AD. For audio signals, the proposed audio-based network, a convolutional neural network (CNN) based model, predicts the diagnosis for multiple speech segments, which are combined for the final prediction. Similarly, we use contextual embeddings extracted from BERT concatenated with a CNN-generated embedding for classifying the transcript. The individual predictions of the two models are then combined to make the final classification. We also perform experiments to analyze the model performance when Automated Speech Recognition (ASR) system-generated transcripts are used instead of manual transcriptions in the text-based model. The proposed method achieves 85.3% 10-fold cross-validation accuracy when trained and evaluated on the DementiaBank Pitt corpus.
Meshless physics-informed deep learning method for three-dimensional solid mechanics ; Deep learning and the collocation method are merged and used to solve partial differential equations describing structures' deformation. We have considered different types of materials: linear elasticity, hyperelasticity (neo-Hookean) with large deformation, and von Mises plasticity with isotropic and kinematic hardening. The performance of this deep collocation method (DCM) depends on the architecture of the neural network and the corresponding hyperparameters. The presented DCM is mesh-free and avoids any spatial discretization, which is usually needed for the finite element method (FEM). We show that the DCM can capture the response qualitatively and quantitatively, without the need for any data generation using other numerical methods such as the FEM. Data generation is usually the main bottleneck in most data-driven models. The deep learning model is trained to learn the model's parameters, yielding accurate approximate solutions. Once the model is properly trained, solutions can be obtained almost instantly at any point in the domain, given its spatial coordinates. Therefore, the deep collocation method is potentially a promising standalone technique to solve partial differential equations involved in the deformation of materials and structural systems, as well as other physical phenomena.
Conservative semi-Lagrangian schemes for a general consistent BGK model for inert gas mixtures ; In this work, we propose a class of high-order semi-Lagrangian schemes for a general consistent BGK model for inert gas mixtures. The proposed scheme fulfills not only the indifferentiability principle but also the asymptotic preserving property, which allows us to capture the behavior of hydrodynamic limit models. We consider two hydrodynamic closures which can be derived from the BGK model at leading order: classical Euler equations for number densities, global velocity and temperature, and a multi-velocity, multi-temperature Euler system. Numerical simulations are performed to demonstrate the indifferentiability principle and the asymptotic preserving property of the proposed conservative semi-Lagrangian scheme in the Euler limits.
Dataset of Random Relaxations for Crystal Structure Search of the Li-Si System ; Crystal structure search is a long-standing challenge in materials design. We present a dataset of more than 100,000 structural relaxations of potential battery anode materials from randomized structures, using density functional theory calculations. We illustrate the usage of the dataset by training graph neural networks to predict structural relaxations from randomly generated structures. Our models directly predict stresses in addition to forces, which allows them to accurately simulate relaxations of both ionic positions and lattice vectors. We show that models trained on molecular dynamics simulations fail to simulate relaxations from random structures, while training on our data leads to up to two orders of magnitude decrease in error for the same task. Our model is able to find an experimentally verified structure of a stoichiometry held out from training. We find that randomly perturbing atomic positions during training improves both the accuracy and out-of-domain generalization of the models.
Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction ; We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face. Digitally modeling and reconstructing a talking human is a key building block for a variety of applications. Especially for telepresence applications in AR or VR, a faithful reproduction of the appearance, including novel viewpoints or head poses, is required. In contrast to state-of-the-art approaches that model the geometry and material properties explicitly, or are purely image-based, we introduce an implicit representation of the head based on scene representation networks. To handle the dynamics of the face, we combine our scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions. We use volumetric rendering to generate images from this hybrid representation and demonstrate that such a dynamic neural scene representation can be learned from monocular input data only, without the need for a specialized capture setup. In our experiments, we show that this learned volumetric representation allows for photo-realistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods.
Understanding Interpretability by generalized distillation in Supervised Classification ; The ability to interpret decisions taken by Machine Learning (ML) models is fundamental to encourage trust and reliability in different practical applications. Recent interpretation strategies focus on human understanding of the underlying decision mechanisms of the complex ML models. However, these strategies are restricted by the subjective biases of humans. To dissociate from such human biases, we propose an interpretation-by-distillation formulation that is defined relative to other ML models. We generalize the distillation technique for quantifying interpretability, using an information-theoretic perspective, removing the role of ground truth from the definition of interpretability. Our work defines the entropy of supervised classification models, providing bounds on the entropy of Piece-Wise Linear Neural Networks (PWLNs), along with the first theoretical bounds on the interpretability of PWLNs. We evaluate our proposed framework on the MNIST, Fashion-MNIST and Stanford-40 datasets and demonstrate the applicability of the proposed theoretical framework in different supervised classification scenarios.
Accurate and Fast Federated Learning via Combinatorial Multi-Armed Bandits ; Federated learning has emerged as an innovative paradigm of collaborative machine learning. Unlike conventional machine learning, a global model is collaboratively learned while data remains distributed over a tremendous number of client devices, thus not compromising user privacy. However, several challenges still remain despite its growing popularity; above all, the global aggregation in federated learning involves the challenges of biased model averaging and a lack of prior knowledge in client sampling, which, in turn, lead to high generalization error and a slow convergence rate, respectively. In this work, we propose a novel algorithm called FedCM that addresses the two challenges by utilizing prior knowledge with multi-armed bandit-based client sampling and filtering biased models with combinatorial model averaging. Based on extensive evaluations using various algorithms and representative heterogeneous datasets, we show that FedCM significantly outperforms the state-of-the-art algorithms, by up to 37.25% and 4.17 times, respectively, in terms of generalization accuracy and convergence rate.
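The bandit-based client sampling idea can be illustrated with a minimal upper-confidence-bound (UCB) sketch. This is illustrative only: FedCM combines such sampling with combinatorial model averaging, and the class name and reward signal here (e.g. local loss reduction after participation) are our assumptions:

```python
import math
import random

class UCBClientSampler:
    """Sketch of multi-armed-bandit client sampling for federated learning:
    each client is an arm; its reward is a utility signal observed after
    participation. Each round, the top-m clients by UCB score are selected
    (a simple combinatorial UCB step)."""

    def __init__(self, n_clients, c=2.0):
        self.counts = [0] * n_clients    # times each client was selected
        self.values = [0.0] * n_clients  # running mean reward per client
        self.t = 0                       # round counter
        self.c = c                       # exploration strength

    def select(self, m):
        self.t += 1
        def score(i):
            if self.counts[i] == 0:
                return float('inf')      # explore unseen clients first
            bonus = math.sqrt(self.c * math.log(self.t) / self.counts[i])
            return self.values[i] + bonus
        return sorted(range(len(self.counts)), key=score, reverse=True)[:m]

    def update(self, client, reward):
        # Incremental mean update of the client's observed utility.
        self.counts[client] += 1
        self.values[client] += (reward - self.values[client]) / self.counts[client]
```

Over rounds, clients whose participation yields higher utility are sampled increasingly often, while the confidence bonus keeps revisiting under-explored clients.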
Bayesian model for early dose-finding in phase I trials with multiple treatment courses ; Dose-finding clinical trials in oncology aim to determine the maximum tolerated dose (MTD) of a new drug, generally defined by the proportion of patients with short-term dose-limiting toxicities (DLTs). Model-based approaches for such phase I oncology trials have been widely designed and are mostly restricted to the DLTs occurring during the first cycle of treatment, although patients continue to receive treatment for multiple cycles. We aim to estimate the probability of DLTs over sequences of treatment cycles via a Bayesian cumulative modeling approach, where the probability of DLT is modeled taking into account the cumulative effect of the administered drug and the DLT cycle of occurrence. We propose a design, called DICE (Dose-fInding CumulativE), for dose escalation and de-escalation according to previously observed toxicities, which aims at finding the MTD sequence (MTS). We performed an extensive simulation study comparing this approach to the time-to-event continual reassessment method (TITE-CRM) and to a benchmark. In general, our approach achieved a better or comparable percentage of correct MTS selection. Moreover, we investigated the DICE prediction ability.
End-to-end Handwritten Paragraph Text Recognition Using a Vertical Attention Network ; Unconstrained handwritten text recognition remains challenging for computer vision systems. Paragraph text recognition is traditionally achieved by two models: the first one for line segmentation and the second one for text line recognition. We propose a unified end-to-end model using hybrid attention to tackle this task. This model is designed to iteratively process a paragraph image line by line. It can be split into three modules. An encoder generates feature maps from the whole paragraph image. Then, an attention module recurrently generates a vertical weighted mask enabling the model to focus on the current text line features. This way, it performs a kind of implicit line segmentation. For each set of text line features, a decoder module recognizes the associated character sequence, leading to the recognition of a whole paragraph. We achieve state-of-the-art character error rates at paragraph level on three popular datasets: 1.91% for RIMES, 4.45% for IAM and 3.59% for READ 2016. Our code and trained model weights are available at https://github.com/FactoDeepLearning/VerticalAttentionOCR.
A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings ; Graph Neural Networks (GNNs) are a framework for graph representation learning, where a model learns to generate low-dimensional node embeddings that encapsulate structural and feature-related information. GNNs are usually trained in an end-to-end fashion, leading to highly specialized node embeddings. However, generating node embeddings that can be used to perform multiple tasks with performance comparable to single-task models is an open problem. We propose a novel meta-learning strategy capable of producing multi-task node embeddings. Our method avoids the difficulties arising when learning to perform multiple tasks concurrently by, instead, learning to quickly (i.e. with a few steps of gradient descent) adapt to multiple tasks individually. We show that the embeddings produced by our method can be used to perform multiple tasks with comparable or higher performance than classically trained models. Our method is model-agnostic and task-agnostic, thus applicable to a wide variety of multi-task domains.
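The "learn an initialization that adapts in a few gradient steps" idea can be sketched with a first-order meta-learning loop on a toy family of linear-regression tasks. This is a Reptile-style simplification, not the paper's GNN method, and all names are ours:

```python
import numpy as np

def task_loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def adapt(w, X, y, inner_lr=0.1, steps=5):
    # "Quick adaptation": a few gradient-descent steps from the shared init.
    w = w.copy()
    for _ in range(steps):
        w -= inner_lr * task_loss_grad(w, X, y)
    return w

def reptile(tasks, dim, meta_lr=0.5, epochs=200, rng=None):
    # First-order meta-learning (Reptile-style): repeatedly move the shared
    # initialization toward a sampled task's adapted weights, so that a few
    # inner steps suffice on any task from the family.
    rng = np.random.default_rng(rng)
    w = np.zeros(dim)
    for _ in range(epochs):
        X, y = tasks[rng.integers(len(tasks))]
        w_task = adapt(w, X, y)
        w += meta_lr * (w_task - w)
    return w
```

After meta-training, a couple of gradient steps from the learned initialization fit a new related task far better than the same number of steps from scratch, mirroring the fast per-task adaptation described above.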
Spectral Unmixing With Multinomial Mixture Kernel and Wasserstein Generative Adversarial Loss ; This study proposes a novel framework for spectral unmixing using 1D convolution kernels and spectral uncertainty. High-level representations are computed from data, and they are further modeled with the Multinomial Mixture Model to estimate fractions under severe spectral uncertainty. Furthermore, a new trainable uncertainty term based on a nonlinear neural network model is introduced in the reconstruction step. All uncertainty models are optimized by a Wasserstein Generative Adversarial Network (WGAN) to improve stability and capture uncertainty. Experiments are performed on both real and synthetic datasets. The results validate that the proposed method obtains state-of-the-art performance, especially for the real datasets, compared to the baselines. Project page: https://github.com/savasozkan/dscn.
Pseudo-likelihood-based M-estimation of random graphs with dependent edges and parameter vectors of increasing dimension ; An important question in statistical network analysis is how to estimate models of discrete and dependent network data with intractable likelihood functions, without sacrificing computational scalability and statistical guarantees. We demonstrate that scalable estimation of random graph models with dependent edges is possible, by establishing convergence rates of pseudo-likelihood-based M-estimators for discrete undirected graphical models with exponential parameterizations and parameter vectors of increasing dimension in single-observation scenarios. We highlight the impact of two complex phenomena on the convergence rate: phase transitions and model near-degeneracy. The main results have possible applications to discrete and dependent network, spatial, and temporal data. To showcase convergence rates, we introduce a novel class of generalized beta-models with dependent edges and parameter vectors of increasing dimension, which leverage additional structure in the form of overlapping subpopulations to control dependence. We establish convergence rates of pseudo-likelihood-based M-estimators for generalized beta-models in dense- and sparse-graph settings.
Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling ; In a customer service system, dialogue summarization can boost service efficiency by automatically creating summaries for long spoken dialogues in which customers and agents try to address issues about specific topics. In this work, we focus on topic-oriented dialogue summarization, which generates highly abstractive summaries that preserve the main ideas from dialogues. In spoken dialogues, abundant dialogue noise and common semantics could obscure the underlying informative content, making general topic modeling approaches difficult to apply. In addition, for customer service, role-specific information matters and is an indispensable part of a summary. To effectively perform topic modeling on dialogues and capture multi-role information, in this work we propose a novel topic-augmented two-stage dialogue summarizer (TDS) jointly with a saliency-aware neural topic model (SATM) for topic-oriented summarization of customer service dialogues. Comprehensive studies on a real-world Chinese customer service dataset demonstrated the superiority of our method against several strong baselines.
Modelling General Properties of Nouns by Selectively Averaging Contextualised Embeddings ; While the success of pre-trained language models has largely eliminated the need for high-quality static word vectors in many NLP applications, such vectors continue to play an important role in tasks where words need to be modelled in the absence of linguistic context. In this paper, we explore how the contextualised embeddings predicted by BERT can be used to produce high-quality word vectors for such domains, in particular related to knowledge base completion, where our focus is on capturing the semantic properties of nouns. We find that a simple strategy of averaging the contextualised embeddings of masked word mentions leads to vectors that outperform the static word vectors learned by BERT, as well as those from standard word embedding models, in property induction tasks. We notice in particular that masking target words is critical to achieve this strong performance, as the resulting vectors focus less on idiosyncratic properties and more on general semantic properties. Inspired by this view, we propose a filtering strategy which is aimed at removing the most idiosyncratic mention vectors, allowing us to obtain further performance gains in property induction.
Deep Bayesian Active Learning: A Brief Survey on Recent Advances ; Active learning frameworks offer efficient data annotation without remarkable accuracy degradation. In other words, active learning starts training the model with a small amount of labeled data while exploring the space of unlabeled data in order to select the most informative samples to be labeled. Generally speaking, representing uncertainty is crucial in any active learning framework; however, standard deep learning methods are not capable of either representing or manipulating model uncertainty. On the other hand, from the real-world application perspective, uncertainty representation is getting more and more attention in the machine learning community. Deep Bayesian active learning frameworks, and generally any Bayesian active learning setting, provide a practical treatment of uncertainty in the model, which allows training with small amounts of data while representing the model uncertainty for further efficient training. In this paper, we briefly survey recent advances in Bayesian active learning and, in particular, deep Bayesian active learning frameworks.
Modeling Heterogeneous Statistical Patterns in High-dimensional Data by Adversarial Distributions: An Unsupervised Generative Framework ; Since collecting labels is prohibitively expensive and time-consuming, unsupervised methods are preferred in applications such as fraud detection. Meanwhile, such applications usually require modeling the intrinsic clusters in high-dimensional data, which usually displays heterogeneous statistical patterns, as the patterns of different clusters may appear in different dimensions. Existing methods propose to model the data clusters on selected dimensions, yet globally omitting any dimension may damage the pattern of certain clusters. To address the above issues, we propose a novel unsupervised generative framework called FIRD, which utilizes adversarial distributions to fit and disentangle the heterogeneous statistical patterns. When applied to discrete spaces, FIRD effectively distinguishes synchronized fraudsters from normal users. Besides, FIRD also provides superior performance on anomaly detection datasets compared with SOTA anomaly detection methods (over 5% average AUC improvement). The significant experiment results on various datasets verify that the proposed method can better model the heterogeneous statistical patterns in high-dimensional data and benefit downstream applications.
Noisy Deductive Reasoning: How Humans Construct Math, and How Math Constructs Universes ; We present a computational model of mathematical reasoning according to which mathematics is a fundamentally stochastic process. That is, on our model, whether or not a given formula is deemed a theorem in some axiomatic system is not a matter of certainty, but is instead governed by a probability distribution. We then show that this framework gives a compelling account of several aspects of mathematical practice. These include: (1) the way in which mathematicians generate research programs, (2) the applicability of Bayesian models of mathematical heuristics, (3) the role of abductive reasoning in mathematics, (4) the way in which multiple proofs of a proposition can strengthen our degree of belief in that proposition, and (5) the nature of the hypothesis that there are multiple formal systems that are isomorphic to physically possible universes. Thus, by embracing a model of mathematics as not perfectly predictable, we generate a new and fruitful perspective on the epistemology and practice of mathematics.
Geometric Surface Image Prediction for Image Recognition Enhancement ; This work presents a method to predict a geometric surface image from a photograph to assist in image recognition. To recognize objects, several images from different conditions are required for training a model or fine-tuning a pre-trained model. In this work, a geometric surface image is introduced as a better representation than its color image counterpart for overcoming varying lighting conditions. The surface image is predicted from a color image. To do so, geometric surface images together with their color photographs are first used to train a Generative Adversarial Network (GAN) model. The trained generator model is then used to predict the geometric surface image from the input color image. The evaluation on a case study of amulet recognition shows that the predicted geometric surface images contain less ambiguity than their color image counterparts under different lighting conditions and can be used effectively to assist in image recognition tasks.
Copula-based synthetic data augmentation for machine-learning emulators ; Can we improve machine-learning (ML) emulators with synthetic data? If data are scarce or expensive to source and a physical model is available, statistically generated data may be useful for augmenting training sets cheaply. Here we explore the use of copula-based models for generating synthetically augmented datasets in weather and climate by testing the method on a toy physical model of downwelling longwave radiation and a corresponding neural network emulator. Results show that for copula-augmented datasets, predictions are improved by up to 62% for the mean absolute error (from 1.17 to 0.44 W m^-2).
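The copula idea summarized above can be sketched with a Gaussian copula: model each variable's marginal empirically, capture the dependence structure through the correlation of Gaussianized ranks, then sample new rows. This is a minimal illustration of the general technique, not the paper's code; the toy dataset and function names are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy "physical" dataset: two correlated variables (purely illustrative).
t = rng.normal(280.0, 10.0, 500)
h = 0.5 * (t - 280.0) + rng.normal(0.0, 2.0, 500)
data = np.column_stack([t, h])

def gaussian_copula_augment(data, n_samples, rng):
    """Sample synthetic rows from a Gaussian copula fitted to `data`.

    Marginals are modeled empirically (rank-based); dependence is
    captured by the correlation of the Gaussianized ranks.
    """
    n, d = data.shape
    # Transform each column to approximately standard normal via ranks.
    ranks = stats.rankdata(data, axis=0) / (n + 1)
    z = stats.norm.ppf(ranks)
    corr = np.corrcoef(z, rowvar=False)
    # Sample from the fitted Gaussian copula.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u_new = stats.norm.cdf(z_new)
    # Map uniforms back through the empirical marginal quantiles.
    return np.column_stack(
        [np.quantile(data[:, j], u_new[:, j]) for j in range(d)]
    )

synthetic = gaussian_copula_augment(data, 1000, rng)
augmented = np.vstack([data, synthetic])  # enlarged emulator training set
```

The synthetic rows preserve both the marginal ranges and the cross-variable correlation of the original sample, which is what makes them usable as cheap extra training data.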
Barking up the right tree: an approach to search over molecule synthesis DAGs ; When designing new molecules with particular properties, it is not only important what to make but crucially how to make it. These instructions form a synthesis directed acyclic graph (DAG), describing how a large vocabulary of simple building blocks can be recursively combined through chemical reactions to create more complicated molecules of interest. In contrast, many current deep generative models for molecules ignore synthesizability. We therefore propose a deep generative model that better represents the real-world process by directly outputting molecule synthesis DAGs. We argue that this provides sensible inductive biases, ensuring that our model searches over the same chemical space that chemists would also have access to, as well as interpretability. We show that our approach is able to model chemical space well, producing a wide range of diverse molecules, and allows for unconstrained optimization of an inherently constrained problem: maximizing certain chemical properties such that discovered molecules are synthesizable.
Multi-grained Trajectory Graph Convolutional Networks for Habit-unrelated Human Motion Prediction ; Human motion prediction is an essential part of human-robot collaboration. Unlike most existing methods, which mainly focus on improving the effectiveness of spatiotemporal modeling for accurate prediction, we take both effectiveness and efficiency into consideration, aiming at prediction quality, computational efficiency, and a lightweight model. A lightweight framework based on multi-grained trajectory graph convolutional networks is proposed for habit-unrelated human motion prediction. Specifically, we represent human motion as multi-grained trajectories, including joint trajectories and sub-joint trajectories. Based on this representation, multi-grained trajectory graph convolutional networks are proposed to explore the spatiotemporal dependencies at multiple granularities. Moreover, considering the right-handedness habit of the vast majority of people, a new motion generation method is proposed to generate motion with left-handedness, to better model motion with less bias toward human habit. Experimental results on challenging datasets, including Human3.6M and CMU Mocap, show that the proposed model outperforms the state-of-the-art with fewer than 0.12 times the parameters, which demonstrates the effectiveness and efficiency of our proposed method.
Private-Shared Disentangled Multimodal VAE for Learning of Hybrid Latent Representations ; Multimodal generative models represent an important family of deep models, whose goal is to facilitate representation learning on data with multiple views or modalities. However, current deep multimodal models focus on the inference of shared representations, while neglecting the important private aspects of data within individual modalities. In this paper, we introduce a disentangled multimodal variational autoencoder (DMVAE) that utilizes a disentangled VAE strategy to separate the private and shared latent spaces of multiple modalities. We specifically consider the instance where the latent factors may be of both continuous and discrete nature, leading to the family of general hybrid DMVAE models. We demonstrate the utility of DMVAE on a semi-supervised learning task, where one of the modalities contains partial data labels, both relevant and irrelevant to the other modality. Our experiments on several benchmarks indicate the importance of the private-shared disentanglement as well as the hybrid latent representation.
RBM-Flow and D-Flow: Invertible Flows with Discrete Energy Base Spaces ; Efficient sampling of complex data distributions can be achieved using trained invertible flows (IF), where the model distribution is generated by pushing a simple base distribution through multiple non-linear bijective transformations. However, the iterative nature of the transformations in IFs can limit the approximation to the target distribution. In this paper we seek to mitigate this by implementing RBM-Flow, an IF model whose base distribution is a Restricted Boltzmann Machine (RBM) with a continuous smoothing applied. We show that by using RBM-Flow we are able to improve the quality of samples generated, quantified by the Inception Score (IS) and Fréchet Inception Distance (FID), over baseline models with the same IF transformations but with less expressive base distributions. Furthermore, we also obtain D-Flow, an IF model with uncorrelated discrete latent variables. We show that D-Flow achieves similar likelihoods and FID/IS scores to those of a typical IF with Gaussian base variables, but with the additional benefit that global features are meaningfully encoded as discrete labels in the latent space.
"I like fish, especially dolphins": Addressing Contradictions in Dialogue Modeling ; To quantify how well natural language understanding models can capture consistency in a general conversation, we introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues. We then compare a structured utterance-based approach of using pre-trained Transformer models for contradiction detection with the typical unstructured approach. Results reveal that (i) our newly collected dataset is notably more effective at providing supervision for the dialogue contradiction detection task than existing NLI data, including those aimed to cover the dialogue domain; (ii) the structured utterance-based approach is more robust and transferable on both analysis and out-of-distribution dialogues than its unstructured counterpart. We also show that our best contradiction detection model correlates well with human judgments and further provide evidence for its usage in both automatically evaluating and improving the consistency of state-of-the-art generative chatbots.
A path integral formulation for particle detectors: the Unruh-DeWitt model as a line defect ; Particle detectors are a ubiquitous tool for probing quantum fields in the context of relativistic quantum information (RQI). We formulate the Unruh-DeWitt (UDW) particle detector model in terms of the path integral formalism. The formulation is able to recover the results of the model in general globally hyperbolic spacetimes and for arbitrary detector trajectories. Integrating out the detector's degrees of freedom yields a line defect that allows one to express the transition probability in terms of Feynman diagrams. Inspired by the light-matter interaction, we propose a gauge-invariant detector model whose associated line defect is related to the derivative of a Wilson line. This is another instance where non-local operators in gauge theories can be interpreted as physical probes for quantum fields.
Accurate Word Representations with Universal Visual Guidance ; Word representation is a fundamental component in neural language understanding models. Recently, pre-trained language models (PrLMs) offer a new, effective method of contextualized word representation by leveraging the sequence-level context for modeling. Although PrLMs generally give more accurate contextualized word representations than non-contextualized models do, they are still limited to textual context alone, without diverse hints for word representation from multimodality. This paper thus proposes a visual representation method to explicitly enhance conventional word embedding with multiple-aspect senses from visual guidance. In detail, we build a small-scale word-image dictionary from a multimodal seed dataset where each word corresponds to diverse related images. The texts and paired images are encoded in parallel, followed by an attention layer to integrate the multimodal representations. We show that the method substantially improves the accuracy of disambiguation. Experiments on 12 natural language understanding and machine translation tasks further verify the effectiveness and the generalization capability of the proposed approach.
The Genuine Type-V Seesaw Model: Phenomenological Introduction ; We study a model which generates Majorana neutrino masses at tree level via a low-energy effective operator of mass dimension 9. Introduction of such a higher-dimensional operator brings the lepton-number-violating mass scale down to the TeV range, making the model potentially testable at present or near-future colliders. This model possesses several new SU(2)_L fermionic multiplets, in particular three generations of triplets, quadruplets and quintuplets, and thus a rich phenomenology at the LHC. As lepton flavour violation arises very naturally in such a setup, we put constraints on the Yukawa couplings and heavy fermion masses from the current experimental bounds on lepton flavour violating processes. We also obtain 95% C.L. lower bounds on the masses of the triplets, quadruplets and quintuplets using a recent CMS search for multilepton final states with 137 fb^-1 of integrated luminosity data at 13 TeV center-of-mass energy. The possibility that the heavy fermions could be long-lived, leaving disappearing charged-track signatures or displaced vertices at future colliders like the LHeC, FCC-he, MATHUSLA, etc., is also discussed.
Hypocoercivity and reaction-diffusion limit for a nonlinear generation-recombination model ; A reaction-kinetic model for a two-species gas mixture undergoing pair generation and recombination reactions is considered on a flat torus. For dominant scattering with a non-moving constant-temperature background, the macroscopic limit to a reaction-diffusion system is carried out. Exponential decay to equilibrium is proven for the kinetic model by hypocoercivity estimates. This seems to be the first rigorous derivation of a nonlinear reaction-diffusion system from a kinetic model, as well as the first hypocoercivity result for a nonlinear kinetic problem without smallness assumptions. The analysis profits from uniform bounds of the solution in terms of the equilibrium velocity distribution.
Real-Time Optimized N-gram for Mobile Devices ; With the increasing number of mobile devices, there has been continuous research on generating optimized Language Models (LMs) for soft keyboards. In spite of advances in this domain, building a single LM for low-end feature phones as well as high-end smartphones is still a pressing need. Hence, we propose a novel technique, Optimized N-gram (Op-Ngram), an end-to-end N-gram pipeline that utilises mobile resources efficiently for faster Word Completion (WC) and Next Word Prediction (NWP). Op-Ngram applies Stupid Backoff and pruning strategies to generate a lightweight model. The LM loading time on mobile is linear with respect to model size. We observed that Op-Ngram gives a 37% improvement in LM-ROM size, 76% in LM-RAM size, 88% in loading time and 89% in average suggestion time as compared to the SORTED array variant of BerkeleyLM. Moreover, our method shows significant performance improvement over KenLM as well.
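Stupid Backoff, the scoring scheme named above, replaces a smoothed probability with a relative frequency that backs off to shorter contexts with a fixed penalty (commonly 0.4). A minimal sketch of the general technique follows; it is not the Op-Ngram implementation, and the corpus and function names are illustrative.

```python
from collections import defaultdict

def train_counts(corpus, max_order=3):
    """Count n-grams up to `max_order` from tokenized sentences."""
    counts = defaultdict(int)
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        for n in range(1, max_order + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    # Total unigram count, used as the denominator for empty contexts.
    counts[()] = sum(v for k, v in counts.items() if len(k) == 1)
    return counts

def stupid_backoff(counts, context, word, alpha=0.4):
    """Score = count(context + word) / count(context), backing off to a
    shorter context with penalty `alpha` when the n-gram is unseen."""
    ngram = tuple(context) + (word,)
    if counts.get(ngram, 0) > 0:
        return counts[ngram] / counts[tuple(context)]
    if not context:
        return 0.0  # word never seen at all
    return alpha * stupid_backoff(counts, context[1:], word, alpha)

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
counts = train_counts(corpus)
score = stupid_backoff(counts, ("the",), "cat")  # 1/2 = 0.5
```

Because the scores are relative frequencies rather than normalized probabilities, the model needs no discounting statistics, which is what keeps the stored tables small enough for on-device use.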
Model for creep failure with healing ; To understand the general properties of creep failure with healing effects, we study a mean-field fiber bundle model with probabilistic rupture and rejoining processes. The dynamics of the model are determined by two factors: bond breaking and the formation of new bonds. Steady states are realized due to the balance between breaking and healing beyond a critical healing factor, below which the bundle breaks completely. The correlation in time of the fluctuating strain generated in the model at the steady state leads to a characteristic time that diverges in a scale-free manner as we approach the critical healing factor. Transient behaviors in the strain rate also follow a power law with a non-universal exponent.
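The breaking-healing balance described above can be illustrated with a toy mean-field simulation (not the paper's model, and the rates are invented): each intact bond breaks with a fixed probability per step, each broken bond reforms with another, and the intact fraction settles near the ratio set by the two rates.

```python
import numpy as np

def simulate_bonds(n_bonds=10_000, p_break=0.02, p_heal=0.08,
                   steps=300, seed=0):
    """Toy fiber bundle with healing: each intact bond breaks with
    probability p_break per step; each broken bond reforms with
    probability p_heal. Returns the final fraction of intact bonds,
    which should settle near p_heal / (p_break + p_heal)."""
    rng = np.random.default_rng(seed)
    intact = n_bonds
    for _ in range(steps):
        broken_now = rng.binomial(intact, p_break)
        healed_now = rng.binomial(n_bonds - intact, p_heal)
        intact += healed_now - broken_now
    return intact / n_bonds

fraction = simulate_bonds()  # expected near 0.08 / 0.10 = 0.8
```

With the illustrative rates chosen here the balance point is 0.8; lowering p_heal toward zero drives the intact fraction to zero, mimicking complete failure below a critical healing factor.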
A Novel Prediction Approach for Exploring PM2.5 Spatiotemporal Propagation Based on Convolutional Recursive Neural Networks ; The spread of PM2.5 pollutants that endanger health is difficult to predict because it involves many atmospheric variables. These micron-scale particles can spread rapidly from their source to residential areas, increasing the risk of respiratory disease if exposure persists for long periods. A prediction system for PM2.5 propagation provides more detailed and accurate information as an early warning system to reduce health impacts on the community. Following the idea of transformative computing, the approach we propose in this paper allows computation on datasets obtained from massive-scale PM2.5 sensor nodes via a wireless sensor network. In this scheme, the deep learning model is implemented on the server nodes to extract spatiotemporal features from these datasets. This research was conducted using a dataset from air quality monitoring systems in Taiwan. This study presents a new model based on a convolutional recursive neural network to generate the prediction map. In general, the model is able to provide accurate predictive results by considering the bonds among measurement nodes both spatially and temporally. Therefore, the propagation of PM2.5 particulate pollutants can be precisely monitored using the model we propose in this paper.
SU(N) to SU(2) symmetry breaking in quantum antiferromagnets ; We study an SU(2)-symmetric spin-3/2 system on a bipartite lattice close to the antiferromagnetic SU(4)-symmetric point, which can be described by the CP^3 model with a perturbation breaking the symmetry from SU(4) down to SU(2) and favoring Néel ordering. We show that the effective theory of the perturbed model is not the usual O(3) nonlinear sigma model (NLSM), but rather the O(3)×O(2) NLSM. We show that in the presence of the perturbation, the topological charge q of the CP^3 field is connected to the O(3)-NLSM-type topological charge Q of the spin texture, defined in the usual way via the unit Néel vector, by the relation q = 3Q; thus, under the influence of the perturbation, unit-charge skyrmions of the CP^3 model bind into triplets. We also show that in the general spin-S case, symmetry breaking from SU(2S+1) to SU(2) results in the general relation 2S Q_{O(3)} = q_{CP^{2S}} between the CP^{2S} and O(3) charges, so one can expect 2S-multiplet binding of skyrmions.
Mind the Gap when Conditioning Amortised Inference in Sequential Latent-Variable Models ; Amortised inference enables scalable learning of sequential latent-variable models (LVMs) with the evidence lower bound (ELBO). In this setting, variational posteriors are often only partially conditioned. While the true posteriors depend, e.g., on the entire sequence of observations, approximate posteriors are only informed by past observations. This mimics the Bayesian filter: a mixture of smoothing posteriors. Yet, we show that the ELBO objective forces partially-conditioned amortised posteriors to approximate products of smoothing posteriors instead. Consequently, the learned generative model is compromised. We demonstrate these theoretical findings in three scenarios: traffic flow, handwritten digits, and aerial vehicle dynamics. Using fully-conditioned approximate posteriors, performance improves in terms of generative modelling and multi-step prediction.
A Critical Study of Cottenden et al.'s "An Analytical Model of the Motion of a Conformable Sheet Over a General Convex Surface in the Presence of Frictional Coupling" ; In our analysis, we show that what Cottenden et al. accomplish is a derivation of the ordinary capstan equation, and a solution for a dynamic membrane with both a zero Poisson's ratio and a zero mass density on a rigid right-circular cone. The authors state that the capstan equation holds true for an elastic obstacle and thus can be used to calculate the coefficient of friction between human skin and fabrics. However, using data that we gathered from human trials, we show that this claim cannot be substantiated, as it is unwise to use the capstan equation (i.e. belt-friction models in general) to calculate the friction between in-vivo skin and fabrics. This is due to the fact that such models assume a rigid foundation, while human soft tissue is deformable; a portion of the force applied to the fabric is thus expended on deforming the soft tissue, which in turn leads to the illusion of a higher coefficient of friction when belt-friction models are used.
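For reference, the ordinary capstan (belt-friction) relation at issue follows from force balance on an element of a sheet wrapped over a rigid obstacle with friction coefficient \(\mu\) and total contact angle \(\theta\):

```latex
\frac{\mathrm{d}T}{\mathrm{d}\theta} = \mu T
\quad\Longrightarrow\quad
T_{\mathrm{hold}} = T_{\mathrm{load}}\, e^{\mu\theta}.
```

The exponential tension ratio is derived under the assumption of a rigid foundation, which is precisely the assumption the critique argues fails for deformable in-vivo soft tissue.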
Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts ; Multimodal generative models should be able to learn a meaningful latent representation that enables a coherent joint generation of all modalities (e.g., images and text). Many applications also require the ability to accurately sample modalities conditioned on observations of a subset of the modalities. Often not all modalities may be observed for all training data points, so semi-supervised learning should be possible. In this study, we propose a novel product-of-experts (PoE) based variational autoencoder that has these desired properties. We benchmark it against a mixture-of-experts (MoE) approach and an approach of combining the modalities with an additional encoder network. An empirical evaluation shows that the PoE-based models can outperform the contrasted models. Our experiments support the intuition that PoE models are more suited for a conjunctive combination of modalities.
WeChat AI & ICT's Submission for DSTC9 Interactive Dialogue Evaluation Track ; We participate in the DSTC9 Interactive Dialogue Evaluation Track (Gunasekara et al. 2020), subtask 1 (Knowledge Grounded Dialogue) and subtask 2 (Interactive Dialogue). In subtask 1, we employ a pre-trained language model to generate topic-related responses and propose a response ensemble method for response selection. In subtask 2, we propose a novel Dialogue Planning Model (DPM) to capture conversation flow in the interaction with humans. We also design an integrated open-domain dialogue system containing pre-process, dialogue model, scoring model, and post-process modules, which can generate fluent, coherent, consistent, and human-like responses. We tie for 1st on human ratings and also obtain the highest Meteor and BERTScore in subtask 1, and rank 3rd on interactive human evaluation in subtask 2.
Trajectory optimization for contact-rich motions using implicit differential dynamic programming ; This paper presents a novel approach using sensitivity analysis for generalizing Differential Dynamic Programming (DDP) to systems characterized by implicit dynamics, such as those modelled via inverse dynamics and variational or implicit integrators. It leads to a more general formulation of DDP, enabling, for example, the use of the faster recursive Newton-Euler inverse dynamics. We leverage the implicit formulation for precise and exact contact modelling in DDP, where we focus on two contributions: (1) contact dynamics at the acceleration level, which enables high-order integration schemes; (2) a formulation using an invertible contact model in the forward pass and a closed-form solution in the backward pass to improve the numerical resolution of contacts. The performance of the proposed framework is validated (1) by comparing implicit versus explicit DDP for the swing-up of a double pendulum, and (2) by planning motions for two tasks using a single-leg model making multi-body contacts with the environment: standing up from the ground, where a priori contact enumeration is challenging, and maintaining balance under an external perturbation.
Exploring Transitivity in Neural NLI Models through Veridicality ; Despite the recent success of deep neural networks in natural language processing, the extent to which they can demonstrate human-like generalization capacities for natural language understanding remains unclear. We explore this issue in the domain of natural language inference (NLI), focusing on the transitivity of inference relations, a fundamental property for systematically drawing inferences. A model capturing transitivity can compose basic inference patterns and draw new inferences. We introduce an analysis method using synthetic and naturalistic NLI datasets involving clause-embedding verbs to evaluate whether models can perform transitivity inferences composed of veridical inferences and arbitrary inference types. We find that current NLI models do not perform consistently well on transitivity inference tasks, suggesting that they lack the generalization capacity for drawing composite inferences from provided training examples. The data and code for our analysis are publicly available at https://github.com/verypluming/transitivity.
Transformer-Based Deliberation for Two-Pass Speech Recognition ; Interactive speech recognition systems must generate words quickly while also producing accurate results. Two-pass models excel at these requirements by employing a first-pass decoder that quickly emits words and a second-pass decoder that requires more context but is more accurate. Previous work has established that a deliberation network can be an effective second-pass model. The model attends to two kinds of inputs at once: encoded audio frames and the hypothesis text from the first-pass model. In this work, we explore using transformer layers instead of long short-term memory (LSTM) layers for deliberation rescoring. In transformer layers, we generalize the encoder-decoder attention to attend to both encoded audio and first-pass text hypotheses. The output context vectors are then combined by a merger layer. Compared to LSTM-based deliberation, our best transformer deliberation achieves 7% relative word error rate improvement along with a 38% reduction in computation. We also compare against non-deliberation transformer rescoring, and find a 9% relative improvement.
Siamese Labels Auxiliary Learning ; In deep learning, auxiliary training has been widely used to assist the training of models. During the training phase, using auxiliary modules to assist training can improve the performance of the model. During the testing phase, auxiliary modules can be removed, so the number of test parameters is not increased. In this paper, we propose a novel auxiliary training method, Siamese Labels Auxiliary Learning (SiLa). Unlike Deep Mutual Learning (DML), SiLa emphasizes auxiliary learning and can be easily combined with DML. In general, the main contributions of this paper are to (1) propose SiLa learning, which improves the performance of common models without increasing test parameters; (2) compare SiLa with DML and prove that SiLa can improve the generalization of the model; and (3) apply SiLa to dynamic neural networks, showing that SiLa can be used with various types of network structures.
Syntactic and Semantic-driven Learning for Open Information Extraction ; One of the biggest bottlenecks in building accurate, high-coverage neural open IE systems is the need for large labelled corpora. The diversity of open-domain corpora and the variety of natural language expressions further exacerbate this problem. In this paper, we propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data by leveraging syntactic and semantic knowledge as noisier, higher-level supervision. Specifically, we first employ syntactic patterns as data labelling functions and pre-train a base model using the generated labels. Then we propose a syntactic and semantic-driven reinforcement learning algorithm, which can effectively generalize the base model to open situations with high accuracy. Experimental results show that our approach significantly outperforms the supervised counterparts, and can even achieve competitive performance with supervised state-of-the-art (SoA) models.
Learning to Predict Vehicle Trajectories with Model-based Planning ; Predicting the future trajectories of on-road vehicles is critical for autonomous driving. In this paper, we introduce a novel prediction framework called PRIME, which stands for Prediction with Model-based Planning. Unlike recent prediction works that utilize neural networks to model scene context and produce unconstrained trajectories, PRIME is designed to generate accurate and feasibility-guaranteed future trajectory predictions. PRIME guarantees trajectory feasibility by exploiting a model-based generator to produce future trajectories under explicit constraints and enables accurate multimodal prediction by utilizing a learning-based evaluator to select future trajectories. We conduct experiments on the large-scale Argoverse Motion Forecasting Benchmark, where PRIME outperforms the state-of-the-art methods in prediction accuracy, feasibility, and robustness under imperfect tracking.
Relationship-based Neural Baby Talk ; Understanding interactions between objects in an image is an important element for generating captions. In this paper, we propose a relationship-based neural baby talk (RNBT) model to comprehensively investigate several types of pairwise object interactions by encoding each image via three different relationship-based graph attention networks (GATs). We study three main relationships: spatial relationships to explore geometric interactions, semantic relationships to extract semantic interactions, and implicit relationships to capture hidden information that could not be modelled explicitly as above. We construct three relationship graphs with the objects in an image as nodes and the mutual relationships of pairwise objects as edges. By exploring features of neighbouring regions individually via GATs, we integrate different types of relationships into the visual features of each node. Experiments on the COCO dataset show that our proposed RNBT model outperforms state-of-the-art models trained on the COCO dataset in three image caption generation tasks.
Helical magnetic fields from Riemann coupling lead to baryogenesis ; The spectrum of energy density fluctuations, the baryon asymmetry, and coherent large-scale magnetic fields are three observables that provide crucial information on physics at very high energies. Inflation can only provide a mechanism to explain the density perturbations; the origin of primordial magnetic fields and the baryon asymmetry require physics beyond the standard models of cosmology and particle physics. In this work, we show that the mechanism that leads to primordial helical fields also leads to baryogenesis at the beginning of the radiation-dominated epoch. The model we consider consists of mass-dimension-6 operators that include a Riemann coupling between gravity and the electromagnetic field, without extending the Standard Model of particle physics. We explicitly show that the generation of primordial helical magnetic fields leads to baryogenesis. We further show that the model predicts the observed amount of baryon asymmetry of the Universe for a range of reheating temperatures consistent with observations.
Inference for Generative Capsule Models ; Capsule networks (see e.g. Hinton et al., 2018) aim to encode knowledge and reason about the relationship between an object and its parts. In this paper we specify a generative model for such data, and derive a variational algorithm for inferring the transformation of each object and the assignments of observed parts to the objects. We apply this model to (i) data generated from multiple geometric objects like squares and triangles ("constellations"), and (ii) data from a parts-based model of faces. Recent work by Kosiorek et al. (2019) has used amortized inference via stacked capsule autoencoders (SCAEs) to tackle this problem; our results show that we significantly outperform them where we can make comparisons (on the constellations data).
VDSM: Unsupervised Video Disentanglement with State-Space Modeling and Deep Mixtures of Experts ; Disentangled representations support a range of downstream tasks including causal reasoning, generative modeling, and fair machine learning. Unfortunately, disentanglement has been shown to be impossible without the incorporation of supervision or inductive bias. Given that supervision is often expensive or infeasible to acquire, we choose to incorporate structural inductive bias and present an unsupervised, deep state-space model for video disentanglement (VDSM). The model disentangles latent time-varying and dynamic factors via the incorporation of hierarchical structure with a dynamic prior and a mixture-of-experts decoder. VDSM learns separate disentangled representations for the identity of the object or person in the video and for the action being performed. We evaluate VDSM across a range of qualitative and quantitative tasks including identity and dynamics transfer, sequence generation, Fréchet Inception Distance, and factor classification. VDSM provides state-of-the-art performance and exceeds adversarial methods, even when the methods use additional supervision.
Elliptic solutions to matrix KP hierarchy and spin generalization of elliptic Calogero-Moser model ; We consider solutions of the matrix KP hierarchy that are elliptic functions of the first hierarchical time t_1 = x. It is known that the poles x_i and matrix residues at the poles rho_i^{alpha beta} = a_i^alpha b_i^beta of such solutions, as functions of the time t_2, move as particles of the spin generalization of the elliptic Calogero-Moser model (the elliptic Gibbons-Hermsen model). In this paper we establish the correspondence with the spin elliptic Calogero-Moser model for the whole matrix KP hierarchy. Namely, we show that the dynamics of poles and matrix residues of the solutions with respect to the k-th hierarchical time of the matrix KP hierarchy is Hamiltonian, with the Hamiltonian H_k obtained via an expansion of the spectral curve near the marked points. The Hamiltonians are identified with the Hamiltonians of the elliptic spin Calogero-Moser system with coordinates x_i and spin degrees of freedom a_i^alpha, b_i^beta.
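For orientation, the lowest Hamiltonian of the elliptic spin Calogero-Moser (Gibbons-Hermsen) system referred to above takes, schematically (up to normalization and with conventions that vary between references), the form

```latex
H_2 \;=\; \frac{1}{2}\sum_i p_i^2 \;-\; \sum_{i \neq j} F_{ij} F_{ji}\,\wp(x_i - x_j),
\qquad
F_{ij} \;=\; \sum_\alpha b_i^{\alpha} a_j^{\alpha},
```

where \(\wp\) is the Weierstrass elliptic function, the \(x_i\) are the pole positions, and the \(a_i^\alpha, b_i^\beta\) are the spin degrees of freedom; the higher Hamiltonians \(H_k\) generalize this flow to the higher hierarchical times.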
Optimal stratification of survival data via Bayesian nonparametric mixtures ; The stratified proportional hazards model represents a simple solution to account for heterogeneity within the data while keeping the multiplicative effect on the hazard function. Strata are typically defined a priori by resorting to the values taken by a categorical covariate. A general framework is proposed, which allows for the stratification of a generic accelerated life time model, including as a special case the Weibull proportional hazard model. The stratification is determined a posteriori by taking into account that strata might be characterized by different baseline survivals as well as different effects of the predictors. This is achieved by considering a Bayesian nonparametric mixture model and the posterior distribution it induces on the space of data partitions. The optimal stratification is then identified by means of the variation of information criterion and, in turn, stratum-specific inference is carried out. The performance of the proposed method and its robustness to the presence of right-censored observations are investigated by means of an extensive simulation study. A further illustration is provided by the analysis of a data set extracted from the University of Massachusetts AIDS Research Unit IMPACT Study.
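As a concrete reference point for the selection criterion, the variation of information between two candidate stratifications (given as label vectors) can be computed directly from their contingency counts. A minimal sketch of Meila's VI follows; it illustrates only the criterion, not the paper's posterior search over partitions:

```python
import numpy as np
from collections import Counter

def variation_of_information(z1, z2):
    """Variation of information (Meila, 2007) between two partitions of the
    same n items, each given as a vector of cluster labels.
    VI = H(Z1|Z2) + H(Z2|Z1); zero iff the partitions coincide."""
    n = len(z1)
    vi = 0.0
    for c1, n1 in Counter(z1).items():
        for c2, n2 in Counter(z2).items():
            n12 = sum(1 for a, b in zip(z1, z2) if a == c1 and b == c2)
            if n12 > 0:
                p12, p1, p2 = n12 / n, n1 / n, n2 / n
                # -p12 [log(p12/p1) + log(p12/p2)] sums to the two
                # conditional entropies
                vi -= p12 * (np.log(p12 / p1) + np.log(p12 / p2))
    return vi
```

Relabeled but identical partitions give VI = 0, which is why VI (rather than raw label agreement) is a sensible loss on the space of data partitions.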
Automatic Generation of Contrast Sets from Scene Graphs: Probing the Compositional Consistency of GQA ; Recent works have shown that supervised models often exploit data artifacts to achieve good test scores while their performance severely degrades on samples outside their training distribution. Contrast sets (Gardner et al., 2020) quantify this phenomenon by perturbing test samples in a minimal way such that the output label is modified. While most contrast sets were created manually, requiring intensive annotation effort, we present a novel method which leverages rich semantic input representation to automatically generate contrast sets for the visual question answering task. Our method computes the answer of perturbed questions, thus vastly reducing annotation cost and enabling thorough evaluation of models' performance on various semantic aspects (e.g., spatial or relational reasoning). We demonstrate the effectiveness of our approach on the GQA dataset and its semantic scene graph image representation. We find that, despite GQA's compositionality and carefully balanced label distribution, two high-performing models drop 13-17% in accuracy compared to the original test set. Finally, we show that our automatic perturbation can be applied to the training set to mitigate the degradation in performance, opening the door to more robust models.
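The core trick, computing the new answer programmatically instead of re-annotating, can be sketched in a few lines. The scene-graph schema and question template below are illustrative stand-ins, not GQA's actual format:

```python
# Toy contrast-set generation: minimally perturb a scene graph attribute and
# recompute the answer from the graph (schema and template are hypothetical).
scene = {"mug": {"color": "red", "on": "table"},
         "book": {"color": "blue", "on": "table"}}

def answer_color(obj, scene):
    return scene[obj]["color"]

def contrast_color_question(obj, scene, alternatives=("red", "blue", "green")):
    """Swap the queried object's color attribute and return the (unchanged)
    question together with its new, programmatically computed answer --
    a minimal perturbation that flips the label."""
    old = scene[obj]["color"]
    new = next(c for c in alternatives if c != old)
    perturbed = {k: dict(v) for k, v in scene.items()}  # leave input intact
    perturbed[obj]["color"] = new
    question = f"What color is the {obj}?"
    return question, answer_color(obj, perturbed)

q, a = contrast_color_question("mug", scene)
```

In the real pipeline the perturbation operates on GQA's scene graphs and functional question programs, but the annotation saving comes from exactly this pattern: the answer is derived, not labeled.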
Generalized infinite factorization models ; Factorization models express a statistical object of interest in terms of a collection of simpler objects. For example, a matrix or tensor can be expressed as a sum of rank-one components. However, in practice, it can be challenging to infer the relative impact of the different components as well as the number of components. A popular idea is to include infinitely many components having impact decreasing with the component index. This article is motivated by two limitations of existing methods: (1) lack of careful consideration of the within-component sparsity structure; and (2) no accommodation for grouped variables and other non-exchangeable structures. We propose a general class of infinite factorization models that address these limitations. Theoretical support is provided, practical gains are shown in simulation studies, and an ecology application focusing on modelling bird species occurrence is discussed.
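The "infinitely many components with decreasing impact" idea is often realized with a multiplicative gamma process prior on the factor loadings. The sketch below draws a loadings matrix under that prior (a standard construction used for illustration here, not this paper's generalized prior):

```python
import numpy as np

rng = np.random.default_rng(0)

def mgp_loadings(p, H, a1=2.0, a2=3.0, rng=rng):
    """Draw a p x H loadings matrix under a multiplicative gamma process
    prior: column h has precision tau_h = prod_{l<=h} delta_l with
    delta_l ~ Gamma, so for a2 > 1 the precisions grow and later columns
    shrink toward zero on average (decreasing component impact)."""
    delta = np.concatenate([rng.gamma(a1, 1.0, size=1),
                            rng.gamma(a2, 1.0, size=H - 1)])
    tau = np.cumprod(delta)                  # column-wise precisions
    return rng.normal(0.0, 1.0 / np.sqrt(tau), size=(p, H)), tau

Lam, tau = mgp_loadings(p=50, H=20)
```

In practice the infinite sequence is truncated at H columns; the prior makes the truncation error negligible because trailing columns carry vanishing weight.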
Copula Averaging for Tail Dependence in Insurance Claims Data ; Analysing dependent risks is an important task for insurance companies. A dependency is reflected in the fact that information about one random variable provides information about the likely distribution of values of another random variable. Insurance companies in particular must investigate such dependencies between different lines of business and the effects that an extreme loss event, such as an earthquake or hurricane, has across multiple lines of business simultaneously. Copulas provide a popular model-based approach to analysing the dependency between risks, and the coefficient of tail dependence is a measure of dependence for extreme losses. Besides commonly used empirical estimators for estimating the tail dependence coefficient, copula fitting can lead to estimation of such coefficients directly or can verify their existence. Generally, a range of copula models is available to fit a data set well, leading to multiple different tail dependence results; a method based on Bayesian model averaging is designed to obtain a unified estimate of tail dependence. In this article, this model-based coefficient estimation method is illustrated through a variety of copula fitting approaches and results are presented for several simulated data sets and also a real general insurance loss data set.
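For context, the empirical estimator that the copula-based approach is compared against evaluates the conditional exceedance probability on rank-transformed data at a high threshold. A minimal sketch (threshold choice and tie handling simplified):

```python
import numpy as np

def upper_tail_dependence(x, y, u=0.95):
    """Empirical estimate of the upper tail dependence coefficient
    lambda_U = lim_{u -> 1} P(F_Y(Y) > u | F_X(X) > u), evaluated at a
    finite threshold u on rank-based pseudo-observations (assumes no ties)."""
    n = len(x)
    rx = np.argsort(np.argsort(x)) / (n - 1)  # ranks rescaled to [0, 1]
    ry = np.argsort(np.argsort(y)) / (n - 1)
    exceed_x = rx > u
    return float(np.mean(ry[exceed_x] > u)) if exceed_x.any() else 0.0
```

The estimator is sensitive to the choice of u and noisy in the far tail, which is one motivation for fitting parametric copulas (whose lambda_U is known in closed form) and averaging across them.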
An augmentation strategy to mimic multi-scanner variability in MRI ; Most publicly available brain MRI datasets are very homogeneous in terms of scanner and protocols, and it is difficult for models that learn from such data to generalize to multi-center and multi-scanner data. We propose a novel data augmentation approach with the aim of approximating the variability in terms of intensities and contrasts present in real-world clinical data. We use a Gaussian Mixture Model-based approach to change tissue intensities individually, producing new contrasts while preserving anatomical information. We train a deep learning model on a single-scanner dataset and evaluate it on a multi-center and multi-scanner dataset. The proposed approach improves the generalization capability of the model to other scanners not present in the training data.
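The mechanism can be sketched with a simplified stand-in: assign each voxel to an intensity cluster (here by quantile binning instead of a fitted GMM) and apply a random intensity change per cluster, so the contrast changes while the anatomy, encoded by the cluster map, is preserved:

```python
import numpy as np

def augment_intensities(img, k=3, shift_scale=0.1, rng=None):
    """Simplified stand-in for GMM-based contrast augmentation: partition
    voxels into k intensity clusters by quantile binning (a proxy for GMM
    tissue components), then rescale each cluster's intensities by a random
    factor, yielding a new contrast with the same anatomy."""
    rng = rng or np.random.default_rng()
    edges = np.quantile(img, np.linspace(0, 1, k + 1)[1:-1])
    labels = np.digitize(img, edges)          # voxel -> tissue-like cluster
    out = img.astype(float).copy()
    for c in range(k):
        factor = 1.0 + rng.normal(0.0, shift_scale)
        out[labels == c] *= factor            # per-cluster intensity change
    return out
```

In the paper's version the clusters come from a fitted Gaussian Mixture Model over tissue intensities; the quantile binning here is only a self-contained substitute for illustration.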
Meta-Adversarial Inverse Reinforcement Learning for Decision-making Tasks ; Learning from demonstrations has made great progress over the past few years. However, it is generally data hungry and task specific. In other words, it requires a large amount of data to train a decent model on a particular task, and the model often fails to generalize to new tasks that have a different distribution. In practice, demonstrations from new tasks will be continuously observed and the data might be unlabeled or only partially labeled. Therefore, it is desirable for the trained model to adapt to new tasks that have limited data samples available. In this work, we build an adaptable imitation learning model based on the integration of Meta-learning and Adversarial Inverse Reinforcement Learning (Meta-AIRL). We exploit the adversarial learning and inverse reinforcement learning mechanisms to learn policies and reward functions simultaneously from available training tasks and then adapt them to new tasks with the meta-learning framework. Simulation results show that the adapted policy trained with Meta-AIRL can effectively learn from a limited number of demonstrations, and quickly reach performance comparable to that of the experts on unseen tasks.
On the limits of algorithmic prediction across the globe ; The impact of predictive algorithms on people's lives and livelihoods has been noted in medicine, criminal justice, finance, hiring and admissions. Most of these algorithms are developed using data and human capital from highly developed nations. We tested how well predictive models of human behavior trained in a developed country generalize to people in less developed countries by modeling global variation in 200 predictors of academic achievement on nationally representative student data for 65 countries. Here we show that state-of-the-art machine learning models trained on data from the United States can predict achievement with high accuracy and generalize to other developed countries with comparable accuracy. However, accuracy drops linearly with national development due to global variation in the importance of different achievement predictors, providing a useful heuristic for policymakers. Training the same model on national data yields high accuracy in every country, which highlights the value of local data collection.
Phenomenology of the Zee model for Dirac neutrinos and general neutrino interactions ; The Zee model for Dirac neutrinos is one of the simplest models featuring one-loop Dirac neutrino masses. The interactions between the new scalars (two singly-charged fields) and neutrinos induce general neutrino interactions (GNI) which, as a generalisation of the non-standard neutrino interactions, constitute an additional tool to probe models beyond the SM like this one. In this work, we consider a $U(1)_{B-L}$ gauge symmetry as responsible for the Diracness of the neutrinos and the radiative character of the neutrino masses. We determine the viable parameter space consistent with neutrino oscillation data, leptonic rare decays and collider constraints, and establish the most relevant experimental prospects regarding lepton flavor violation searches and GNI in future solar neutrino experiments.
Context Modeling in 3D Human Pose Estimation: A Unified Perspective ; Estimating 3D human pose from a single image suffers from severe ambiguity since multiple 3D joint configurations may have the same 2D projection. The state-of-the-art methods often rely on context modeling methods such as the pictorial structure model (PSM) or graph neural networks (GNN) to reduce ambiguity. However, there is no study that rigorously compares them side by side. So we first present a general formula for context modeling in which both PSM and GNN are its special cases. By comparing the two methods, we found that the end-to-end training scheme in GNN and the limb length constraints in PSM are two complementary factors to improve results. To combine their advantages, we propose ContextPose, based on an attention mechanism, that allows enforcing soft limb length constraints in a deep network. The approach effectively reduces the chance of getting absurd 3D pose estimates with incorrect limb lengths and achieves state-of-the-art results on two benchmark datasets. More importantly, the introduction of limb length constraints into deep networks enables the approach to achieve much better generalization performance.
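A soft limb length constraint can be made concrete as a differentiable penalty on deviations from reference bone lengths. This is an illustrative stand-in for how such a prior enters a deep network's loss, not ContextPose's attention-based formulation:

```python
import numpy as np

def limb_length_penalty(joints3d, limbs, lengths, w=1.0):
    """Soft limb-length prior as a penalty term: for each limb (i, j) with
    reference length L, penalize the squared deviation of the predicted
    joint distance from L. Added to a pose loss, this discourages absurd
    estimates with incorrect limb lengths without hard-constraining them."""
    penalty = 0.0
    for (i, j), L in zip(limbs, lengths):
        d = np.linalg.norm(joints3d[i] - joints3d[j])
        penalty += w * (d - L) ** 2
    return penalty
```

Reference lengths can come from anthropometric statistics or a per-subject calibration frame; the weight w trades off the prior against the data term.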