Interfaces with internal structures in generalized rock-paper-scissors models ; In this work we investigate the development of stable dynamical structures along interfaces separating domains belonging to enemy partnerships, in the context of cyclic predator-prey models with an even number of species $N \geq 8$. We use both stochastic and field theory simulations in one and two spatial dimensions, as well as analytical arguments, to describe the association at the interfaces of mutually neutral individuals belonging to enemy partnerships and to probe their role in the development of the dynamical structures at the interfaces. We identify an interesting behaviour associated with the symmetric or asymmetric evolution of the interface profiles depending on whether $N/2$ is odd or even, respectively. We also show that the macroscopic evolution of the interface network is not very sensitive to the internal structure of the interfaces. Although this work focuses on cyclic predator-prey models with an even number of species, we argue that the results are expected to be quite generic in the context of spatial stochastic May-Leonard models.
Flavour models with Dirac and fake gluinos ; In the context of supersymmetric models where the gauginos may have both Majorana and Dirac masses, we investigate the general constraints from flavour-changing processes on the scalar mass matrices. One finds that the chirality-flip suppression of flavour-changing effects usually invoked in the pure Dirac case holds in the mass insertion approximation but not in the general case, and fails in particular for inverted hierarchy models. We quantify the constraints in several flavour models which correlate fermion and scalar superpartner masses. We also discuss the limit of very large Majorana gaugino masses compared to the chiral adjoint and Dirac masses, where the remaining light eigenstate is the fake gaugino, including the consequences of suppressed couplings to quarks beyond flavour constraints.
Multisymplectic effective General Boundary Field Theory ; The transfer matrix in lattice field theory connects the covariant and the initial data frameworks; in spin foam models, it can be written as a composition of elementary cellular amplitudes (propagators). We present a framework for discrete spacetime classical field theory in which solutions to the field equations over elementary spacetime cells may be amalgamated if they satisfy simple gluing conditions matching the composition rules of cellular amplitudes in spin foam models. Furthermore, the formalism is endowed with a multisymplectic structure responsible for local conservation laws. Some models within our framework are effective theories modeling a system at a given scale. Our framework allows us to study coarse graining and the continuum limit.
Evidence synthesis for count distributions based on heterogeneous and incomplete aggregated data ; The analysis of count data is commonly done using Poisson models. Negative binomial models are a straightforward and readily motivated generalization for the case of overdispersed data, i.e., when the observed variance is greater than expected under a Poissonian model. Rate and overdispersion parameters then need to be considered jointly, which in general is not trivial. Here we are concerned with evidence synthesis in the case where the reporting of data is rather heterogeneous, i.e., events are reported either in terms of mean event counts, the proportion of event-free patients, or rate estimates and standard errors. Each of these figures carries some information about the relevant parameters, and it is the joint modeling that allows for coherent inference on the parameters of interest. The methods are motivated and illustrated by a systematic review in chronic obstructive pulmonary disease.
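As an illustration of how the three reporting formats above each constrain the same pair of negative binomial parameters, here is a minimal sketch (an assumed mean/dispersion parametrization, not code from the paper; the variable names are illustrative):

```python
import numpy as np

# Negative binomial with mean mu and dispersion (size) phi:
# variance = mu + mu**2 / phi, and P(X = 0) = (phi / (phi + mu))**phi.
mu, phi = 1.2, 0.8          # hypothetical rate and overdispersion parameters

mean_count = mu                                   # 1) reported mean event count
p_event_free = (phi / (phi + mu)) ** phi          # 2) reported proportion of event-free patients
var_count = mu + mu ** 2 / phi                    # 3) behind a rate estimate and its standard error

print(mean_count, p_event_free, var_count)
```

Each summary pins down a different function of (mu, phi), which is why joint modeling of all reported figures is what makes the parameters identifiable.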
Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs ; We present an asymptotic analysis of Viterbi Training (VT) and contrast it with a more conventional Maximum Likelihood (ML) approach to parameter estimation in Hidden Markov Models. While the ML estimator works by locally maximizing the likelihood of the observed data, VT seeks to maximize the probability of the most likely hidden state sequence. We develop an analytical framework based on a generating function formalism and illustrate it on an exactly solvable model of an HMM with one unambiguous symbol. For this particular model the ML objective function is continuously degenerate. The VT objective, in contrast, is shown to have only finite degeneracy. Furthermore, VT converges faster and results in sparser (simpler) models, thus realizing an automatic Occam's razor for HMM learning. For a more general scenario VT can be worse than ML but is still capable of correctly recovering most of the parameters.
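As a concrete reference point for the VT/ML contrast described above, the following is a minimal numpy sketch of Viterbi training (hard EM: decode, then re-estimate by counting) for a discrete-output HMM. It illustrates the generic procedure only, not the authors' analytical framework.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path for one observation sequence (log-space)."""
    T, K = len(obs), log_pi.size
    delta = np.zeros((T, K)); psi = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # K x K transition scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

def viterbi_training(seqs, K, M, n_iter=20, eps=1e-6):
    """Hard EM ('Viterbi training'): decode each sequence, then re-estimate by counting."""
    rng = np.random.default_rng(0)
    pi = rng.dirichlet(np.ones(K))
    A = rng.dirichlet(np.ones(K), size=K)
    B = rng.dirichlet(np.ones(M), size=K)
    for _ in range(n_iter):
        c_pi, c_A, c_B = np.full(K, eps), np.full((K, K), eps), np.full((K, M), eps)
        for obs in seqs:
            path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
            c_pi[path[0]] += 1
            for s, s2 in zip(path[:-1], path[1:]):
                c_A[s, s2] += 1
            for s, o in zip(path, obs):
                c_B[s, o] += 1
        pi = c_pi / c_pi.sum()
        A = c_A / c_A.sum(axis=1, keepdims=True)
        B = c_B / c_B.sum(axis=1, keepdims=True)
    return pi, A, B

toy = [np.array([0, 1, 1, 2, 0, 2]), np.array([2, 2, 1, 0, 1])]
pi, A, B = viterbi_training(toy, K=2, M=3)
```

A Baum-Welch (soft EM) variant would replace the single decoded path by posterior state probabilities, which is the ML counterpart the abstract compares against.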
Quantum speed limit for arbitrary initial states ; We investigate the generic bound on the minimal evolution time of an open dynamical quantum system. This quantum speed limit time is applicable to both mixed and pure initial states. We then apply this result to the damped Jaynes-Cummings model and the Ohmic-like dephasing model starting from a general time-evolved state. The bound of this time-dependent state at any point in time can be found. For the damped Jaynes-Cummings model, the corresponding bound first decreases and then increases in the Markovian dynamics, while in the non-Markovian regime the speed limit time shows an interesting periodic oscillatory behavior. For the case of the Ohmic-like dephasing model, this bound is gradually trapped at a fixed value. In addition, the roles of relativistic effects on the speed limit time for an observer in non-inertial frames are discussed.
Cosmological constraints from large-scale structure growth rate measurements ; We compile a list of 14 independent measurements of the large-scale structure growth rate between redshifts $0.067 \leq z \leq 0.8$ and use this to place constraints on model parameters of constant and time-evolving general-relativistic dark energy cosmologies. With the assumption that gravity is well modeled by general relativity, we find that growth-rate data provide restrictive cosmological parameter constraints. In combination with type Ia supernova apparent magnitude versus redshift data and Hubble parameter measurements, the growth rate data are consistent with the standard spatially-flat $\Lambda$CDM model, as well as with mildly evolving dark energy density cosmological models.
Planck 2013 and Superconformal Symmetry ; We explain why the concept of a spontaneously broken superconformal symmetry is useful to describe inflationary models favored by Planck. A non-minimal coupling of complex scalars to curvature, $N(X, \bar X)\, R$, is compulsory for superconformal symmetry. Here $N$ is the Kahler potential of the embedding moduli space, including the inflaton and the conformon. It appears that such a non-minimal coupling allows generic chaotic models of inflation to reach agreement with the observed $(n_s, r)$ values. We describe here the superconformal versions of the cosmological attractors whose bosonic part was presented in the lectures of A. Linde in this volume. A distinguishing feature of this class of models is that they tend to lead to very similar predictions which are not very sensitive to strong modifications of the theory. The superconformal symmetry underlying supergravity allows a universal description of a large class of models which agree with observations and predict the tensor-to-scalar ratio in the range $10^{-3} \lesssim r \lesssim 10^{-1}$.
Mixing asymmetries in B meson systems, the D0 like-sign dimuon asymmetry and generic New Physics ; The measurement of a large like-sign dimuon asymmetry $A^b_{SL}$ by the D0 experiment at the Tevatron departs noticeably from Standard Model expectations and it may be interpreted as a hint of physics beyond the Standard Model contributing to $\Delta B \neq 0$ transitions. In this work we analyse how the natural suppression of $A^b_{SL}$ in the SM can be circumvented by New Physics. We consider generic Standard Model extensions where the charged current mixing matrix is enlarged with respect to the usual $3\times 3$ unitary Cabibbo-Kobayashi-Maskawa matrix, and show how, within this framework, a significant enhancement over Standard Model expectations for $A^b_{SL}$ is easily reachable through enhancements of the semileptonic asymmetries $A^d_{SL}$ and $A^s_{SL}$ of both the $B^0_d$-$\bar B^0_d$ and $B^0_s$-$\bar B^0_s$ systems. Despite being insufficient to reproduce the D0 measurement, such deviations from SM expectations may be probed by the LHCb experiment.
Intrinsically Motivated Learning of Visual Motion Perception and Smooth Pursuit ; We extend the framework of efficient coding, which has been used to model the development of sensory processing in isolation, to model the development of the perception-action cycle. Our extension combines sparse coding and reinforcement learning so that sensory processing and behavior co-develop to optimize a shared intrinsic motivational signal: the fidelity of the neural encoding of the sensory input under resource constraints. Applying this framework to a model system consisting of an active eye behaving in a time-varying environment, we find that this generic principle leads to the simultaneous development of both smooth pursuit behavior and model neurons whose properties are similar to those of primary visual cortical neurons selective for different directions of visual motion. We suggest that this general principle may form the basis for a unified and integrated explanation of many perception-action loops.
Azimuthal Jet Tomography at RHIC and LHC ; A generic jet-energy loss model that is coupled to state-of-the-art hydrodynamic fields and interpolates between a wide class of running-coupling pQCD-based and AdS/CFT-inspired models is compared to recent data on the azimuthal and transverse momentum dependence of high-$p_T$ pion nuclear modification factors and high-$p_T$ elliptic flow measured at RHIC and LHC. We find that RHIC data are surprisingly consistent with the various scenarios considered. However, extrapolations to LHC energies favor running-coupling pQCD-based models of jet-energy loss. While conformal holographic models are shown to be inconsistent with the data, recent non-conformal generalizations of AdS holography may provide an alternative description.
Bayesian Inference for NMR Spectroscopy with Applications to Chemical Quantification ; Nuclear magnetic resonance (NMR) spectroscopy exploits the magnetic properties of atomic nuclei to discover the structure, reaction state and chemical environment of molecules. We propose a probabilistic generative model and inference procedures for NMR spectroscopy. Specifically, we use a weighted sum of trigonometric functions undergoing exponential decay to model free induction decay (FID) signals. We discuss the challenges in estimating the components of this general model (amplitudes, phase shifts, frequencies, decay rates, and noise variances) and offer practical solutions. We compare with conventional Fourier transform spectroscopy for estimating the relative concentrations of chemicals in a mixture, using synthetic and experimentally acquired FID signals. We find the proposed model is particularly robust to low signal-to-noise ratios (SNR) and overlapping peaks in the Fourier transform of the FID, enabling accurate predictions (e.g., 1% sensitivity at low SNR) which are not possible with conventional spectroscopy (5% sensitivity).
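A minimal sketch of the FID signal model described above, assuming the standard form of a weighted sum of exponentially decaying sinusoids plus Gaussian noise (the particular parameter values are illustrative, not from the paper):

```python
import numpy as np

def fid_signal(t, amps, freqs, phases, decays, noise_std=0.01, seed=0):
    """Weighted sum of exponentially decaying sinusoids plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    signal = np.zeros_like(t)
    for a, f, p, d in zip(amps, freqs, phases, decays):
        signal += a * np.exp(-d * t) * np.cos(2 * np.pi * f * t + p)
    return signal + rng.normal(0.0, noise_std, size=t.shape)

t = np.linspace(0.0, 1.0, 4096)                  # acquisition times (s)
y = fid_signal(t, amps=[1.0, 0.4], freqs=[120.0, 135.0],
               phases=[0.0, 0.3], decays=[5.0, 8.0])
spectrum = np.abs(np.fft.rfft(y))                # conventional FT spectrum for comparison
```

Bayesian inference over the amplitudes, frequencies, phases, decay rates and noise level of such a model is what the paper contrasts with simply reading peak areas off the Fourier spectrum.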
Simultaneous Inference for High-dimensional Linear Models ; This paper proposes a bootstrap-assisted procedure to conduct simultaneous inference for high-dimensional sparse linear models based on the recent de-sparsified Lasso estimator (van de Geer et al., 2014). Our procedure allows the dimension of the parameter vector of interest to be exponentially larger than the sample size, and it automatically accounts for the dependence within the de-sparsified Lasso estimator. Moreover, our simultaneous testing method can be naturally coupled with margin screening (Fan and Lv, 2008) to enhance its power in sparse testing with a reduced computational cost, or with the step-down method (Romano and Wolf, 2005) to provide strong control of the family-wise error rate. In theory, we prove that our simultaneous testing procedure asymptotically achieves the pre-specified significance level, and enjoys certain optimality in terms of its power even when the model errors are non-Gaussian. Our general theory is also useful in studying the support recovery problem. To broaden the applicability, we further extend our main results to generalized linear models with convex loss functions. The effectiveness of our methods is demonstrated via simulation studies.
Embeddings of the New Massive Gravity ; Using different types of embeddings of equations of motion we investigate the existence of generalizations of the New Massive Gravity (NMG) model with the same particle content (massive gravitons). By using the Weyl symmetry as a guiding principle for the embeddings, we show that the Noether gauge embedding approach leads us to a sixth-order model in derivatives with either a massive or a massless ghost. If the Weyl symmetry is implemented by means of a Stueckelberg field, we obtain a new scalar-tensor model for massive gravitons. It is ghost free and Weyl invariant at the linearized level. The model can be nonlinearly completed into a scalar field coupled to the NMG theory. The elimination of the scalar field leads to a nonlocal modification of the NMG. We also prove, to all orders in derivatives, that there is no local, ghost-free embedding of the linearized NMG equations of motion around Minkowski space when written in terms of one symmetric tensor. Regarding that point, NMG differs from the Fierz-Pauli theory, since in the latter case we can replace the Einstein-Hilbert action by specific $f(R, \Box R)$ generalizations and still keep the theory ghost free at the linearized level.
Structure of Kahler potential for D-term inflationary attractor models ; Minimal chaotic models of D-term inflation predict too large primordial tensor perturbations. Although they can be made consistent with observations by utilizing higher-order terms in the Kahler potential, the expansion is not controlled in the absence of symmetries. We comprehensively study the conditions on the Kahler potential for D-term plateau-type potentials and discuss their symmetry. They include the alpha-attractor model with a massive vector supermultiplet and its generalization leading to pole inflation of arbitrary order. We extend the models so that they can describe the Coulomb phase, the gauge anomaly is cancelled, and fields other than the inflaton are stabilized during inflation. We also point out a generic issue for large-field D-term inflation, namely that the masses of the non-inflaton fields tend to exceed the Planck scale.
Guard Your Daggers and Traces: Properties of Guarded Corecursion ; Motivated by the recent interest in models of guarded corecursion, we study their equational properties. We formulate axioms for guarded fixpoint operators generalizing the axioms of iteration theories of Bloom and Esik. Models of these axioms include both standard (e.g., cpo-based) models of iteration theories and models of guarded recursion such as complete metric spaces or the topos of trees studied by Birkedal et al. We show that the standard result on the satisfaction of all Conway axioms by a unique dagger operation generalizes to the guarded setting. We also introduce the notion of a guarded trace operator on a category, and we prove that guarded trace and guarded fixpoint operators are in one-to-one correspondence. Our results are intended as first steps leading, hopefully, towards a future description of classifying theories for guarded recursion.
Action-Affect Classification and Morphing using Multi-Task Representation Learning ; Most recent work has focused on affect from facial expressions, and not as much on the body. This work focuses on body affect analysis. Affect does not occur in isolation: humans usually couple affect with an action in natural interactions; for example, a person could be talking and smiling. Recognizing body affect in sequences requires efficient algorithms to capture both the micro movements that differentiate between happy and sad and the macro variations between different actions. We depart from traditional approaches for time-series data analytics by proposing a multi-task learning model that learns a shared representation that is well-suited for action-affect classification as well as generation. For this paper we choose Conditional Restricted Boltzmann Machines as our building block. We propose a new model that enhances the CRBM model with a factored multi-task component to become Multi-Task Conditional Restricted Boltzmann Machines (MT-CRBMs). We evaluate our approach on two publicly available datasets, the Body Affect dataset and the Tower Game dataset, and show superior classification performance over the state-of-the-art, as well as the generative abilities of our model.
A Model of Charge Transfer Excitons: Diffusion, Spin Dynamics, and Magnetic Field Effects ; In this letter we explore how the microscopic dynamics of charge transfer (CT) excitons are influenced by the presence of an external magnetic field in disordered molecular semiconductors. This influence is driven by the dynamic interplay between the spin and spatial degrees of freedom of the electron-hole pair. To account for this interplay, we have developed a numerical framework that combines a traditional model of quantum spin dynamics with a coarse-grained model of stochastic charge transport. This combination provides a general and efficient methodology for simulating the effects of magnetic fields on CT state dynamics, therefore providing a basis for revealing the microscopic origin of experimentally observed magnetic field effects. We demonstrate that simulations carried out on our model are capable of reproducing experimental results as well as generating theoretical predictions related to the efficiency of organic electronic materials.
Cross-modal Supervision for Learning Active Speaker Detection in Video ; In this paper, we show how to use audio to supervise the learning of active speaker detection in video. Voice Activity Detection (VAD) guides the learning of the vision-based classifier in a weakly supervised manner. The classifier uses spatio-temporal features to encode upper body motion (facial expressions and gesticulations associated with speaking). We further improve a generic model for active speaker detection by learning person-specific models. Finally, we demonstrate the online adaptation of generic models learnt on one dataset to previously unseen people in a new dataset, again using audio (VAD) for weak supervision. The use of temporal continuity overcomes the lack of clean training data. We are the first to present an active speaker detection system that learns on one audio-visual dataset and automatically adapts to speakers in a new dataset. This work can be seen as an example of how the availability of multi-modal data allows us to learn a model without the need for supervision, by transferring knowledge from one modality to another.
DeepSoft: A vision for a deep model of software ; Although software analytics has experienced rapid growth as a research area, it has not yet reached its full potential for wide industrial adoption. Most of the existing work in software analytics still relies heavily on costly manual feature engineering processes, and mainly addresses traditional classification problems, as opposed to predicting future events. We present a vision for DeepSoft, an end-to-end generic framework for modeling software and its development process to predict future risks and recommend interventions. DeepSoft, partly inspired by human memory, is built upon the powerful deep-learning-based Long Short-Term Memory architecture that is capable of learning long-term temporal dependencies that occur in software evolution. Such deep-learned patterns of software can be used to address a range of challenging problems such as code and task recommendation and prediction. DeepSoft provides a new approach for research into modeling of source code, risk prediction and mitigation, developer modeling, and automatically generating code patches from bug reports.
Linear Density Perturbations in Multi-field Coupled Quintessence ; We study the behaviour of linear perturbations in multi-field coupled quintessence models. Using gauge-invariant linear cosmological perturbation theory we provide the full set of governing equations for this class of models, and solve the system numerically. We apply the numerical code to generate growth functions for various examples, and compare these both to the standard $\Lambda$CDM model and to current and future observational bounds. Finally, we examine the applicability of the small-scale approximation, often used to calculate growth functions in quintessence models, in light of upcoming experiments such as SKA and Euclid. We find that the deviation of the full equation results for large $k$ modes from the approximation exceeds the experimental uncertainty for these future surveys. The numerical code, PYESSENCE, written in Python, will be publicly available.
Emergent organization in a model market ; We study the collective behavior of interacting agents in a simple model of market economics originally introduced by Norrelykke and Bak. A general theoretical framework for interacting traders on an arbitrary network is presented, with the interaction consisting of buying (namely, consumption) and selling (namely, production) of commodities. Extremal dynamics is introduced by having the agent with the least profit in the market readjust prices, causing the market to self-organize. We study this model market on regular lattices in two dimensions as well as on random complex networks; in the critical state, fluctuations in an activity signal exhibit properties that are characteristic of avalanches observed in models of self-organized criticality, and these can be described by power-law distributions.
Lepton-flavour violation in a Pati-Salam model with gauged flavour symmetry ; Combining Pati-Salam (PS) and flavour symmetries in a renormalisable setup, we devise a scenario which produces realistic masses for the charged leptons. Flavour-symmetry breaking scalar fields in the adjoint representations of the PS gauge group are responsible for generating different flavour structures for up- and down-type quarks as well as for leptons. The model is characterised by new heavy fermions which mix with the Standard Model quarks and leptons. In particular, the partners for the third fermion generation induce sizeable sources of flavour violation. Focusing on the charged-lepton sector, we scrutinise the model with respect to its implications for lepton-flavour violating processes such as $\mu \rightarrow e\gamma$, $\mu \rightarrow 3e$ and muon conversion in nuclei.
Reheating predictions in Gravity Theories with Derivative Coupling ; We investigate the inflationary predictions of a simple Horndeski theory where the inflaton scalar field has a non-minimal derivative coupling (NMDC) to the Einstein tensor. The NMDC is well motivated for the construction of successful models of inflation; nevertheless, its inflationary predictions are not observationally distinct. We show that it is possible to probe the effects of the NMDC on the CMB observables by taking into account both the dynamics of the inflationary slow-roll phase and the subsequent reheating. We perform a comparative study between representative inflationary models with canonical fields minimally coupled to gravity and models with NMDC. We find that inflation models with dominant NMDC generically predict a higher reheating temperature and a different range for the tilt of the scalar perturbation spectrum $n_s$ and the tensor-to-scalar ratio $r$, potentially testable by current and future CMB experiments.
Lévy-Vasicek Models and the Long-Bond Return Process ; The classical derivation of the well-known Vasicek model for interest rates is reformulated in terms of the associated pricing kernel. An advantage of the pricing kernel method is that it allows one to generalize the construction to the Lévy-Vasicek case, avoiding issues of market incompleteness. In the Lévy-Vasicek model the short rate is taken in the real-world measure to be a mean-reverting process with a general one-dimensional Lévy driver admitting exponential moments. Expressions are obtained for the Lévy-Vasicek bond prices and interest rates, along with a formula for the return on a unit investment in the long bond, defined by $L_t = \lim_{T \rightarrow \infty} P_{tT}/P_{0T}$, where $P_{tT}$ is the price at time $t$ of a $T$-maturity discount bond. We show that the pricing kernel of a Lévy-Vasicek model is uniformly integrable if and only if the long rate of interest is strictly positive.
Competing nematic interactions in a generalized XY model in two and three dimensions ; We study a generalization of the XY model with an additional nematic-like term through extensive numerical simulations and finite-size techniques, both in two and three dimensions. While the original model favors local alignment, the extra term induces angles of $2\pi/q$ between neighboring spins. We focus here on the $q=8$ case (while presenting new results for other values of $q$ as well), whose phase diagram is much richer than the well-known $q=2$ case. In particular, the model presents not only continuous, standard transitions between Berezinskii-Kosterlitz-Thouless (BKT) phases as in $q=2$, but also infinite-order transitions involving intermediate, competition-driven phases absent for $q=2$ and 3. Besides presenting multiple transitions, our results show that having vortices decoupling at a transition is not a sufficient condition for it to be of BKT type.
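A hedged sketch of the kind of model described above: the energy form with a coupling delta balancing the ferromagnetic and nematic-like terms, H = -sum over bonds of [delta*cos(dtheta) + (1-delta)*cos(q*dtheta)], is a commonly used parametrization assumed here (the abstract does not spell it out), together with a single-spin Metropolis update.

```python
import numpy as np

def local_energy(theta, i, j, q, delta):
    """Energy of site (i, j) with its four nearest neighbours (periodic lattice)."""
    L = theta.shape[0]
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        d = theta[i, j] - theta[(i + di) % L, (j + dj) % L]
        e -= delta * np.cos(d) + (1.0 - delta) * np.cos(q * d)
    return e

def metropolis_sweep(theta, q, delta, T, rng):
    """One Monte Carlo sweep: propose a new angle per site, accept with Metropolis rule."""
    L = theta.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old = theta[i, j]
        e_old = local_energy(theta, i, j, q, delta)
        theta[i, j] = rng.uniform(0.0, 2 * np.pi)
        if rng.random() >= np.exp(-(local_energy(theta, i, j, q, delta) - e_old) / T):
            theta[i, j] = old                      # reject the move
    return theta

rng = np.random.default_rng(1)
spins = rng.uniform(0.0, 2 * np.pi, size=(32, 32))
spins = metropolis_sweep(spins, q=8, delta=0.5, T=0.5, rng=rng)
```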
The cancellation mechanism in the predictions of electric dipole moments ; The interpretation of the baryon asymmetry of the Universe necessitates CP violation beyond the Standard Model (SM). We present a general cancellation mechanism in the theoretical predictions of the electron electric dipole moment (EDM), quark chromo-EDMs, and Weinberg operators. A relatively large CP violation in the Higgs sector is allowed by the current electron EDM constraint released by the ACME collaboration in 2013, and by the recent $^{199}$Hg EDM experiment. The cancellation mechanism can be induced by a mass splitting of the heavy Higgs bosons around $\sim \mathcal{O}(0.1-1)$ GeV, and the extent of the mass degeneracy determines the magnitude of the CP-violating phase. We explicate this point by investigating the CP-violating two-Higgs-doublet model and the minimal supersymmetric Standard Model. The cancellation mechanism is general when there are CP violation and mixing in the Higgs sector of new physics models. The CP-violating phases in this scenario can be excluded or detected by the projected $^{225}$Ra EDM experiments with precision reaching $\sim 10^{-28}\ e\cdot$cm, as well as by future colliders.
The Hackbusch conjecture on tensor formats (part two) ; We prove a conjecture of W. Hackbusch in greater generality than in our previous article. Here we consider the Tensor Train (TT) model with an arbitrary number of leaves and a corresponding almost binary tree for the Hierarchical Tucker (HT) model, i.e. the deepest tree with the same number of leaves. Our main result is an algorithm that computes the flattening rank of a generic tensor in a Tensor Network State (TNS) model on a given tree with respect to any flattening coming from the combinatorics of the space. The methods also imply that the tensor rank (also called the CP-rank) of most tensors in a TNS model grows exponentially with the number of leaves for any shape of the tree.
Parameter estimators of random intersection graphs with thinned communities ; This paper studies a statistical network model generated by a large number of randomly sized overlapping communities, where any pair of nodes sharing a community is linked with probability $q$ via the community. In the special case with $q=1$ the model reduces to a random intersection graph which is known to generate high levels of transitivity also in the sparse context. The parameter $q$ adds a degree of freedom and leads to a parsimonious and analytically tractable network model with tunable density, transitivity, and degree fluctuations. We prove that the parameters of this model can be consistently estimated in the large and sparse limiting regime using moment estimators based on partially observed densities of links, 2-stars, and triangles.
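A minimal generator sketch for the model described above (the community-size distribution and parameter values are illustrative assumptions; the paper's estimators are not reproduced here):

```python
import itertools
import numpy as np

def thinned_intersection_graph(n_nodes, n_communities, mean_size, q, seed=0):
    """Overlapping communities of random size; pairs sharing a community are
    linked with probability q via that community (q = 1 gives the plain
    random intersection graph)."""
    rng = np.random.default_rng(seed)
    edges = set()
    for _ in range(n_communities):
        size = min(n_nodes, rng.poisson(mean_size))           # randomly sized community
        members = rng.choice(n_nodes, size=size, replace=False)
        for u, v in itertools.combinations(sorted(members), 2):
            if rng.random() < q:                              # thinning within the community
                edges.add((u, v))
    return edges

g = thinned_intersection_graph(n_nodes=500, n_communities=200, mean_size=5, q=0.7)
```

Counting links, 2-stars and triangles in graphs sampled this way is the kind of partially observed data the moment estimators in the paper are built on.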
Real-time Prediction of Intermediate-Horizon Automotive Collision Risk ; Advanced collision avoidance and driver hand-off systems can benefit from the ability to accurately predict, in real time, the probability that a vehicle will be involved in a collision within an intermediate horizon of 10 to 20 seconds. The rarity of collisions in real-world data poses a significant challenge to developing this capability because, as we demonstrate empirically, intermediate-horizon risk prediction depends heavily on high-dimensional driver behavioral features. As a result, a large amount of data is required to fit an effective predictive model. In this paper, we assess whether simulated data can help alleviate this issue. Focusing on highway driving, we present a three-step approach for generating data and fitting a predictive model capable of real-time prediction. First, high-risk automotive scenes are generated using importance sampling on a learned Bayesian network scene model. Second, collision risk is estimated through Monte Carlo simulation. Third, a neural network domain adaptation model is trained on real and simulated data to address discrepancies between the two domains. Experiments indicate that simulated data can mitigate issues resulting from collision rarity, thereby improving risk prediction in real-world data.
Gravitational waves from first-order electroweak phase transition in models with the $U(1)_X$ gauge symmetry ; We consider a Standard Model extension equipped with a dark sector where the $U(1)_X$ Abelian gauge symmetry is spontaneously broken by the dark Higgs mechanism. In this framework, we investigate patterns of the electroweak phase transition as well as those of the dark phase transition, and examine the detectability of gravitational waves (GWs) generated by such strongly first-order phase transitions. It is pointed out that the collider bounds on the properties of the discovered Higgs boson exclude a part of the parameter space that could otherwise generate detectable GWs. After imposing various constraints on this model, it is shown that GWs produced by multi-step phase transitions are detectable at future space-based interferometers such as LISA and DECIGO, if the dark photon is heavier than 25 GeV. Furthermore, we discuss the complementarity of dark photon searches or dark matter searches with the GW observations in these models with the dark gauge symmetry.
Learning via social awareness: Improving a deep generative sketching model with facial feedback ; In the quest towards general artificial intelligence (AI), researchers have explored developing loss functions that act as intrinsic motivators in the absence of external rewards. This paper argues that such research has overlooked an important and useful intrinsic motivator: social interaction. We posit that making an AI agent aware of implicit social feedback from humans can allow for faster learning of more generalizable and useful representations, and could potentially impact AI safety. We collect social feedback in the form of facial expression reactions to samples from Sketch RNN, an LSTM-based variational autoencoder (VAE) designed to produce sketch drawings. We use a Latent Constraints GAN (LC-GAN) to learn from the facial feedback of a small group of viewers, by optimizing the model to produce sketches that it predicts will lead to more positive facial expressions. We show in multiple independent evaluations that the model trained with facial feedback produces sketches that are more highly rated and induce significantly more positive facial expressions. Thus, we establish that implicit social feedback can improve the output of a deep learning model.
Teaching Machines to Code: Neural Markup Generation with Visual Attention ; We present a neural transducer model with visual attention that learns to generate LaTeX markup of a real-world math formula given its image. Applying sequence modeling and transduction techniques that have been very successful across modalities such as natural language, image, handwriting, speech and audio, we construct an image-to-markup model that learns to produce syntactically and semantically correct LaTeX markup code over 150 words long and achieves a BLEU score of 89, improving upon the previous state of the art for the Im2Latex problem. We also demonstrate with heat-map visualization how attention helps in interpreting the model and can pinpoint (detect and localize) symbols on the image accurately, despite having been trained without any bounding box data.
On Estimating Multi-Attribute Choice Preferences using Private Signals and Matrix Factorization ; Revealed preference theory studies the possibility of modeling an agent's revealed preferences and the construction of a consistent utility function. However, modeling an agent's choices over preference orderings is not always practical and demands strong assumptions on human rationality and data-acquisition abilities. Therefore, we propose a simple generative choice model where agents are assumed to generate the choice probabilities based on latent factor matrices that capture their choice evaluation across multiple attributes. Since the multi-attribute evaluation is typically hidden within the agent's psyche, we consider a signaling mechanism where agents are provided with choice information through private signals, so that the agent's choices provide more insight about his or her latent evaluation across multiple attributes. We estimate the choice model via a novel multi-stage matrix factorization algorithm that minimizes the average deviation of the factor estimates from the choice data. Simulation results are presented to validate the estimation performance of our proposed algorithm.
Phenomenology of the Higgs sector of a Dimension-7 Neutrino Mass Generation Mechanism ; In this paper, we revisit the dimension-7 neutrino mass generation mechanism based on the addition of an isospin-3/2 scalar quadruplet and two vector-like isotriplet leptons to the Standard Model. We discuss the LHC phenomenology of the charged scalars of this model, complemented by the electroweak precision and lepton flavor violation constraints. We pay particular attention to the triply charged and doubly charged components. We focus on the same-sign trilepton signatures originating from the triply charged scalars and find a discovery reach of 600-950 GeV at 3 ab$^{-1}$ of integrated luminosity at the LHC. On the other hand, the doubly charged Higgs has been an object of collider searches for a long time, and we show how the present bounds on its mass depend on the particle spectrum of the theory. A strong constraint on the model parameter space can also arise from the measured decay rate of the Standard Model Higgs to a pair of photons.
On the Dynamics of Near-Extremal Black Holes ; We analyse the dynamics of near-extremal Reissner-Nordstrom black holes in asymptotically four-dimensional Anti-de Sitter space (AdS$_4$). We work in the spherically symmetric approximation and study the thermodynamics and the response to a probe scalar field. We find that the behaviour of the system, at low energies and to leading order in our approximations, is well described by the Jackiw-Teitelboim (JT) model of gravity. In fact, this behaviour can be understood from symmetry considerations and arises due to the breaking of time reparametrisation invariance. The JT model has been analysed in considerable detail recently and related to the behaviour of the SYK model. Our results indicate that features in these models which arise from symmetry considerations alone are more general and are present quite universally in near-extremal black holes.
Understanding and Enhancing the Transferability of Adversarial Examples ; State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently, an adversary can leverage this to attack deployed systems without any query, which severely hinders the application of deep learning, especially in areas where security is crucial. In this work, we systematically study two classes of factors that might influence the transferability of adversarial examples. One concerns model-specific factors, including network architecture, model capacity and test accuracy. The other is the local smoothness of the loss function used for constructing adversarial examples. Based on this understanding, a simple but effective strategy is proposed to enhance transferability. We call it the variance-reduced attack, since it utilizes the variance-reduced gradient to generate adversarial examples. The effectiveness is confirmed by a variety of experiments on both the CIFAR-10 and ImageNet datasets.
Neural Aesthetic Image Reviewer ; Recently, there has been rising interest in perceiving image aesthetics. Existing works treat image aesthetics as a classification or regression problem. To extend the cognition from rating to reasoning, a deeper understanding of aesthetics should be based on revealing why a high or low aesthetic score should be assigned to an image. From such a point of view, we propose a model referred to as the Neural Aesthetic Image Reviewer, which can not only give an aesthetic score for an image, but also generate a textual description explaining why the image leads to a plausible rating score. Specifically, we propose two multi-task architectures based on shared aesthetically semantic layers and task-specific embedding layers at a high level for performance improvement on different tasks. To facilitate research on this problem, we collect the AVA-Reviews dataset, which contains 52,118 images and 312,708 comments in total. Through multi-task learning, the proposed models can rate aesthetic images as well as produce comments in an end-to-end manner. It is confirmed that the proposed models outperform the baselines according to the performance evaluation on the AVA-Reviews dataset. Moreover, we demonstrate experimentally that our model can generate textual reviews related to aesthetics, which are consistent with human perception.
On the validity of the cosmic no-hair conjecture in an anisotropic inflationary model ; We present the main results of our recent investigations on the validity of the cosmic no-hair conjecture, proposed by Hawking and his colleagues long ago, in the framework of an anisotropic inflationary model proposed by Kanno, Soda, and Watanabe. As a result, we show that the cosmic no-hair conjecture appears to be generally violated in the Kanno-Soda-Watanabe model for both canonical and non-canonical scalar fields, due to the existence of a non-trivial coupling term between the scalar and electromagnetic fields. However, we also show that the validity of the cosmic no-hair conjecture is ensured once an unusual scalar field called the phantom field, whose kinetic energy term is negative definite, is introduced into the Kanno-Soda-Watanabe model.
Explaining Away Syntactic Structure in Semantic Document Representations ; Most generative document models act on bag-of-words input in an attempt to focus on the semantic content and thereby partially forego syntactic information. We argue that it is preferable to keep the original word order intact and explicitly account for the syntactic structure instead. We propose an extension to the Neural Variational Document Model (Miao et al., 2016) that does exactly that, separating local syntactic context from the global semantic representation of the document. Our model builds on the variational autoencoder framework to define a generative document model based on next-word prediction. We name our approach Sequence-Aware Variational Autoencoder since, in contrast to its predecessor, it operates on the true input sequence. In a series of experiments we observe stronger topicality of the learned representations as well as increased robustness to syntactic noise in our training data.
The Holographic Space-Time Model of Cosmology ; This essay outlines the Holographic Space-Time (HST) theory of cosmology and its relation to conventional theories of inflation. The predictions of the theory are compatible with observations, and one must hope for data on primordial gravitational waves or non-Gaussian fluctuations to distinguish it from conventional models. The model predicts an early era of structure formation, prior to the Big Bang. Understanding the fate of those structures requires complicated simulations that have not yet been done. The result of those calculations might falsify the model, or might provide a very economical framework for explaining dark matter and the generation of the baryon asymmetry.
A quantized physical framework for understanding the working mechanism of ion channels ; A quantized physical framework, called the five-anchor model, is developed for a general understanding of the working mechanism of ion channels. According to the hypotheses of this model, the following two basic physical principles are assigned to each anchor: the polarity change induced by an electron transition and the mutual repulsion and attraction induced by an electrostatic force. Consequently, many unique phenomena, such as fast and slow inactivation, the stochastic gating pattern and constant conductance of a single ion channel, the difference between electrical and optical stimulation (optogenetics), nerve conduction block and the generation of an action potential, become intrinsic features of this physical model. Moreover, this model also provides a foundation for the probability equation used to calculate the results of electrical stimulation in our previous CP theory.
The Information Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Models ; A large number of objectives have been proposed to train latent variable generative models. We show that many of them are Lagrangian dual functions of the same primal optimization problem. The primal problem optimizes the mutual information between latent and visible variables, subject to the constraints of accurately modeling the data distribution and performing correct amortized inference. By choosing to maximize or minimize mutual information, and choosing different Lagrange multipliers, we obtain different objectives including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, beta-VAE, adversarial autoencoders, AVB, AS-VAE and InfoVAE. Based on this observation, we provide an exhaustive characterization of the statistical and computational trade-offs made by all the training objectives in this class of Lagrangian duals. Next, we propose a dual optimization method where we optimize model parameters as well as the Lagrange multipliers. This method achieves Pareto optimal solutions in terms of optimizing information and satisfying the constraints.
An Integrated Framework for Process Discovery Algorithm Evaluation ; Process mining offers techniques to exploit event data by providing insights and recommendations to improve business processes. The growing number of algorithms for process discovery has raised the question of which algorithms perform best on a given event log. Current frameworks for empirically evaluating discovery techniques depend on the notation used (behaviorally identical models may give different results) and cannot provide more general statements about populations of models. Therefore, this paper proposes a new integrated evaluation framework that uses a classification approach to make the evaluation independent of the modeling notation. Furthermore, it is founded on experimental design to ensure the generalization of results. It supports two main evaluation objectives: benchmarking process discovery algorithms, and sensitivity analysis, i.e. studying the effect of model and log characteristics on a discovery algorithm's accuracy. The framework is designed as a scientific workflow, which enables automated, extendable and shareable evaluation experiments. An extensive experiment including four discovery algorithms and six control-flow characteristics validates the relevance and flexibility of the framework. Ultimately, the paper aims to advance the state of the art for evaluating process discovery techniques.
A Note on Tsallis Holographic Dark Energy ; We explore the effects of considering various infrared (IR) cutoffs, including the particle horizon, Ricci horizon and Granda-Oliveros (GO) cutoffs, on the properties of the Tsallis holographic dark energy (THDE) model, proposed inspired by the Tsallis generalized entropy formalism. Interestingly enough, we find that for the particle horizon as the IR cutoff, the obtained THDE model can describe the accelerated universe. This is in contrast to the usual HDE model, which cannot lead to an accelerated universe if one considers the particle horizon as the IR cutoff. We also investigate the cosmological consequences of THDE under the assumption of a mutual interaction between the dark sectors of the Universe. It is shown that the evolution history of the Universe can be described by these IR cutoffs and thus the current cosmic acceleration can also be realized. The sound instability of the THDE models for each cutoff is also explored separately.
Semantic Compression of Episodic Memories ; Storing knowledge of an agent's environment in the form of a probabilistic generative model has been established as a crucial ingredient in a multitude of cognitive tasks. Perception has been formalised as probabilistic inference over the state of latent variables, whereas in decision making the model of the environment is used to predict likely consequences of actions. Such generative models have earlier been proposed to underlie semantic memory but it remained unclear if this model also underlies the efficient storage of experiences in episodic memory. We formalise the compression of episodes in the normative framework of information theory and argue that semantic memory provides the distortion function for compression of experiences. Recent advances and insights from machine learning allow us to approximate semantic compression in naturalistic domains and contrast the resulting deviations in compressed episodes with memory errors observed in the experimental literature on human memory.
Configuration model for correlation matrices preserving the node strength ; Correlation matrices are a major type of multivariate data. To examine the properties of a given correlation matrix, a common practice is to compare the same quantity between the original correlation matrix and reference correlation matrices, such as those derived from random matrix theory, that partially preserve properties of the original matrix. We propose a model to generate such reference correlation and covariance matrices for a given matrix. Correlation matrices are often analysed as networks, which are heterogeneous across nodes in terms of each node's total connectivity to other nodes. Given this background, the present algorithm generates random networks that preserve the expectation of each node's total connectivity to other nodes, akin to configuration models for conventional networks. Our algorithm is derived from the maximum entropy principle. We apply the proposed algorithm to the measurement of clustering coefficients and to community detection, both of which require a null model to assess the statistical significance of the obtained results.
A Recursive PLS (Partial Least Squares) based Approach for Enterprise Threat Management ; Most of the existing solutions to enterprise threat management are preventive approaches prescribing means to prevent policy violations, with varying degrees of success. In this paper we consider the complementary scenario where a number of security violations have already occurred, or security threats or vulnerabilities have been reported, and a security administrator needs to generate an optimal response to these security events. We present a principled approach to study and model the human expertise in responding to the emergent threats owing to these security events. A recursive Partial Least Squares based adaptive learning model is defined using a factorial analysis of the security events, together with a method for estimating the effect of global context-dependent semantic information used by the security administrators. The presented model is theoretically optimal and operationally recursive in nature, so as to deal with the set of security events being generated continuously. We discuss the underlying challenges and ways in which the model could be operationalized in centralized versus decentralized, and real-time versus batch processing modes.
Enhancing Sentence Embedding with Generalized Pooling ; Pooling is an essential component of a wide variety of sentence representation and embedding models. This paper explores generalized pooling methods to enhance sentence embedding. We propose vector-based multi-head attention that includes the widely used max pooling, mean pooling, and scalar self-attention as special cases. The model benefits from properly designed penalization terms to reduce redundancy in multi-head attention. We evaluate the proposed model on three different tasks: natural language inference (NLI), author profiling, and sentiment classification. The experiments show that the proposed model achieves significant improvement over strong sentence-encoding-based methods, resulting in state-of-the-art performance on four datasets. The proposed approach can be easily implemented for more problems than we discuss in this paper.
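A hedged sketch of vector-based multi-head attention pooling over token states; the shapes, the two-layer scoring network and the softmax-over-time formulation are assumptions consistent with the abstract, not the authors' exact architecture.

```python
import numpy as np

def multi_head_vector_pooling(H, W1, W2):
    """H: (T, d) token states; W1: (d, dh); W2: (dh, heads * d).

    Each head produces a per-dimension softmax over the T positions, so the
    pooled output is a (heads, d) attention-weighted combination of token states.
    Max pooling and mean pooling are limiting/special cases of such weights."""
    T, d = H.shape
    scores = np.tanh(H @ W1) @ W2                  # (T, heads * d)
    heads = scores.shape[1] // d
    scores = scores.reshape(T, heads, d)
    A = np.exp(scores - scores.max(axis=0))        # softmax over the T positions
    A = A / A.sum(axis=0, keepdims=True)
    return np.einsum('thd,td->hd', A, H)           # attention-weighted pooling

rng = np.random.default_rng(0)
H = rng.normal(size=(12, 8))                       # 12 tokens, hidden size 8
pooled = multi_head_vector_pooling(H, rng.normal(size=(8, 16)), rng.normal(size=(16, 4 * 8)))
```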
Mixture Matrix Completion ; Completing a data matrix X has become a ubiquitous problem in modern data science, with applications in recommender systems, computer vision, and network inference, to name a few. One typical assumption is that X is low-rank. A more general model assumes that each column of X corresponds to one of several low-rank matrices. This paper generalizes these models to what we call mixture matrix completion (MMC): the case where each entry of X corresponds to one of several low-rank matrices. MMC is a more accurate model for recommender systems, and brings more flexibility to other completion and clustering problems. We make four fundamental contributions about this new model. First, we show that MMC is theoretically possible (well-posed). Second, we give its precise information-theoretic identifiability conditions. Third, we derive the sample complexity of MMC. Finally, we give a practical algorithm for MMC with performance comparable to the state-of-the-art for simpler related problems, both on synthetic and real data.
Universal Perceptual Grouping ; In this work we aim to develop a universal sketch grouper, that is, a grouper that can be applied to sketches of any category in any domain to group constituent strokes/segments into semantically meaningful object parts. The first obstacle to this goal is the lack of large-scale datasets with grouping annotation. To overcome this, we contribute the largest sketch perceptual grouping (SPG) dataset to date, consisting of 20,000 unique sketches evenly distributed over 25 object categories. Furthermore, we propose a novel deep universal perceptual grouping model. The model is learned with both generative and discriminative losses. The generative losses improve the generalisation ability of the model to unseen object categories and datasets. The discriminative losses include a local grouping loss and a novel global grouping loss to enforce global grouping consistency. We show that the proposed model significantly outperforms the state-of-the-art groupers. Further, we show that our grouper is useful for a number of sketch analysis tasks including sketch synthesis and fine-grained sketch-based image retrieval (FG-SBIR).
Counterfactual Normalization: Proactively Addressing Dataset Shift and Improving Reliability Using Causal Mechanisms ; Predictive models can fail to generalize from training to deployment environments because of dataset shift, posing a threat to model reliability and the safety of downstream decisions made in practice. Instead of using samples from the target distribution to reactively correct dataset shift, we use graphical knowledge of the causal mechanisms relating variables in a prediction problem to proactively remove relationships that do not generalize across environments, even when these relationships may depend on unobserved variables (violations of the no-unobserved-confounders assumption). To accomplish this, we identify variables with unstable paths of statistical influence and remove them from the model. We also augment the causal graph with latent counterfactual variables that isolate unstable paths of statistical influence, allowing us to retain stable paths that would otherwise be removed. Our experiments demonstrate that models that remove vulnerable variables and use estimates of the latent variables transfer better, often outperforming in the target domain despite some accuracy loss in the training domain.
Using Regular Languages to Explore the Representational Capacity of Recurrent Neural Architectures ; The presence of Long Distance Dependencies (LDDs) in sequential data poses significant challenges for computational models. Various recurrent neural architectures have been designed to mitigate this issue. In order to test these state-of-the-art architectures, there is a growing need for rich benchmarking datasets. However, one of the drawbacks of existing datasets is the lack of experimental control with regard to the presence and/or degree of LDDs. This lack of control limits the analysis of model performance in relation to the specific challenge posed by LDDs. One way to address this is to use synthetic data having the properties of subregular languages. The degree of LDDs within the generated data can be controlled through the k parameter, the length of the generated strings, and by choosing appropriate forbidden strings. In this paper, we explore the capacity of different RNN extensions to model LDDs by evaluating these models on a sequence of SP-k synthesized datasets, where each subsequent dataset exhibits a longer degree of LDD. Even though SP-k are simple languages, the presence of LDDs does have a significant impact on the performance of recurrent neural architectures, thus making them prime candidates for benchmarking tasks.
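A hedged sketch of an SP-k (strictly piecewise) data generator of the kind described above: strings are kept only if they contain none of the forbidden subsequences (scattered, not contiguous). The alphabet, forbidden set and string length below are illustrative, not the paper's benchmark configuration.

```python
import random

def contains_subsequence(string, pattern):
    """True if pattern occurs in string as a (possibly scattered) subsequence."""
    it = iter(string)
    return all(symbol in it for symbol in pattern)

def generate_spk_dataset(alphabet, forbidden, length, n_samples, seed=0):
    """Rejection-sample strings that avoid every forbidden length-k subsequence."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n_samples:
        s = ''.join(rng.choice(alphabet) for _ in range(length))
        if not any(contains_subsequence(s, f) for f in forbidden):
            samples.append(s)
    return samples

# SP-2 example: forbid the scattered subsequence "ab"; larger k and longer
# strings stretch the dependency over longer distances.
data = generate_spk_dataset(alphabet='abcd', forbidden=['ab'], length=10, n_samples=100)
```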
Measuring Semantic Abstraction of Multilingual NMT with Paraphrase Recognition and Generation Tasks ; In this paper, we investigate whether multilingual neural translation models learn stronger semantic abstractions of sentences than bilingual ones. We test this hypothesis by measuring the perplexity of such models when applied to paraphrases of the source language. The intuition is that an encoder produces better representations if a decoder is capable of recognizing synonymous sentences in the same language, even though the model is never trained for that task. In our setup, we add 16 different auxiliary languages to a bidirectional bilingual baseline model (English-French) and test it with in-domain and out-of-domain paraphrases in English. The results show that the perplexity is significantly reduced in each of the cases, indicating that meaning can be grounded in translation. This is further supported by a study on paraphrase generation that we also include at the end of the paper.
Robust Text Classifier on Test-Time Budgets ; We propose a generic and interpretable learning framework for building a robust text classification model that achieves accuracy comparable to full models under test-time budget constraints. Our approach learns a selector to identify words that are relevant to the prediction task and passes them to the classifier for processing. The selector is trained jointly with the classifier and directly learns to work with the classifier. We further propose a data aggregation scheme to improve the robustness of the classifier. Our learning framework is general and can be incorporated with any type of text classification model. On real-world data, we show that the proposed approach improves the performance of a given classifier and speeds up the model with only a minor loss in accuracy.
Modelling Irregular Spatial Patterns using Graph Convolutional Neural Networks ; The understanding of geographical reality is a process of data representation and pattern discovery. Former studies mainly adopted continuous-field models to represent spatial variables and to investigate the underlying spatial continuity/heterogeneity in the regular spatial domain. In this article, we introduce a more generalized model based on graph convolutional neural networks (GCNs) that can capture the complex parameters of spatial patterns underlying graph-structured spatial data, which generally contain both Euclidean spatial information and non-Euclidean feature information. A trainable semi-supervised prediction framework is proposed to model the spatial distribution patterns of intra-urban points of interest (POI) check-ins. This work demonstrates the feasibility of GCNs in complex geographic decision problems and provides a promising tool to analyze irregular spatial data.
A Variational Feature Encoding Method of 3D Object for Probabilistic Semantic SLAM ; This paper presents a feature encoding method for complex 3D objects yielding high-level semantic features. Recent object recognition approaches have become important for semantic simultaneous localization and mapping (SLAM). However, the probabilistic observation model for 3D objects has received little consideration, even though the shape of a 3D object basically follows a complex probability distribution. Furthermore, since a mobile robot equipped with a range sensor observes only a single view, much information about the object shape is discarded. These limitations are the major obstacles to semantic SLAM and view-independent loop closure using 3D object shapes as features. In order to enable numerical analysis for Bayesian inference, we approximate the true observation model of 3D objects by tractable distributions. Since the observation likelihood can be obtained from the generative model, we formulate the true generative model for 3D objects with Bayesian networks. To capture these complex distributions, we apply a variational autoencoder. To analyze the approximated distributions and encoded features, we perform classification with maximum likelihood estimation and shape retrieval.
Multi-Hop Knowledge Graph Reasoning with Reward Shaping ; Multi-hop reasoning is an effective approach for query answering (QA) over incomplete knowledge graphs (KGs). The problem can be formulated in a reinforcement learning (RL) setup, where a policy-based agent sequentially extends its inference path until it reaches a target. However, in an incomplete KG environment, the agent receives low-quality rewards corrupted by false negatives in the training data, which harms generalization at test time. Furthermore, since no golden action sequence is used for training, the agent can be misled by spurious search trajectories that incidentally lead to the correct answer. We propose two modeling advances to address both issues: (1) we reduce the impact of false-negative supervision by adopting a pretrained one-hop embedding model to estimate the reward of unobserved facts; (2) we counter the sensitivity to spurious paths of on-policy RL by forcing the agent to explore a diverse set of paths using randomly generated edge masks. Our approach significantly improves over existing path-based KGQA models on several benchmark datasets and is comparable to or better than embedding-based models.
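The reward-shaping idea can be sketched as follows: when the agent's terminal entity is not a known answer, the binary reward is replaced by the score of a pretrained one-hop embedding model. The `embedding_score` callable below is a hypothetical stand-in for such a scorer; this is a sketch of the idea, not the paper's implementation.

```python
def shaped_reward(source_entity, query_relation, answer_entity, target_answers,
                  embedding_score):
    """Terminal reward for the path-search agent.

    Known correct answers receive the full reward of 1; otherwise the reward is the
    plausibility of the unobserved fact (source, relation, answer) as judged by a
    pretrained one-hop embedding scorer mapped into [0, 1].
    `embedding_score` is a hypothetical user-supplied callable.
    """
    if answer_entity in target_answers:
        return 1.0
    return embedding_score(source_entity, query_relation, answer_entity)

# Toy usage with a dummy scorer that always returns a mild reward.
print(shaped_reward("Q1", "bornIn", "Q7", {"Q42"}, lambda s, r, o: 0.3))
```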
Modeling overland flow from local inflows in almost no time, using Self-Organizing Maps ; Physically-based overland flow models are computationally demanding, hindering their use for real-time applications. Therefore, the development of fast and reasonably accurate overland flow models is needed if they are to be used to support flood mitigation decision making. In this study, we investigate the potential of Self-Organizing Maps (SOMs) to rapidly generate water depth and flood extent results. To conduct the study, we developed a flood-simulation-specific SOM, using cellular-automata flood model results together with a synthetic DEM and inflow hydrograph. The preliminary results show that the water depth and flood extent results produced by the SOM are reasonably accurate and are obtained in a very short period of time. Based on this, SOMs appear to have the potential to provide critical flood information to support real-time flood mitigation decisions. The findings presented would, however, require further investigation to reach general conclusions; such investigations may include the consideration of real terrain representations, real water supply networks, and realistic inflows from pipe bursts.
Integrable spin-1/2 Richardson-Gaudin XYZ models in an arbitrary magnetic field ; We establish the most general class of spin-1/2 integrable Richardson-Gaudin models, including an arbitrary magnetic field, returning a fully anisotropic XYZ model. The restriction to spin-1/2 relaxes the usual integrability constraints, allowing for a general solution where the couplings between spins lack the usual antisymmetric properties of Richardson-Gaudin models. The full set of conserved charges is constructed explicitly and shown to satisfy a set of quadratic equations, allowing for the numerical treatment of a fully anisotropic central spin in an external magnetic field. While this approach does not provide expressions for the exact eigenstates, it allows their eigenvalues to be obtained, and expectation values of local observables can then be calculated from the Hellmann-Feynman theorem.
Phenomenology using PARTONS software of Generalized Parton Distribution models built from Light-Front Wavefunctions ; We present a procedure aiming at extending to the full kinematic domain a Generalized Parton Distribution (GPD) obtained from a finite-order truncation in Fock space. This method allows us to fulfill both polynomiality and positivity at the same time and can be applied to any given model of light-front wavefunctions (LFWFs). In particular, we illustrate this on a three-body truncated wavefunction of the chiral quark soliton model and show how a systematic phenomenology of GPD models based on LFWFs can be achieved with the help of the PARTONS framework, here using DVCS data. This paves the way for a unified phenomenology of GPDs and TMDs at the level of LFWFs, with the final goal of hadron tomography.
Modeling of nonlinear audio effects with end-to-end deep neural networks ; In the context of music production, distortion effects are mainly used for aesthetic reasons and are usually applied to electric musical instruments. Most existing methods for nonlinear modeling are either simplified or optimized for a very specific circuit. In this work, we investigate deep learning architectures for audio processing and aim to find a general-purpose end-to-end deep neural network that performs modeling of nonlinear audio effects. We show that the network models various nonlinearities, and we discuss its generalization capabilities among different instruments.
Plane Partition Realization of Web of W-algebra Minimal Models ; Recently, Gaiotto and Rapcak (GR) proposed a new family of vertex operator algebras (VOAs) as the symmetry appearing at an intersection of five-branes, to which they refer as Y-algebras. Prochazka and Rapcak then proposed to interpret the Y-algebra as a truncation of the affine Yangian, whose modules are directly connected to plane partitions (PP). They also developed GR's idea to generate new VOAs by connecting plane partitions through an infinite leg shared by them, referred to as the web of W-algebras (WoW). In this paper, we demonstrate that a double truncation of PP gives the minimal models of such VOAs. For a single PP, it generates all the minimal model irreducible representations of the W-algebra. We find that the rule connecting two PPs is more involved than those in the literature when the U(1) charge connecting two PPs is negative. For the simplest nontrivial WoW, the N=2 superconformal algebra, we demonstrate that the improved rule precisely reproduces the known characters of the minimal models.
Optimal stochastic modelling with unitary quantum dynamics ; Identifying and extracting the past information relevant to the future behaviour of stochastic processes is a central task in the quantitative sciences. Quantum models offer a promising approach to this, allowing for accurate simulation of future trajectories whilst using less past information than any classical counterpart. Here we introduce a class of phase-enhanced quantum models, representing the most general means of causal simulation with a unitary quantum circuit. We show that the resulting constructions can display advantages over previous state-of-the-art methods both in the amount of information they need to store about the past, and in the minimal memory dimension they require to store this information. Moreover, we find that these two features are generally competing factors in optimisation, leading to an ambiguity in what constitutes the optimal model, a phenomenon that does not manifest classically. Our results thus simultaneously offer new quantum advantages for stochastic simulation and illustrate further qualitative differences in behaviour between classical and quantum notions of complexity.
Term structure modeling for multiple curves with stochastic discontinuities ; We develop a general term structure framework that takes stochastic discontinuities explicitly into account. Stochastic discontinuities are a key feature of interest rate markets, as shown, for example, by the jumps of the term structures at the monetary policy meetings of the ECB. We provide a general analysis of multiple curve markets under minimal assumptions in an extended HJM framework and provide a fundamental theorem of asset pricing based on NAFLVR. The approach with stochastic discontinuities permits embedding market models directly, unifying seemingly different modeling philosophies. We also develop a tractable class of models based on affine semimartingales, going beyond the requirement of stochastic continuity.
A Whole-Body Model Predictive Control Scheme Including External Contact Forces and CoM Height Variations ; In this paper, we present an approach for generating a variety of whole-body motions for a humanoid robot. We extend the available Model Predictive Control (MPC) approaches for walking on flat terrain to plan both the vertical motion of the Center of Mass (CoM) and external contact forces consistent with a given task. The optimization problem comprises three stages: planning the CoM vertical motion, the joint angles, and the contact forces. The choice of external contact (e.g., hand contact with the object or environment) among all available locations, and the appropriate times to reach and maintain a contact, are all computed automatically within the algorithm. The presented algorithm benefits from the simplicity of the Linear Inverted Pendulum Model (LIPM), while it overcomes the common limitations of this model and enables us to generate a variety of whole-body motions through external contacts. Simulation and experimental implementation of several whole-body actions in multi-contact scenarios on a humanoid robot show the capability of the proposed algorithm.
Efficient Wave Optics Modeling of Nanowire Solar Cells Using Rigorous Coupled Wave Analysis ; We investigate the accuracy of rigorous coupled wave analysis (RCWA) for near-field computations within cylindrical GaAs nanowire solar cells and find excellent accuracy with low computational cost at long incident wavelengths, but poor accuracy at short incident wavelengths. These near fields give the carrier generation rate, and their accurate determination is essential for device modeling. We implement two techniques for increasing the accuracy of the near fields generated by RCWA, and give some guidance on the parameters required for convergence along with an estimate of their associated computation times. The first improvement removes Gibbs-phenomenon artifacts from the RCWA fields, and the second uses the extremely well-converged far-field absorption to rescale the local fields. These improvements allow a computational speedup between 30 and 1000 times for spectrally integrated calculations, depending on the density of the near fields desired. Some spectrally resolved quantities, especially at short wavelengths, remain expensive, but RCWA is still an excellent method for performing those calculations. These improvements open up the possibility of using RCWA for low-cost optical modeling in a full optoelectronic device model of nanowire solar cells.
Generalization in anticausal learning ; The ability to learn and act in novel situations is still a prerogative of animate intelligence, as current machine learning methods mostly fail when moving beyond the standard i.i.d. setting. What is the reason for this discrepancy? Most machine learning tasks are anticausal, i.e., we infer causes (labels) from effects (observations). Typically, in supervised learning we build systems that try to directly invert causal mechanisms. Instead, in this paper we argue that strong generalization capabilities crucially hinge on searching and validating meaningful hypotheses, requiring access to a causal model. In such a framework, we want to find a cause that leads to the observed effect. Anticausal models are used to drive this search, but a causal model is required for validation. We investigate the fundamental differences between causal and anticausal tasks, discuss implications for topics ranging from adversarial attacks to disentangling factors of variation, and provide extensive evidence from the literature to substantiate our view. We advocate for incorporating causal models in supervised learning to shift the paradigm from inference only to search and validation.
Teacher-Student Compression with Generative Adversarial Networks ; More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Teacher-student compression (TSC), also known as distillation, alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. However, when fresh data is unavailable for the compression task, the teacher's training data is typically reused, leading to suboptimal compression. In this work, we propose to augment the compression dataset with synthetic data from a generative adversarial network (GAN) designed to approximate the training data distribution. Our GAN-assisted TSC (GAN-TSC) significantly improves student accuracy for expensive models such as large random forests and deep neural networks on both tabular and image datasets. Building on these results, we propose a comprehensive metric, the TSC Score, to evaluate the quality of synthetic datasets based on their induced TSC performance. The TSC Score captures both data diversity and class affinity, and we illustrate its benefits over the popular Inception Score in the context of image classification.
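A minimal sketch of GAN-assisted teacher-student compression: the student is fit on the teacher's predictions over the original data augmented with generator samples. The `sample_synthetic` interface is hypothetical, standing in for a GAN trained to approximate the teacher's input distribution, and hard teacher labels are used here in place of full soft-label distillation for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

def gan_assisted_compression(teacher, X_real, sample_synthetic, n_synth,
                             student=None):
    """Fit a cheap student on the teacher's predictions over real + synthetic data.

    `sample_synthetic(n)` is a hypothetical stand-in for a trained GAN generator.
    """
    X = np.vstack([X_real, sample_synthetic(n_synth)])
    y_teacher = teacher.predict(X)                      # teacher labels the transfer set
    student = student or DecisionTreeClassifier(max_depth=8)
    student.fit(X, y_teacher)
    return student

# Toy usage: Gaussian noise stands in for the generator.
X, y = np.random.randn(500, 10), np.random.randint(0, 2, 500)
teacher = RandomForestClassifier(n_estimators=100).fit(X, y)
student = gan_assisted_compression(teacher, X, lambda n: np.random.randn(n, 10), 1000)
```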
Applied Cosmography: A Pedagogical Review ; Based on the cosmological principle only, the method of describing the evolution of the Universe called cosmography is in fact a kinematics of cosmological expansion. The effectiveness of cosmography lies in the fact that it allows, based on the results of observations, a strict selection of models that do not contradict the cosmological principle. It is important that the introduction of new components (dark matter, dark energy, or even more mysterious entities) does not affect the relationship between the kinematic characteristics (cosmographic parameters). This paper shows that, within the framework of cosmography, the parameters of any model that satisfies the cosmological principle (the Universe is homogeneous and isotropic on large scales) can be expressed through cosmographic parameters. The proposed approach to finding the parameters of cosmological models has many advantages. We emphasize that all the obtained results are exact, since they follow from identity transformations. The procedure can be generalized to the case of models with interaction between components.
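For concreteness, the standard cosmographic parameters and the resulting model-independent expansion of the luminosity distance (for a spatially flat universe) are the textbook definitions below; they are quoted as background and are not taken from the paper itself.

```latex
% Requires amsmath for \dddot and \ddddot.
\[
  H \equiv \frac{\dot a}{a}, \qquad
  q \equiv -\frac{1}{H^{2}}\,\frac{\ddot a}{a}, \qquad
  j \equiv \frac{1}{H^{3}}\,\frac{\dddot a}{a}, \qquad
  s \equiv \frac{1}{H^{4}}\,\frac{\ddddot a}{a},
\]
% so that, for a spatially flat universe, the luminosity distance admits the
% model-independent expansion
\[
  d_L(z) = \frac{z}{H_0}\left[ 1 + \frac{1}{2}\,(1 - q_0)\,z
           - \frac{1}{6}\left(1 - q_0 - 3 q_0^{2} + j_0\right) z^{2}
           + \mathcal{O}(z^{3}) \right].
\]
```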
Efficient transfer learning and online adaptation with latent variable models for continuous control ; Traditional model-based RL relies on hand-specified or learned models of the transition dynamics of the environment. These methods are sample efficient and facilitate learning in the real world, but fail to generalize to subtle variations in the underlying dynamics, e.g., due to differences in mass, friction, or actuators across robotic agents or across time. We propose using variational inference to learn an explicit latent representation of unknown environment properties that accelerates learning and facilitates generalization on novel environments at test time. We use online Bayesian inference of these learned latents to rapidly adapt online to changes in environments without retaining large replay buffers of recent data. Combined with a neural network ensemble that models the dynamics and captures uncertainty over the dynamics, our approach demonstrates positive transfer during training and online adaptation on the continuous control task HalfCheetah.
Arctic Boundaries of the Ice Model on Three-Bundle Domains ; In this paper we consider the six-vertex model at ice point on an arbitrary three-bundle domain, which is a generalization of the domain-wall ice model on the square (or, equivalently, of a uniformly random alternating sign matrix). We show that this model exhibits the arctic boundary phenomenon, whose boundary is given by a union of explicit algebraic curves. This was originally predicted by Colomo and Sportiello in 2016 as one of the initial applications of a general heuristic that they introduced for locating arctic boundaries, called the geometric tangent method. Our proof uses a probabilistic analysis of non-crossing directed path ensembles to provide a mathematical justification of their tangent method heuristic in this case, which might be of independent interest.
A model for superconducting sulfur hydrides ; Due to the value of the isotope shift in sulfur hydrides, a phonon-mediated pairing scenario of superconductivity, consistent with the Bardeen-Cooper-Schrieffer (BCS) framework, is generally accepted for these high-temperature superconductors. Knowing that a large electronic density of states enhances T_c, generalized Fermi surface topologies are used to increase it. A multiband model within the BCS framework is proposed in this work for the description of sulfur hydride superconductors. The model is used to describe some properties of the H$_3$S superconductor. Strong-coupling effects are taken into account with the effective McMillan approximation, and the isotope coefficient is evaluated as a function of the coupling parameter as well as other relevant parameters of the model.
Synergy Effect between Convolutional Neural Networks and the Multiplicity of SMILES for Improvement of Molecular Prediction ; In our study, we demonstrate the synergy effect between convolutional neural networks and the multiplicity of SMILES. The model we propose, the so-called Convolutional Neural Fingerprint (CNF) model, reaches the accuracy of traditional descriptors such as Dragon (Mauri et al. [22]), RDKit (Landrum [18]), CDK2 (Willighagen et al. [43]) and PyDescriptor (Masand and Rastija [20]). Moreover, the CNF model generally performs better than highly fine-tuned traditional descriptors, especially on small datasets, which is of great interest for the chemical field, where datasets are generally small due to experimental costs, the availability of molecules, or accessibility to private databases. We evaluate the CNF model along with SMILES augmentation during both training and testing. To the best of our knowledge, this is the first time that such a methodology has been presented. We show that using the multiplicity of SMILES during training acts as a regulariser and therefore avoids overfitting, and can be seen as ensemble learning when considered for testing.
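SMILES augmentation of this kind can be realized by enumerating randomized (non-canonical) SMILES strings for each molecule, e.g. with RDKit; the snippet below is a generic sketch rather than the authors' exact pipeline, and the RDKit options shown may differ between library versions.

```python
from rdkit import Chem

def augment_smiles(smiles, n_variants=10):
    """Enumerate alternative (non-canonical) SMILES spellings of one molecule.

    Randomized atom-ordered SMILES are one common way to realize the
    "multiplicity of SMILES" used for train- and test-time augmentation.
    """
    mol = Chem.MolFromSmiles(smiles)
    variants = {Chem.MolToSmiles(mol, canonical=False, doRandom=True)
                for _ in range(n_variants)}
    return sorted(variants)

print(augment_smiles("CCO"))   # several equivalent spellings of ethanol
```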
Shortcut Matrix Product States and its applications ; Matrix Product States (MPS), also known as the Tensor Train (TT) decomposition in mathematics, were originally proposed for describing (especially one-dimensional) quantum systems, and have recently found use in various applications such as compressing high-dimensional data, supervised kernel linear classifiers, and unsupervised generative modeling. However, when applied to systems that are not defined on one-dimensional lattices, a serious drawback of the MPS is the exponential decay of correlations, which limits its power in capturing long-range dependencies among variables in the system. To alleviate this problem, we propose to introduce long-range interactions, which act as shortcuts, into the MPS, resulting in a new model, the Shortcut Matrix Product States (SMPS). When chosen properly, the shortcuts can significantly decrease the correlation length of the MPS while preserving the computational efficiency. We develop efficient training methods of SMPS for various tasks, establish some of their mathematical properties, and show how to find a good location to add shortcuts. Finally, using extensive numerical experiments, we evaluate its performance in a variety of applications, including function fitting, partition function calculation of the 2D Ising model, and unsupervised generative modeling of handwritten digits, to illustrate its advantages over vanilla matrix product states.
A Parametric Top-View Representation of Complex Road Scenes ; In this paper, we address the problem of inferring the layout of complex road scenes given a single camera as input. To achieve that, we first propose a novel parameterized model of road layouts in a top-view representation, which is not only intuitive for human visualization but also provides an interpretable interface for higher-level decision making. Moreover, the design of our top-view scene model allows for efficient sampling and thus generation of large-scale simulated data, which we leverage to train a deep neural network to infer our scene model's parameters. Specifically, our proposed training procedure uses supervised domain-adaptation techniques to incorporate both simulated as well as manually annotated data. Finally, we design a Conditional Random Field (CRF) that enforces coherent predictions for a single frame and encourages temporal smoothness among video frames. Experiments on two public datasets show that: (1) our parametric top-view model is representative enough to describe complex road scenes; (2) the proposed method outperforms baselines trained on manually annotated or simulated data only, thus getting the best of both; (3) our CRF is able to generate temporally smooth yet semantically meaningful results.
Conformal Group Actions on Generalized Kuramoto Oscillators ; This paper unifies the recent results on generalized Kuramoto model reductions. Lohe took a coupled system of N bodies on $S^d$ governed by the Kuramoto equations $\dot{x}_i = \Omega x_i + X - \langle x_i, X\rangle x_i$ and used the method of Watanabe and Strogatz to reduce this system to $d + \frac{d(d-1)}{2}$ equations. Using a model of rigid rotations on a sphere as a guide, we show that the reduction is described by a smooth path in the Lie group of conformal transformations on the sphere, which is diffeomorphic to $SO(d) \times D^d$. Seeing the reduction this way allows us to apply geometric and topological reasoning in order to understand the qualitative behavior of the Kuramoto model.
Heavy neutral leptons and high-intensity observables ; New Physics models in which the Standard Model particle content is enlarged via the addition of sterile fermions remain among the most minimal and yet most appealing constructions, particularly since these states are present as building blocks of numerous mechanisms of neutrino mass generation. Should the new sterile states have non-negligible mixings with the active light neutrinos, and if they are not excessively heavy, one expects important contributions to numerous high-intensity observables, among them charged lepton flavour violating muon decays and transitions, and lepton electric dipole moments. We briefly review the prospects of these minimal SM extensions for several of the latter observables, considering both simple extensions and complete models of neutrino mass generation. We emphasise the existing synergy between different observables at the Intensity Frontier, which will be crucial in unveiling the new model at work.
Random Utility and Limited Consideration ; The random utility model (RUM; McFadden and Richter, 1990) has been the standard tool to describe the behavior of a population of decision makers. RUM assumes that decision makers behave as if they maximize a rational preference over a choice set. This assumption may fail when consideration of all alternatives is costly. We provide a theoretical and statistical framework that unifies well-known models of random limited consideration and generalizes them to allow for preference heterogeneity. We apply this methodology to a novel stochastic choice dataset that we collected in a large-scale online experiment. Our dataset is unique since it exhibits both choice-set and attention-frame variation. We run a statistical survival race between competing models of random consideration and RUM. We find that RUM cannot explain the population behavior. In contrast, we cannot reject the hypothesis that decision makers behave according to the logit attention model (Brady and Rehbeck, 2016).
Branching interlacements and tree-indexed random walks in tori ; We introduce a model of branching interlacements for general critical offspring distributions. It consists of a countable collection of infinite tree-indexed random walk trajectories on $\mathbb{Z}^d$, $d \geq 5$. We show that this model turns out to be the local limit of the tree-indexed random walk in a discrete torus, conditioned on the size being proportional to the volume of the torus. This generalizes previous results of Angel, Ráth, and the author for the critical geometric offspring distribution. Our model also includes the model of random interlacements introduced by Sznitman as a degenerate case. To obtain the local convergence, we establish results on decomposing large random trees into small trees, on local limits of random trees around a prefixed vertex, and on asymptotics of the visiting probability of a set by a tree-indexed random walk of a given size in a torus. These auxiliary results are interesting in themselves. As another application, we show that when $d \geq 5$ the cover time of a $d$-dimensional torus of side length $N$ by tree-indexed random walks is concentrated at $N^d \log N^d / \mathrm{BCap}(0)$.
Microscopic mechanism for higher-spin Kitaev model ; The spin $S=\frac{1}{2}$ Kitaev honeycomb model has attracted significant attention, since emerging candidate materials have provided a playground to test non-Abelian anyons. The Kitaev model with higher spins has also been theoretically studied, as it may offer another path to a quantum spin liquid. However, a microscopic route to achieve higher-spin Kitaev models in solid-state materials has not been rigorously derived. Here we present a theory of the spin $S=1$ Kitaev interaction in two-dimensional edge-shared octahedral systems. Essential ingredients are strong spin-orbit coupling in the anions and strong Hund's coupling in the transition metal cations. The $S=1$ Kitaev and ferromagnetic Heisenberg interactions are generated from superexchange paths. Taking into account the antiferromagnetic Heisenberg term from direct-exchange paths, the Kitaev interaction dominates the physics of the $S=1$ system. Using the exact diagonalization technique, we show a finite regime of the $S=1$ spin liquid in the presence of the Heisenberg interaction. Candidate materials are proposed, and the generalization to higher spins is discussed.
Compressed Convolutional LSTM: An Efficient Deep Learning Framework to Model High Fidelity 3D Turbulence ; High-fidelity modeling of turbulent flows is one of the major challenges in computational physics, with diverse applications in engineering, earth sciences and astrophysics, among many others. The rising popularity of high-fidelity computational fluid dynamics (CFD) techniques like direct numerical simulation (DNS) and large eddy simulation (LES) has made significant inroads into the problem. However, they remain out of reach for many practical three-dimensional flows characterized by extremely large domains and transient phenomena. Therefore, designing efficient and accurate data-driven generative approaches to model turbulence is a necessity. We propose a novel training approach for dimensionality reduction and spatio-temporal modeling of the three-dimensional dynamics of turbulence using a combination of a convolutional autoencoder and convolutional LSTM neural networks. The quality of the emulated turbulent fields is assessed with rigorous physics-based statistical tests, instead of visual assessments. The results show significant promise in the training methodology to generate physically consistent turbulent flows at a small fraction of the computing resources required for DNS.
Multi-Higgs-Doublet Models and Singular Alignment ; We consider a four-Higgs-doublet model in which each Higgs doublet gives mass to one of the fermion sets $\{m_t\}$, $\{m_b, m_\tau, m_c\}$, $\{m_\mu, m_s\}$, and $\{m_d, m_u, m_e\}$. The sets have the feature that within each of them the masses are similar. Our model explains the mass hierarchies of the sets by hierarchies of the vacuum expectation values of the Higgs doublets associated with them. All Yukawa couplings are therefore of order one. Neutrino masses are generated by a type-I seesaw mechanism with PeV-scale singlet neutrinos. To avoid the appearance of tree-level flavor-changing neutral currents, we assume that all Yukawa matrices are singularly aligned in flavor space. We mean by this that the Yukawa matrices are given as linear combinations of the rank-1 matrices that appear in the singular value decomposition of the mass matrix. In general, singular alignment allows one to avoid flavor-changing neutral currents in models with multiple Higgs doublets.
Interacting bosons in generalized zigzag and railroad-trestle models ; We theoretically study the ground-state phase diagram of strongly interacting bosons on a generalized zigzag ladder, the railroad-trestle (RRT) model. By means of analytical arguments in the limits of decoupled chains and of vanishing filling, as well as extensive DMRG calculations, we examine the rich interplay between frustration and interaction in various parameter regimes. We distinguish three different cases: the fully frustrated RRT model, where the dispersion relation becomes doubly degenerate and an extensive chiral superfluid regime is found; the antisymmetric RRT, with alternating $\pi$ and $0$ fluxes through the ladder plaquettes; and the sawtooth limit, which is closely related to the latter case. We present detailed phase diagrams which include, besides different single-component superfluids, the chiral superfluid phases, two-component superfluids, and different gapped phases with dimer and charge-density-wave order.
Mean Field Equilibrium: Uniqueness, Existence, and Comparative Statics ; The standard solution concept for stochastic games is Markov perfect equilibrium (MPE); however, its computation becomes intractable as the number of players increases. Instead, we consider mean field equilibrium (MFE), which has been popularized in the recent literature. MFE takes advantage of averaging effects in models with a large number of players. We make three main contributions. First, our main result provides conditions that ensure the uniqueness of an MFE. We believe this uniqueness result is the first of its kind in the class of models we study. Second, we generalize previous MFE existence results. Third, we provide general comparative statics results. We apply our results to dynamic oligopoly models and to heterogeneous agent macroeconomic models commonly used in previous work in economics and operations.
Matter Growth in Imperfect Fluid Cosmology ; Extensions of Einstein's General Relativity (GR) can formally be given a GR structure in which additional geometric degrees of freedom are mapped onto an effective energy-momentum tensor. The corresponding effective cosmic medium can then be modeled as an imperfect fluid within GR. The imperfect fluid structure allows us to include, on a phenomenological basis, anisotropic stresses and energy fluxes, which are considered as potential signatures of deviations from the cosmological standard $\Lambda$-cold-dark-matter ($\Lambda$CDM) model. As an example, we consider the dynamics of a scalar-tensor extension of the standard model, the $e\Phi\Lambda$CDM model. We constrain the magnitudes of the anisotropic pressure and the energy flux with the help of redshift-space distortion (RSD) data for the matter growth function $f\sigma_8$.
The fast scalar auxiliary variable approach with unconditional energy stability for nonlocal Cahn-Hilliard equation ; Compared with classical local gradient flow and phase field models, nonlocal models such as the nonlocal Cahn-Hilliard equation, equipped with a nonlocal diffusion operator, can describe more practical phenomena for modeling phase transitions. In this paper, we construct an accurate and efficient scalar auxiliary variable approach for the nonlocal Cahn-Hilliard equation with a general nonlinear potential. The first contribution is a careful and rigorous proof of the unconditional energy stability of the nonlocal Cahn-Hilliard model and its semi-discrete schemes. Secondly, the nonlocality of the nonlocal diffusion term causes the stiffness matrix to be almost full, which entails a huge computational cost and memory requirement. For spatial discretization by the finite difference method, we find that, after applying four transformation operators, the discretization of the nonlocal operator leads to a block-Toeplitz-Toeplitz-block (BTTB) matrix. Based on this special structure, we present a fast procedure to reduce the computational work and memory requirement. Finally, several numerical simulations are presented to verify the accuracy and efficiency of our proposed schemes.
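The fast procedure rests on the fact that Toeplitz-structured matrices admit FFT-based matrix-vector products via circulant embedding. The one-dimensional sketch below illustrates the principle that, applied level-wise, makes BTTB products cheap; it is a generic illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(first_col, first_row, x):
    """Multiply an n x n Toeplitz matrix by a vector in O(n log n) via circulant embedding.

    The Toeplitz matrix is embedded into a 2n x 2n circulant matrix, which the FFT
    diagonalizes; the same idea, applied level by level, yields fast products with
    block-Toeplitz-Toeplitz-block (BTTB) matrices.
    """
    n = len(x)
    # First column of the embedding circulant: [t_0, ..., t_{n-1}, 0, t_{-(n-1)}, ..., t_{-1}]
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against a dense Toeplitz product.
col = np.random.randn(6); row = np.random.randn(6); row[0] = col[0]
x = np.random.randn(6)
print(np.allclose(toeplitz(col, row) @ x, toeplitz_matvec(col, row, x)))  # True
```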
Machine learning acceleration of simulations of Stokesian suspensions ; Particulate Stokesian flows describe the hydrodynamics of rigid or deformable particles in Stokes flows. Due to highly nonlinear fluid-structure interaction dynamics, moving interfaces, and multiple scales, numerical simulations of such flows are challenging and expensive. In this Letter, we propose a generic machine-learning-augmented reduced model for these flows. Our model replaces expensive parts of a numerical scheme with multilayer perceptrons. Given the physical parameters of the particle, our model generalizes to arbitrary geometries and boundary conditions without the need to retrain the regression function. It is 10 times faster than a state-of-the-art numerical scheme having the same number of degrees of freedom and can reproduce several features of the flow quite accurately. We illustrate the performance of our model on an integral equation formulation of vesicle suspensions in two dimensions.
Diagnosing and Enhancing VAE Models ; Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. In particular, it is commonly believed that Gaussian encoder/decoder assumptions reduce the effectiveness of VAEs in generating realistic samples. In this regard, we rigorously analyze the VAE objective, differentiating situations where this belief is and is not actually true. We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning. Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the original VAE architecture. A shorter version of this work will appear in the ICLR 2019 conference proceedings (Dai and Wipf, 2019). The code for our model is available at https://github.com/daib13/TwoStageVAE.
What could reinfection tell us about R0? A modeling case study of syphilis transmission ; Many infectious diseases can lead to reinfection. We examined the relationship between the prevalence of repeat infection and the basic reproductive number $R_0$. First, we solved a generic, deterministic compartmental model of reinfection to derive an analytic solution for this relationship. We then numerically solved a disease-specific model of syphilis transmission that explicitly tracked reinfection. We derived a generic expression that reflects a nonlinear and monotonically increasing relationship between the proportion of reinfections and $R_0$, which is attenuated by entry/exit rates and recovery (i.e., treatment). Numerical simulations from the syphilis model aligned with the analytic relationship. Reinfection proportions could be used to understand how far regions are from epidemic control, and should be included as a routine indicator in infectious disease surveillance.
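As a hedged illustration of the kind of generic compartmental model referred to above, the following SIS-type system with demography tracks first infections and reinfections separately, so that the equilibrium share of reinfections can be read off for a given $R_0$. Parameter values are arbitrary placeholders, not the syphilis-specific estimates of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reinfection_model(t, y, beta, gamma, mu):
    """SIS-type model with demography; (S1, I1) have never been infected before,
    (S2, I2) have, so I2 counts currently prevalent reinfections."""
    S1, I1, S2, I2 = y
    N = S1 + I1 + S2 + I2
    foi = beta * (I1 + I2) / N            # force of infection
    dS1 = mu * N - foi * S1 - mu * S1     # new entrants arrive never-infected
    dI1 = foi * S1 - (gamma + mu) * I1
    dS2 = gamma * (I1 + I2) - foi * S2 - mu * S2
    dI2 = foi * S2 - (gamma + mu) * I2
    return [dS1, dI1, dS2, dI2]

# Placeholder rates (per month): transmission, recovery via treatment, entry/exit.
beta, gamma, mu = 0.5, 1 / 6, 1 / 420
R0 = beta / (gamma + mu)
sol = solve_ivp(reinfection_model, (0, 2400), [0.99, 0.01, 0.0, 0.0],
                args=(beta, gamma, mu))
S1, I1, S2, I2 = sol.y[:, -1]
print(f"R0 = {R0:.2f}, reinfection share of prevalent infections = {I2 / (I1 + I2):.2f}")
```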
Stripe: Tensor Compilation via the Nested Polyhedral Model ; Hardware architectures and machine learning (ML) libraries evolve rapidly. Traditional compilers often fail to generate high-performance code across the spectrum of new hardware offerings. To mitigate this, engineers develop hand-tuned kernels for each ML library update and hardware upgrade. Unfortunately, this approach requires excessive engineering effort to scale or maintain with any degree of state-of-the-art performance. Here we present a nested polyhedral model for representing highly parallelizable computations with limited dependencies between iterations. This model provides an underlying framework for an intermediate representation (IR) called Stripe, amenable to standard compiler techniques while naturally modeling key aspects of modern ML computing. Stripe represents parallelism, efficient memory layout, and multiple compute units at a level of abstraction amenable to automatic optimization. We describe how Stripe enables a compiler for ML in the style of LLVM that allows independent development of algorithms, optimizations, and hardware accelerators. We also discuss the design exploration advantages of Stripe over kernel libraries and schedule-based or schedule-space-based code generation.
An end-to-end Neural Network Framework for Text Clustering ; Unsupervised text clustering is one of the major tasks in natural language processing (NLP) and remains a difficult and complex problem. Conventional methods generally treat this task with separate steps: learning text representations and then clustering the representations. As an improvement, neural methods have been introduced for continuous representation learning to address the sparsity problem. However, the multi-step process still deviates from a unified optimization target; in particular, the second step, clustering, is generally performed with conventional methods such as k-Means. We propose a pure neural framework for text clustering in an end-to-end manner. It jointly learns the text representation and the clustering model. Our model works well when the context can be obtained, which is nearly always the case in the field of NLP. We evaluate our method on two widely used benchmarks: IMDB movie reviews for sentiment classification and 20 Newsgroups for topic categorization. Despite its simplicity, experiments show that the model outperforms previous clustering methods by a large margin. Furthermore, the model is also verified on an English Wikipedia dataset as a large corpus.
On bifurcation of the four Liouville tori in one generalized integrable model of vortex dynamics ; The article deals with a generalized mathematical model of the dynamics of two point vortices in a Bose-Einstein condensate enclosed in a harmonic trap, and of the dynamics of two point vortices in an ideal fluid bounded by a circular region. In the case of a positive vortex pair, which is of interest for physical experimental applications, a new bifurcation diagram is obtained, for which the bifurcation of four tori into one is indicated. The presence of bifurcations of three and four tori in the integrable model of vortex dynamics with positive intensities indicates a complex transition and a connection between the bifurcation diagrams of both limit cases. The analytical results of this publication (the bifurcation diagram, the reduction to a system with one degree of freedom, and the stability analysis) form the basis of a computer simulation of the absolute dynamics of the vortices in a fixed coordinate system for arbitrary values of the physical parameters of the model (the intensities, the vortex interaction, etc.).
On the use of Deep Autoencoders for Efficient Embedded Reinforcement Learning ; In autonomous embedded systems, it is often vital to reduce the number of actions taken in the real world and the energy required to learn a policy. Training reinforcement learning agents from high-dimensional image representations can be very expensive and time consuming. Autoencoders are deep neural networks used to compress high-dimensional data, such as pixelated images, into small latent representations. This compression model is vital to efficiently learn policies, especially when learning on embedded systems. We have implemented this model on the NVIDIA Jetson TX2 embedded GPU and evaluated the power consumption, throughput, and energy consumption of the autoencoders for various CPU/GPU core combinations, frequencies, and model parameters. Additionally, we have shown the reconstructions generated by the autoencoder to analyze the quality of the generated compressed representation and also the performance of the reinforcement learning agent. Finally, we have presented an assessment of the viability of training these models on embedded systems and their usefulness in developing autonomous policies. Using autoencoders, we achieved 4-5 times improved performance compared to a baseline RL agent with a convolutional feature extractor, while using less than 2W of power.
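A small convolutional autoencoder of the kind used for such pixel-to-latent compression might look as follows; the layer sizes, input resolution, and latent dimension are illustrative assumptions, not the configuration reported for the Jetson TX2 experiments.

```python
import torch
import torch.nn as nn

class PixelAutoencoder(nn.Module):
    """Compresses 84x84 grayscale frames into a low-dimensional latent code
    that can serve as the RL agent's state representation."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 84 -> 42
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 42 -> 21
            nn.Flatten(),
            nn.Linear(32 * 21 * 21, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 21 * 21), nn.ReLU(),
            nn.Unflatten(1, (32, 21, 21)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 21 -> 42
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 42 -> 84
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = PixelAutoencoder()
frames = torch.rand(8, 1, 84, 84)
recon, latent = model(frames)
print(recon.shape, latent.shape)   # torch.Size([8, 1, 84, 84]) torch.Size([8, 32])
```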
Light-cone continuous-spin field in AdS space ; We develop further the general light-cone gauge approach in AdS space and apply it to studying continuous-spin fields. For such fields, we find the light-cone gauge Lagrangian and a realization of the relativistic symmetries. We find a simple realization of the spin operators entering our approach. A generalization of our results to the gauge invariant Lagrangian formulation is also described. We conjecture that, in the framework of AdS/CFT, the continuous-spin AdS field is dual to a light-ray conformal operator. For some particular cases, our continuous-spin field leads to reducible models. We note two reducible models. The first model consists of a massive scalar, a massless vector, and a partial continuous-spin field involving fields of all spins greater than one, while the second model consists of a massive vector, a massless spin-2 field, and a partial continuous-spin field involving all fields of spins greater than two.
Learning Quadrangulated Patches for 3D Shape Processing ; We propose a system for surface completion and inpainting of 3D shapes using generative models learnt on local patches. Our method uses a novel encoding of height-map-based local patches, parameterized using a 3D mesh quadrangulation of the low-resolution input shape. This provides us with a sufficient number of local 3D patches to learn a generative model for the task of repairing moderately sized holes. Following ideas from the recent progress in 2D inpainting, we investigate both a linear dictionary-based model and a convolutional denoising-autoencoder-based model for the inpainting task, and show our results to be better than the previous geometry-based method of surface inpainting. We validate our method on both synthetic shapes and real-world scans.
S-matrix absolute optimization method for a perfect vertical waveguide grating coupler ; Vertical coupling using a diffraction grating is a convenient way to couple light into an optical waveguide. Several optimization approaches have been suggested in order to design such a coupler; however, most of them are implemented using algorithm-based modelling. In this paper, we suggest an intuitive method based on the S-matrix formalism for analytically optimizing 3-port vertical grating coupler devices. The suggested method is general and can be applied to any 3-port coupler device in order to achieve an optimal design based on user constraints. The simplicity of the model allows reduction of the optimization to two variables and the location of an absolute optimal operation point in a 2D contour map. Accordingly, in an ideal device, near 100 percent coupling efficiency and insignificant return loss could be achieved. Our model results show good agreement with numerical FDTD simulations and can predict the general tendencies and sensitivities of the device behavior to changes in design parameters. We further apply our model to a previously reported high-contrast unidirectional grating coupler device and show that an additional improvement in the coupling efficiency is achievable for that layout.
A quantum algorithm for evolving open quantum dynamics on quantum computing devices ; Designing quantum algorithms for simulating quantum systems has seen enormous progress, yet few studies have been done to develop quantum algorithms for open quantum dynamics, despite its importance in modeling the system-environment interaction found in most realistic physical models. In this work we propose and demonstrate a general quantum algorithm to evolve open quantum dynamics on quantum computing devices. The Kraus operators governing the time evolution can be converted into unitary matrices with minimal dilation, guaranteed by the Sz.-Nagy theorem. This allows the evolution of the initial state through unitary quantum gates, while using significantly fewer resources than required by the conventional Stinespring dilation. We demonstrate the algorithm on an amplitude damping channel using the IBM Qiskit quantum simulator and the IBM Q 5 Tenerife quantum device. The proposed algorithm does not require particular models of dynamics or decomposition of the quantum channel, and thus can be easily generalized to other open quantum dynamical models.
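The dilation step can be illustrated for a single Kraus operator: any contraction K extends to a unitary acting on one extra ancilla dimension via the Sz.-Nagy construction. The sketch below builds that unitary for an amplitude-damping Kraus operator and checks unitarity; it is a minimal illustration of the construction, not the authors' full circuit-synthesis procedure.

```python
import numpy as np
from scipy.linalg import sqrtm

def sz_nagy_dilation(K):
    """One-step unitary dilation of a contraction K (a single Kraus operator, ||K|| <= 1).

    U = [[K, D'], [D, -K^dag]] with D = sqrt(I - K^dag K) and D' = sqrt(I - K K^dag)
    is unitary; applied to a state with the ancilla initialized in |0>, the
    ancilla-|0> branch of the output carries K acting on the system, and the
    ancilla-|1> branch carries D acting on the system.
    """
    n = K.shape[0]
    D = sqrtm(np.eye(n) - K.conj().T @ K)
    Dp = sqrtm(np.eye(n) - K @ K.conj().T)
    return np.block([[K, Dp], [D, -K.conj().T]])

# Amplitude-damping Kraus operator K0 = [[1, 0], [0, sqrt(1 - gamma)]].
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
U = sz_nagy_dilation(K0)
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
```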
Efficient GAN-based method for cyber-intrusion detection ; Ubiquitous anomalies endanger the security of our systems constantly. They may bring irreversible damage to the system and cause leakage of privacy. Thus, it is of vital importance to promptly detect these anomalies. Traditional supervised methods such as decision trees and support vector machines (SVMs) are used to classify normality and abnormality. However, in some cases abnormal samples are far rarer than normal ones, which leads to decision bias in these methods. Generative adversarial networks (GANs) have been proposed to handle this case. With their strong generative ability, they only need to learn the distribution of normal samples and identify abnormal ones through the gap between them and the learned distribution. Nevertheless, existing GAN-based models are not suitable for processing data with discrete values, leading to immense degradation of detection performance. To cope with discrete features, in this paper, we propose an efficient GAN-based model with a specifically designed loss function. Experimental results show that our model outperforms state-of-the-art models on a discrete dataset and remarkably reduces the overhead.
Late-time acceleration and inflation in a Poincaré gauge cosmological model ; A self-consistent model which can unify Starobinsky inflation and the $\Lambda$CDM model in the framework of Poincaré gauge cosmology (PGC) is studied in this work, without an extra inflaton or "dark energy". We start from the general nine-parameter PGC Lagrangian and obtain two Friedmann-like analytical solutions under certain ghost- and tachyon-free conditions on the Lagrangian parameters. The solution determined by the scalar torsion mode $h$ is consistent with Starobinsky cosmology at early times, and the solution determined by the pseudoscalar torsion mode $f$ naturally contains a constant "dark energy" density, $3\alpha/(128 B_0)$, which recovers the $\Lambda$CDM model at late times. According to the latest observations, we estimate the magnitudes of the parameters to be $\alpha_1 \simeq 8.07\times 10^{56}$ and $B_0 \simeq 5.76\times 10^{28}\,\mathrm{GeV}^2$.