Granular Learning with Deep Generative Models using Highly Contaminated Data ; An approach to utilize recent advances in deep generative models for anomaly detection, in a granular continuous sense, on a real-world image dataset with quality issues is detailed using recent normalizing flow models, with implications for many other application domains and data types. The approach is completely unsupervised (no annotations available) but is qualitatively shown to provide accurate semantic labeling for images via heatmaps of the scaled log-likelihood overlaid on the images. When sorted based on the median values per image, clear trends in quality are observed. Furthermore, downstream classification is shown to be possible and effective via a weakly supervised approach using the log-likelihood output from a normalizing flow model as a training signal for a feature-extracting convolutional neural network. The pre-linear dense layer outputs of the CNN are shown to disentangle high-level representations and efficiently cluster various quality issues. Thus, an entirely non-annotated, fully unsupervised approach is shown to be possible for accurate estimation and classification of quality issues.
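The per-image ranking step described above can be sketched in a few lines. The flow model, the images, and the per-pixel log-likelihood maps are hypothetical stand-ins (random arrays), since the abstract gives no implementation details:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel log-likelihood maps from a trained normalizing flow,
# one (H, W) array per image; here replaced by random stand-ins whose mean
# log-likelihood differs per image.
loglik_maps = [rng.normal(loc=-mu, scale=1.0, size=(8, 8)) for mu in (1.0, 5.0, 3.0)]

# Rank images by the median of their per-pixel log-likelihoods: a low median
# log-likelihood suggests a more anomalous image, i.e. worse quality.
medians = [float(np.median(m)) for m in loglik_maps]
order = np.argsort(medians)  # most anomalous (lowest median) first
print(order)
```

Sorting by the median rather than the mean makes the ranking robust to a few extreme per-pixel values within an otherwise normal image.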
Invertible Generative Modeling using Linear Rational Splines ; Normalizing flows attempt to model an arbitrary probability distribution through a set of invertible mappings. These transformations are required to achieve a tractable Jacobian determinant that can be used in high-dimensional scenarios. The first normalizing flow designs used coupling-layer mappings built upon affine transformations. The significant advantage of such models is their easy-to-compute inverse. Nevertheless, making use of affine transformations may limit the expressiveness of such models. Recently, invertible piecewise polynomial functions as a replacement for affine transformations have attracted attention. However, these methods require solving a polynomial equation to calculate their inverse. In this paper, we explore using linear rational splines as a replacement for the affine transformations used in coupling layers. Besides having a straightforward inverse, inference and generation have similar cost and architecture in this method. Moreover, simulation results demonstrate the competitiveness of this approach's performance compared to existing methods.
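The closed-form inverse that motivates linear rational splines can be illustrated with a single monotone linear-rational segment. A full spline stitches many such segments together with continuity and monotonicity constraints, which this sketch omits; the coefficients below are arbitrary illustrative values:

```python
def rational_linear(x, a, b, c, d):
    """One linear-rational segment y = (a x + b) / (c x + d).
    Monotone increasing wherever a*d - b*c > 0 and c*x + d > 0."""
    return (a * x + b) / (c * x + d)

def rational_linear_inv(y, a, b, c, d):
    # Solving y * (c x + d) = a x + b for x gives a closed-form inverse,
    # unlike polynomial splines, which need a root-finding step.
    return (d * y - b) / (a - c * y)

a, b, c, d = 2.0, 1.0, 1.0, 3.0   # a*d - b*c = 5 > 0 -> monotone on x > -3
x = 0.7
y = rational_linear(x, a, b, c, d)
print(rational_linear_inv(y, a, b, c, d))  # recovers 0.7 (up to float error)
```

This is the "straightforward inverse" property: both directions of the map cost a handful of arithmetic operations, so inference and generation share the same structure.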
Proton Decay and Axion Dark Matter in SO(10) Grand Unification via Minimal Left-Right Symmetry ; We study the proton lifetime in the SO(10) Grand Unified Theory (GUT), which has the left-right (LR) symmetric gauge theory below the GUT scale. In particular, we focus on the minimal model without the bi-doublet Higgs field in the LR symmetric model, which predicts the LR-breaking scale at around 10^{10-12} GeV. The Wilson coefficients of the proton decay operators turn out to be considerably larger than those in the minimal SU(5) GUT model, especially when the Standard Model Yukawa interactions are generated by integrating out extra vector-like multiplets. As a result, we find that the proton lifetime can be within the reach of the Hyper-Kamiokande experiment even when the GUT gauge boson mass is in the 10^{16-17} GeV range. We also show that the mass of the extra vector-like multiplets can be generated by the Peccei-Quinn symmetry breaking in a way consistent with the axion dark matter scenario.
Learning Style-Aware Symbolic Music Representations by Adversarial Autoencoders ; We address the challenging open problem of learning an effective latent space for symbolic music data in generative music modeling. We focus on leveraging adversarial regularization as a flexible and natural means to imbue variational autoencoders with context information concerning music genre and style. Throughout the paper, we show how Gaussian mixtures taking into account music metadata information can be used as an effective prior for the autoencoder latent space, introducing the first Music Adversarial Autoencoder (MusAE). The empirical analysis on a large-scale benchmark shows that our model has a higher reconstruction accuracy than state-of-the-art models based on standard variational autoencoders. It is also able to create realistic interpolations between two musical sequences, smoothly changing the dynamics of the different tracks. Experiments show that the model can organise its latent space according to low-level properties of the musical pieces, as well as embed into the latent variables the high-level genre information injected from the prior distribution to increase its overall performance. This allows us to perform changes to the generated pieces in a principled way.
Insertion-Deletion Transformer ; We propose the Insertion-Deletion Transformer, a novel transformer-based neural architecture and training method for sequence generation. The model consists of two phases that are executed iteratively: (1) an insertion phase and (2) a deletion phase. The insertion phase parameterizes a distribution of insertions on the current output hypothesis, while the deletion phase parameterizes a distribution of deletions over the current output hypothesis. The training method is a principled and simple algorithm, where the deletion model obtains its signal directly on-policy from the insertion model output. We demonstrate the effectiveness of our Insertion-Deletion Transformer on synthetic translation tasks, obtaining significant BLEU score improvements over an insertion-only model.
The viable f(G) gravity models via reconstruction from the observations ; We reconstruct the viable f(G) gravity models from the observations and provide the analytic solutions that well describe our numerical results. In order to avoid unphysical challenges that occur during the numerical reconstruction, we generalize f(G) models into f(G, A), a simple extension of f(G) models with the introduction of a constant parameter A. We employ several observational data sets together with the stability condition, which reads d^2 f/dG^2 > 0 and must be satisfied in the late-time evolution of the universe, to give proper initial conditions for solving the perturbation equation. As a result, we obtain analytic functions that match the numerical solutions. Furthermore, it might be interesting if one can find the physical origin of those analytic solutions and their cosmological implications.
Kingman's model with random mutation probabilities: convergence and condensation I ; For a one-locus haploid infinite population with discrete generations, the celebrated Kingman's model describes the evolution of fitness distributions under the competition of selection and mutation, with a constant mutation probability. Letting mutation probabilities vary over generations reflects the influence of a random environment. This paper generalises Kingman's model by using a sequence of i.i.d. random mutation probabilities. For any distribution of the sequence, the weak convergence of fitness distributions to the globally stable equilibrium is proved for any initial fitness distribution. We define condensation in the random model as the event that, almost surely, a positive proportion of the population travels to and condenses on the largest fitness value. Condensation may occur when selection is more favoured than mutation. A criterion is given to tell whether condensation occurs or not.
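A discretized simulation of the recursion is straightforward: each generation, the distribution is reweighted by fitness (selection) and mixed with a fixed mutant distribution with probability b_n. The mutation-probability distribution, grid size, and uniform mutant distribution q below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized fitness values on [0, 1] and a uniform mutant distribution q.
x = np.linspace(0.0, 1.0, 101)
q = np.full_like(x, 1.0 / x.size)

p = q.copy()                       # initial fitness distribution
for _ in range(500):
    b = rng.uniform(0.05, 0.2)     # i.i.d. random mutation probability b_n
    selected = x * p
    selected /= selected.sum()     # selection reweights mass by fitness
    p = (1.0 - b) * selected + b * q   # Kingman recursion with random b_n

print(p.sum())        # still a probability vector (approximately 1)
print(x[p.argmax()])  # mass accumulates toward the largest fitness value
```

With the parameters above the stationary distribution remains absolutely continuous; condensation, as defined in the abstract, concerns a point mass forming at the top fitness value when mutation is weak relative to selection.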
Mobility Inference on Long-Tailed Sparse Trajectory ; Analyzing urban trajectories in cities has become an important topic in data mining. How can we model human mobility, consisting of stay and travel, from raw trajectory data? How can we infer such a mobility model from single-trajectory information? How can we further generalize the mobility inference to accommodate real-world trajectory data that is sparsely sampled over time? In this paper, based on formal and rigorous definitions of stay-travel mobility, we propose a single-trajectory inference algorithm that utilizes a generic long-tailed sparsity pattern in large-scale trajectory data. The algorithm guarantees 100% precision in the stay-travel inference with a provable lower bound on the recall. Furthermore, we introduce an encoder-decoder learning architecture that admits multiple trajectories as inputs. The architecture is optimized for the mobility inference problem through customized embedding and learning mechanisms. Evaluations with three trajectory data sets of 40 million urban users validate the performance guarantees of the proposed inference algorithm and demonstrate the superiority of our deep learning model in comparison to well-known sequence learning methods. On extremely sparse trajectories, the deep learning model achieves a 2× overall accuracy improvement over the single-trajectory inference algorithm, through proven scalability and generalizability to large-scale versatile training data.
Adversarial Attack on Community Detection by Hiding Individuals ; It has been demonstrated that adversarial graphs, i.e., graphs with imperceptible perturbations added, can cause deep graph models to fail on node/graph classification tasks. In this paper, we extend adversarial graphs to the problem of community detection, which is much more difficult. We focus on black-box attacks and aim to hide targeted individuals from the detection of deep graph community detection models, which has many applications in real-world scenarios, for example, protecting personal privacy in social networks and understanding camouflage patterns in transaction networks. We propose an iterative learning framework that takes turns updating two modules: one working as the constrained graph generator and the other as the surrogate community detection model. We also find that the adversarial graphs generated by our method can be transferred to other learning-based community detection models.
Standard Model Meets Gravity: Electroweak Symmetry Breaking and Inflation ; We propose a model for combining the Standard Model (SM) with gravity. It relies on a non-minimal coupling of the Higgs field to the Ricci scalar and on the Palatini formulation of gravity. Without introducing any new degrees of freedom in addition to those of the SM and the graviton, this scenario achieves two goals. First, it generates the electroweak symmetry breaking by a non-perturbative gravitational effect. In this way, it not only addresses the hierarchy problem but opens up the possibility to calculate the Higgs mass. Second, the model incorporates inflation at energies below the onset of strong coupling of the theory. Provided that corrections due to new physics above the scale of inflation are not unnaturally large, we can relate inflationary parameters to data from collider experiments.
Planning for the Unexpected: Explicitly Optimizing Motions for Ground Uncertainty in Running ; We propose a method to generate actuation plans for a reduced-order, dynamic model of bipedal running. This method explicitly enforces robustness to ground uncertainty. The plan generated is not a fixed body trajectory that is aggressively stabilized; instead, the plan interacts with the passive dynamics of the reduced-order model to create emergent robustness. The goal is to create plans for legged robots that will be robust to imperfect perception of the environment, and to work with dynamics that are too complex to optimize in real time. Working within this dynamic model of legged locomotion, we optimize a set of disturbance cases together with the nominal case, all with linked inputs. The input linking is non-trivial due to the hybrid dynamics of the running model, but our solution is effective and has analytical gradients. The proposed optimization procedure is significantly slower than a standard trajectory optimization, but results in robust gaits that reject disturbances extremely effectively without any replanning required.
Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs ; Recent graph-to-text models generate text from graph-based data using either global or local aggregation to learn node representations. Global node encoding allows explicit communication between two distant nodes, but thereby neglects graph topology, as all nodes are directly connected. In contrast, local node encoding considers the relations between neighboring nodes, capturing the graph structure, but it can fail to capture long-range relations. In this work, we gather both encoding strategies, proposing novel neural models that encode an input graph combining both global and local node contexts, in order to learn better contextualized node embeddings. In our experiments, we demonstrate that our approaches lead to significant improvements on two graph-to-text datasets, achieving BLEU scores of 18.01 on the AGENDA dataset and 63.69 on the WebNLG dataset for seen categories, outperforming state-of-the-art models by 3.7 and 3.1 points, respectively.
Pseudo-Bidirectional Decoding for Local Sequence Transduction ; Local sequence transduction (LST) tasks are sequence transduction tasks where there exists massive overlap between the source and target sequences, such as Grammatical Error Correction (GEC) and spell or OCR correction. Previous work generally tackles LST tasks with standard sequence-to-sequence (seq2seq) models that generate output tokens from left to right and suffer from the issue of unbalanced outputs. Motivated by the characteristics of LST tasks, in this paper we propose a simple but versatile approach named Pseudo-Bidirectional Decoding (PBD) for LST tasks. PBD copies the corresponding representations of source tokens to the decoder as pseudo future context, enabling the decoder to attend to its bidirectional context. In addition, the bidirectional decoding scheme and the characteristics of LST tasks motivate us to share the encoder and the decoder of seq2seq models. The proposed PBD approach provides right-side context information for the decoder and models the inductive bias of LST tasks, reducing the number of parameters by half and providing a good regularization effect. Experimental results on several benchmark datasets show that our approach consistently improves the performance of standard seq2seq models on LST tasks.
VaPar Synth: A Variational Parametric Model for Audio Synthesis ; With the advent of data-driven statistical modeling and abundant computing power, researchers are turning increasingly to deep learning for audio synthesis. These methods try to model audio signals directly in the time or frequency domain. In the interest of more flexible control over the generated sound, it could be more useful to work with a parametric representation of the signal which corresponds more directly to musical attributes such as pitch, dynamics and timbre. We present VaPar Synth, a Variational Parametric Synthesizer which utilizes a conditional variational autoencoder (CVAE) trained on a suitable parametric representation. We demonstrate our proposed model's capabilities via the reconstruction and generation of instrumental tones with flexible control over their pitch.
Gravastar under the framework of Braneworld Gravity ; Gravastars have been considered a serious alternative to black holes over the past couple of decades. Stable models of gravastars have been constructed in many alternative gravity models besides standard General Relativity (GR). The Randall-Sundrum (RS) braneworld model has been a popular alternative to GR, especially in the cosmological and astrophysical context. Here we consider a gravastar model in RS brane gravity. The mathematical solutions in different regions have been obtained, along with calculation of the matching conditions. Various important physical parameters of the shell have been calculated and plotted to show their variation with radial distance. We also calculate and plot the surface redshift to check the stability of the gravastar within the purview of RS brane gravity.
Hamilton-Jacobi approach for Regge-Teitelboim cosmology ; The Hamilton-Jacobi formalism for a geodetic brane-like universe described by the Regge-Teitelboim model is developed. We focus on the description of the complete set of Hamiltonians that ensure the integrability of the model, in addition to obtaining the Hamilton principal function S. In order to do this, we avoid the second-order-in-derivatives nature of the model by appropriately defining a set of auxiliary variables that yields a first-order Lagrangian. Being a linear-in-acceleration theory, this scheme unavoidably needs an adequate redefinition of the so-called Generalized Poisson Bracket in order to achieve the right evolution in the reduced phase space. Further, this Hamilton-Jacobi framework also enables us to explore the quantum behaviour of the model under a semiclassical approximation. A comparison with the Ostrogradski-Hamilton method for constrained systems is also provided in detail.
ObjectNet Dataset: Reanalysis and Correction ; Recently, Barbu et al. introduced a dataset called ObjectNet which includes objects in daily-life situations. They showed a dramatic performance drop of state-of-the-art object recognition models on this dataset. Due to the importance and implications of their results regarding the generalization ability of deep models, we take a second look at their findings. We highlight a major problem with their work: applying object recognizers to scenes containing multiple objects rather than to isolated objects. The latter results in around 20-30% performance gain using our code. Compared with the results reported in the ObjectNet paper, we observe that around 10-15% of the performance loss can be recovered, without any test-time data augmentation. In accordance with Barbu et al.'s conclusions, however, we also conclude that deep models suffer drastically on this dataset. Thus, we believe that ObjectNet remains a challenging dataset for testing the generalization power of models beyond the datasets on which they have been trained.
AR: Auto-Repair the Synthetic Data for Neural Machine Translation ; Compared with using only limited authentic parallel data as the training corpus, many studies have shown that incorporating synthetic parallel data, generated by back translation (BT), forward translation (FT), or self-training, into the NMT training process can significantly improve translation quality. However, as a well-known shortcoming, synthetic parallel data is noisy because it is generated by an imperfect NMT system. As a result, the improvements in translation quality brought by the synthetic parallel data are greatly diminished. In this paper, we propose a novel Auto-Repair (AR) framework to improve the quality of synthetic data. Our proposed AR model can learn the transformation from a low-quality noisy input sentence to a high-quality sentence, based on large-scale monolingual data with BT and FT techniques. The noise in synthetic parallel data is sufficiently eliminated by the proposed AR model, and the repaired synthetic parallel data can then help the NMT models achieve larger improvements. Experimental results show that our approach can effectively improve the quality of synthetic parallel data, and the NMT model trained with the repaired synthetic data achieves consistent improvements on both the WMT14 EN-DE and IWSLT14 DE-EN translation tasks.
Inexpensive Domain Adaptation of Pretrained Language Models: Case Studies on Biomedical NER and Covid-19 QA ; Domain adaptation of Pretrained Language Models (PTLMs) is typically achieved by unsupervised pretraining on target-domain text. While successful, this approach is expensive in terms of hardware, runtime and CO2 emissions. Here, we propose a cheaper alternative: we train Word2Vec on target-domain text and align the resulting word vectors with the wordpiece vectors of a general-domain PTLM. We evaluate on eight biomedical Named Entity Recognition (NER) tasks and compare against the recently proposed BioBERT model. We cover over 60% of the BioBERT-BERT F1 delta, at 5% of BioBERT's CO2 footprint and 2% of its cloud compute cost. We also show how to quickly adapt an existing general-domain Question Answering (QA) model to an emerging domain: the Covid-19 pandemic.
Towards Evaluating the Robustness of Chinese BERT Classifiers ; Recent advances in large-scale language representation models such as BERT have improved the state-of-the-art performance on many NLP tasks. Meanwhile, character-level Chinese NLP models, including BERT for Chinese, have also demonstrated that they can outperform existing models. In this paper, we show, however, that such BERT-based models are vulnerable to character-level adversarial attacks. We propose a novel Chinese char-level attack method against BERT-based classifiers. Essentially, we generate small perturbations at the character level in the embedding space to guide the character substitution procedure. Extensive experiments show that the classification accuracy on a Chinese news dataset drops from 91.8% to 0% by manipulating fewer than 2 characters on average with the proposed attack. Human evaluations also confirm that our generated Chinese adversarial examples barely affect human performance on these NLP tasks.
Classical Models of Entanglement in Monitored Random Circuits ; The evolution of entanglement entropy in quantum circuits composed of Haar-random gates and projective measurements shows versatile behavior, with connections to phase transitions and complexity theory. We reformulate the problem in terms of a classical Markov process for the dynamics of bipartition purities and establish a probabilistic cellular-automaton algorithm to compute entanglement entropy in monitored random circuits on arbitrary graphs. In one dimension, we further relate the evolution of the entropy to a simple classical spin model that naturally generalizes a two-dimensional lattice percolation problem. We also establish a Markov model for the evolution of the zeroth Rényi entropy and demonstrate that, in one dimension and in the limit of large local dimension, it coincides with the corresponding second-Rényi-entropy model. Finally, we extend the Markovian description to a more general setting that incorporates continuous-time dynamics, defined by stochastic Hamiltonians and weak local measurements continuously monitoring the system.
Building a Multi-domain Neural Machine Translation Model using Knowledge Distillation ; Lack of specialized data makes building a multi-domain neural machine translation tool challenging. Although emerging literature dealing with low-resource languages is starting to show promising results, most state-of-the-art models use millions of sentences. Today, the majority of multi-domain adaptation techniques are based on complex and sophisticated architectures that are not adapted for real-world applications. So far, no scalable method performs better than the simple yet effective mixed fine-tuning, i.e., fine-tuning a generic model with a mix of all specialized data and generic data. In this paper, we propose a new training pipeline where knowledge distillation and multiple specialized teachers allow us to efficiently fine-tune a model without adding new costs at inference time. Our experiments demonstrate that our training pipeline improves the performance of multi-domain translation over fine-tuning in configurations with 2, 3, and 4 domains, by up to 2 BLEU points.
Learning to Encode Evolutionary Knowledge for Automatic Commenting on Long Novels ; Static knowledge graphs have been incorporated extensively into sequence-to-sequence frameworks for text generation. While effectively representing structured context, static knowledge graphs fail to represent knowledge evolution, which is required for modeling dynamic events. In this paper, an automatic commenting task is proposed for long novels, which involves understanding context of more than tens of thousands of words. To model the dynamic storyline, especially the transitions of the characters and their relations, an Evolutionary Knowledge Graph (EKG) is proposed and learned within a multi-task framework. Given a specific passage to comment on, sequential modeling is used to incorporate historical and future embeddings for context representation. Further, a graph-to-sequence model is designed to utilize the EKG for comment generation. Extensive experimental results show that our EKG-based method is superior to several strong baselines on both automatic and human evaluations.
Late-time cosmological evolution in DHOST models ; We study the late cosmological evolution, from the non-relativistic matter-dominated era to the dark energy era, in modified gravity models described by Degenerate Higher-Order Scalar-Tensor (DHOST) theories. They represent the most general scalar-tensor theories propagating a single scalar degree of freedom and include Horndeski and Beyond Horndeski theories. We provide the homogeneous evolution equations for any quadratic DHOST theory, without restricting ourselves to theories where the speed of gravitational waves coincides with that of light, since the present constraints apply to wavelengths much smaller than cosmological scales. To illustrate the potential richness of the cosmological background evolution in these theories, we consider a simple family of shift-symmetric models, characterized by three parameters, and compute the evolution of dark energy and of its equation of state. We also identify the regions in parameter space where the models are perturbatively stable.
Phase Transitions of Quintessential AdS Black Holes in M-theory/Superstring Inspired Models ; We study d-dimensional AdS black holes surrounded by Dark Energy (DE), embedded in D-dimensional M-theory/superstring inspired models having AdS_d × S^{d+k} spacetime with D = 2d + k. We focus on the thermodynamic Hawking-Page phase transitions of quintessential DE black hole solutions, whose microscopical origin is linked to N coincident (d-2)-branes supposed to live in such (2d+k)-dimensional models. Interpreting the cosmological constant as the number of colors ∝ N^{(d-1)/2}, we calculate various thermodynamical quantities in terms of the brane number, entropy and DE contributions. Computing the chemical potential conjugate to the number of colors in the absence of DE, we show that a generic black hole is more stable for a larger number of branes and for lower dimensions d. In the presence of DE, we find that the DE state parameter ω_q should take particular values for (D, d, k) models, providing a non-trivial phase transition structure.
A topological model of composite preons from the minimal ideals of two Clifford algebras ; We demonstrate a direct correspondence between the basis states of the minimal ideals of the complex Clifford algebras Cℓ(6) and Cℓ(4), shown earlier to transform as a single generation of leptons and quarks under the Standard Model's unbroken SU(3)_c × U(1)_em and SU(2)_L gauge symmetries respectively, and a simple topologically based toy model in which leptons, quarks, and gauge bosons are represented as elements of the braid group B_3. It was previously shown that mapping the basis states of the minimal left ideals of Cℓ(6) to specific braids replicates precisely the simple topological structure describing electrocolor symmetries in an existing topological preon model. This paper extends these results to incorporate the chiral weak symmetry by including a Cℓ(4) algebra, and identifying the basis states of the minimal right ideals with simple braids. The braids corresponding to the charged vector bosons are determined, and it is demonstrated that weak interactions can be described via the composition of braids.
Cosmological models of generalized ghost pilgrim dark energy (GGPDE) in the gravitation theory of Saez-Ballester ; We study the mechanism of the cosmic model in the presence of GGPDE and matter in LRS Bianchi type-I spacetime by utilizing new holographic DE in Saez-Ballester theory. Here we discuss the data for three scenarios: the first is supernovae type Ia union data, the second is SN Ia data in combination with BAO and CMB observations, and the third is the combination with OHD and JLA observations. From this, we obtain a model of our universe that transits from the deceleration to the acceleration phase. We observe that the results yielded by cosmological parameters like the energy density ρ, the equation of state (EoS), the squared speed of sound v_s^2, and the ω_D-ω_D' and r-s planes are compatible with recent observational data. The ω_D-ω_D' trajectories in both thawing and freezing regions and the correspondence of the quintessence field with GGPDE are discussed. Some physical aspects of the GGPDE models are also highlighted.
Difficulty Translation in Histopathology Images ; The unique nature of histopathology images opens the door to domain-specific formulations of image translation models. We propose a difficulty translation model that modifies colorectal histopathology images to be more challenging to classify. Our model comprises a scorer, which provides an output confidence to measure the difficulty of images, and an image translator, which learns to translate images from easy-to-classify to hard-to-classify using a training set defined by the scorer. We present three findings. First, generated images were indeed harder to classify for both human pathologists and machine learning classifiers than their corresponding source images. Second, image classifiers trained with generated images as augmented data performed better on both easy and hard images from an independent test set. Finally, human annotator agreement and our model's measure of difficulty correlated strongly, implying that for future work requiring human annotator agreement, the confidence score of a machine learning classifier could be used as a proxy.
Lifelong Learning Process: Self-Memory Supervising and Dynamically Growing Networks ; From childhood to youth, humans gradually come to know the world. But for neural networks, this growing process seems difficult. Trapped in catastrophic forgetting, current researchers feed data of all categories to a neural network that keeps the same structure throughout the training process. We compare this training process with human learning patterns and find two major conflicts. In this paper, we study how to solve these conflicts on generative models based on the conditional variational autoencoder (CVAE) model. To solve the discontinuity conflict, we apply a memory playback strategy to maintain the model's ability to recognize and generate categories that are no longer visible, and we extend the traditional one-way CVAE to a circulatory mode to better carry out the memory playback strategy. To solve the 'dead' structure conflict, we rewrite the CVAE formula and are then able to make a novel interpretation of the functions of different parts in CVAE models. Based on this new understanding, we find ways to dynamically extend the network structure when training on new categories. We verify the effectiveness of our methods on MNIST and Fashion-MNIST and display some very interesting results.
Efficient numerical computation of the basic reproduction number for structured populations ; As is widely known, the basic reproduction number plays a key role in weighing birth/infection and death/recovery processes in several models of population dynamics. In this general setting, its characterization as the spectral radius of next-generation operators is rather elegant, but simultaneously poses serious obstacles to its practical determination. In this work we address the problem numerically by reducing the relevant operators to matrices through a pseudospectral collocation, eventually computing the sought quantity by solving finite-dimensional eigenvalue problems. The approach is illustrated for two classes of models, from ecology and epidemiology respectively. Several numerical tests demonstrate experimentally important features of the method, such as fast convergence and the influence of the smoothness of the models' coefficients. Examples of robust analysis of instances of specific models are also presented to show the method's potential and ease of application.
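Once the operators are reduced to matrices, the computation the abstract describes becomes a small eigenvalue problem: the basic reproduction number is the spectral radius of the next-generation matrix F V^{-1}. The two-compartment F (new-infection rates) and V (transition rates) below are made-up illustrative numbers, not from the paper:

```python
import numpy as np

# Minimal sketch of the next-generation-matrix method for a compartmental
# model with two infected classes; all rates are illustrative only.
F = np.array([[2.0, 0.5],
              [1.0, 1.5]])            # birth/infection terms
V = np.array([[1.0, 0.0],
              [0.0, 2.0]])            # death/recovery terms

K = F @ np.linalg.inv(V)              # finite-dimensional next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))   # basic reproduction number = spectral radius
print(R0)
```

In the paper's infinite-dimensional setting, pseudospectral collocation produces such F- and V-like matrices from the structured-population operators, after which the eigenvalue step is identical.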
GePpeTto Carves Italian into a Language Model ; In the last few years, pretrained neural architectures have provided impressive improvements across several NLP tasks. Still, generative language models are available mainly for English. We develop GePpeTto, the first generative language model for Italian, built using the GPT2 architecture. We provide a thorough analysis of GePpeTto's quality by means of both an automatic and a humanbased evaluation. The automatic assessment consists in (i) calculating perplexity across different genres and (ii) a profiling analysis over GePpeTto's writing characteristics. We find that GePpeTto's production is a sort of bonsai version of human production, with shorter yet complex sentences. Human evaluation is performed over a sentence completion task, where GePpeTto's output is judged as natural more often than not, and much closer to the original human texts than to a simpler language model which we take as baseline.
Improved Natural Language Generation via Loss Truncation ; Neural language models are usually trained to match the distributional properties of a largescale corpus by minimizing the log loss. While straightforward to optimize, this approach forces the model to reproduce all variations in the dataset, including noisy and invalid references (e.g., misannotations and hallucinated facts). Worse, the commonly used log loss is overly sensitive to such phenomena, and even a small fraction of noisy data can degrade performance. In this work, we show that the distinguishability of the model and the reference serves as a principled and robust alternative for handling invalid references. To optimize distinguishability, we propose loss truncation, which adaptively removes highloss examples during training. We show this is as easy to optimize as log loss and tightly bounds distinguishability under noise. Empirically, we demonstrate that loss truncation outperforms existing baselines on distinguishability on a summarization task, and show that samples generated by the loss truncation model have factual accuracy ratings that exceed those of baselines and match human references.
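The truncation step lends itself to a brief sketch. This is a generic illustration of dropping the highest-loss fraction of a batch, with an assumed drop fraction; it is not the authors' implementation:

```python
def truncated_loss(losses, drop_frac=0.1):
    """Mean loss after discarding the highest-loss fraction of examples."""
    k = int(len(losses) * (1.0 - drop_frac))  # number of low-loss examples kept
    kept = sorted(losses)[:k]
    return sum(kept) / len(kept)

# Noisy references produce a few very large log-loss values; truncation
# removes them so they cannot dominate the training signal.
batch = [1.0, 2.0, 3.0, 4.0, 5.0, 100.0, 200.0, 300.0, 400.0, 1000.0]
robust = truncated_loss(batch, drop_frac=0.5)  # mean of [1.0 .. 5.0] -> 3.0
```

In practice the same idea is applied per minibatch during training, with the kept examples contributing to the gradient.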
A Simple Language Model for TaskOriented Dialogue ; Taskoriented dialogue is often decomposed into three tasks: understanding user input, deciding actions, and generating a response. While such decomposition might suggest a dedicated model for each subtask, we find a simple, unified approach leads to stateoftheart performance on the MultiWOZ dataset. SimpleTOD is a simple approach to taskoriented dialogue that uses a single, causal language model trained on all subtasks recast as a single sequence prediction problem. This allows SimpleTOD to fully leverage transfer learning from pretrained, open domain, causal language models such as GPT2. SimpleTOD improves over the prior stateoftheart in joint goal accuracy for dialogue state tracking, and our analysis reveals robustness to noisy annotations in this setting. SimpleTOD also improves the main metrics used to evaluate action decisions and response generation in an endtoend setting: inform rate by 8.1 points, success rate by 9.7 points, and combined score by 7.2 points.
Comprehensive model and performance optimization of phaseonly spatial light modulators ; Several spurious effects are known to degrade the performance of phaseonly spatial light modulators. We introduce a comprehensive model that takes into account the major ones: curvature of the back panel, pixel crosstalk, and the internal FabryPerot cavity. To estimate the model parameters with high accuracy, we generate blazed grating patterns and acquire the intensity response curves of the first and second diffraction orders. The quantitative model is used to generate compensating holograms, which can produce optical modes with high fidelity.
Neural Syntactic Preordering for Controlled Paraphrase Generation ; Paraphrasing natural language sentences is a multifaceted process it might involve replacing individual words or short phrases, local rearrangement of content, or highlevel restructuring like topicalization or passivization. Past approaches struggle to cover this space of paraphrase possibilities in an interpretable manner. Our work, inspired by preordering literature in machine translation, uses syntactic transformations to "softly reorder" the source sentence and guide our neural paraphrasing model. First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoderdecoder model. This model operates over a partially lexical, partially syntactic view of the sentence and can reorder big chunks. Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoderdecoder paraphrase model to attend to the source words in a particular order. Our evaluation, both automatic and human, shows that the proposed system retains the quality of the baseline approaches while giving a substantial increase in the diversity of the generated paraphrases.
Comment on The wavedriven current in coastal canopies by M. Abdolahpour et al ; Laboratory and field measurements made over the past decade have shown the presence of a strong wavedriven mean current in submerged vegetation canopies. Luhar et al. 2010 suggested that this mean current is analogous to the streaming flow generated in wave boundary layers over bare beds, and developed a simple energy and momentum balance model to predict its magnitude. However, this model predicts that the magnitude of the mean current does not depend on canopy spatial density, which is inconsistent with the measurements made by Abdolahpour et al. 2017 in recent laboratory experiments. Motivated by observations that the wavedriven mean flow is most pronounced at the canopy interface, Abdolahpour et al. 2017 proposed an alternate explanation for its origin that it is driven by the vertical heterogeneity in orbital motion created by canopy drag. Such heterogeneity can give rise to incomplete particle orbits near the canopy interface and a Lagrangian mean current analogous to Stokes drift in the direction of wave propagation. A model guided by this physical insight and dimensional analysis is able to generate much more accurate predictions. This comment aims to reconcile these two different models for the wavedriven mean flow in submerged canopies.
Efficient Characterization of Dynamic Response Variation Using MultiFidelity Data Fusion through Composite Neural Network ; Uncertainties in a structure are inevitable and generally lead to variation in dynamic response predictions. For a complex structure, brute force Monte Carlo simulation for response variation analysis is infeasible since one single run may already be computationally costly. Data driven metamodeling approaches have thus been explored to facilitate efficient emulation and statistical inference. The performance of a metamodel hinges upon both the quality and quantity of the training dataset. In actual practice, however, highfidelity data acquired from highdimensional finite element simulation or experiment are generally scarce, which poses a significant challenge to metamodel establishment. In this research, we take advantage of the multilevel response prediction opportunity in structural dynamic analysis, i.e., acquiring rapidly a large amount of lowfidelity data from reducedorder modeling, and acquiring accurately a small amount of highfidelity data from fullscale finite element analysis. Specifically, we formulate a composite neural network fusion approach that can fully utilize the multilevel, heterogeneous datasets obtained. It implicitly identifies the correlation of the low and highfidelity datasets, which yields improved accuracy when compared with the stateoftheart. Comprehensive investigations using frequency response variation characterization as case example are carried out to demonstrate the performance.
Lower bounds in multiple testing A framework based on derandomized proxies ; The large bulk of work in multiple testing has focused on specifying procedures that control the false discovery rate FDR, with relatively less attention being paid to the corresponding Type II error known as the false nondiscovery rate FNR. A line of more recent work in multiple testing has begun to investigate the tradeoffs between the FDR and FNR and to provide lower bounds on the performance of procedures that depend on the model structure. Lacking thus far, however, has been a general approach to obtaining lower bounds for a broad class of models. This paper introduces an analysis strategy based on derandomization, illustrated by applications to various concrete models. Our main result is a metatheorem that gives a general recipe for obtaining lower bounds on the combination of FDR and FNR. We illustrate this metatheorem by deriving explicit bounds for several models, including instances with dependence, scaletransformed alternatives, and nonGaussianlike distributions. We provide numerical simulations of some of these lower bounds, and show a close relation to the actual performance of the BenjaminiHochberg BH algorithm.
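For context, the BH step-up procedure referenced above can be sketched in a few lines of plain Python (a textbook implementation, independent of the paper):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH procedure: returns a list of booleans, True = rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p-value
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k_max = rank  # largest rank passing p_(k) <= k * alpha / m
    reject = [False] * m
    for i in order[:k_max]:
        reject[i] = True
    return reject

decisions = benjamini_hochberg([0.01, 0.02, 0.03, 0.5], alpha=0.05)
# -> [True, True, True, False]
```

The step-up nature matters: the largest rank satisfying the criterion determines the rejection set, even if some smaller ranks fail it.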
A SelfTraining Method for Machine Reading Comprehension with Soft Evidence Extraction ; Neural models have achieved great success on machine reading comprehension MRC, many of which typically consist of two components: an evidence extractor and an answer predictor. The former seeks the most relevant information from a reference text, while the latter is to locate or generate answers from the extracted evidence. Despite the importance of evidence labels for training the evidence extractor, they are not cheaply accessible, particularly in many nonextractive MRC tasks such as YESNO question answering and multichoice MRC. To address this problem, we present a SelfTraining method STM, which supervises the evidence extractor with autogenerated evidence labels in an iterative process. At each iteration, a base MRC model is trained with golden answers and noisy evidence labels. The trained model will predict pseudo evidence labels as extra supervision in the next iteration. We evaluate STM on seven datasets over three MRC tasks. Experimental results demonstrate the improvement on existing MRC models, and we also analyze how and why such a selftraining method works in MRC. The source code can be obtained from https://github.com/SparkJiao/SelfTrainingMRC
A radiatively induced neutrino mass model with hidden local U1 and LFV processes elli to ellj , to e Z' and e to e e ; We investigate a model based on a hidden U(1)_X gauge symmetry in which neutrino mass is induced at the one-loop level by effects of interactions among particles in the hidden sector and the Standard Model leptons. Neutrino mass generation is also associated with the U(1)_X breaking scale, which is taken to be low to suppress the neutrino mass. We then formulate the neutrino mass matrix, lepton flavor violating processes, and the muon g-2, which are induced via interactions among Standard Model leptons and particles in the U(1)_X hidden sector and can be sizable in our scenario. Carrying out our numerical analysis, we show the expected ratios for these processes when the generated neutrino mass matrix fits the neutrino data.
Fostering Event Compression using Gated Surprise ; Our brain receives a dynamically changing stream of sensorimotor data. Yet, we perceive a rather organized world, which we segment into and perceive as events. Computational theories of cognitive science on eventpredictive cognition suggest that our brain forms generative, eventpredictive models by segmenting sensorimotor data into suitable chunks of contextual experiences. Here, we introduce a hierarchical, surprisegated recurrent neural network architecture, which models this process and develops compact compressions of distinct eventlike contexts. The architecture contains a contextual LSTM layer, which develops generative compressions of ongoing and subsequent contexts. These compressions are passed into a GRUlike layer, which uses surprise signals to update its recurrent latent state. The latent state is passed forward into another LSTM layer, which processes actual dynamic sensory flow in the light of the provided latent, contextual compression signals. Our model is shown to develop distinct event compressions and achieves the best performance on multiple event processing tasks. The architecture may be very useful for the further development of resourceefficient learning, hierarchical modelbased reinforcement learning, as well as the development of artificial eventpredictive cognition and intelligence.
Probabilistic Hyperproperties with Nondeterminism ; We study the problem of formalizing and checking probabilistic hyperproperties for models that allow nondeterminism in actions. We extend the temporal logic HyperPCTL, which has been previously introduced for discretetime Markov chains, to enable the specification of hyperproperties also for Markov decision processes. We generalize HyperPCTL by allowing explicit and simultaneous quantification over schedulers and probabilistic computation trees and show that it can express important quantitative requirements in security and privacy. We show that HyperPCTL model checking over MDPs is in general undecidable for quantification over probabilistic schedulers with memory, but restricting the domain to memoryless nonprobabilistic schedulers turns the model checking problem decidable. Subsequently, we propose an SMTbased encoding for model checking this language and evaluate its performance.
A Complete Cosmological Scenario in Teleparallel Gravity ; Teleparallel gravity is a modified theory of gravity in which the Ricci scalar R in the Lagrangian is replaced by a general function of the torsion scalar T in the action. With that, cosmology in teleparallel gravity becomes profoundly simplified because it is a second-order theory. The article presents a complete cosmological scenario in f(T) gravity with f(T) = T + βT^α, where α and β are model parameters. We present the profiles of energy density, pressure, and the equation of state EoS parameter. In addition, we employ statefinder diagnostics to check the deviation from the LambdaCDM model as well as the nature of dark energy. Finally, we discuss the energy conditions to check the consistency of our model and observe that the SEC is violated in the present model, supporting the acceleration of the Universe as per present observations.
Lagrangian description of cosmic fluids mapping dark energy into unified dark energy ; We investigate the appropriateness of the use of different Lagrangians to describe various components of the cosmic energy budget, discussing the degeneracies between them in the absence of nonminimal couplings to gravity or other fields, and clarifying some misconceptions in the literature. We further demonstrate that these degeneracies are generally broken for nonminimally coupled fluids, in which case the identification of the appropriate onshell Lagrangian may become essential in order to characterize the overall dynamics. We then show that models with the same onshell Lagrangian may have different proper energy densities and use this result to map dark energy models into unified dark energy models in which dark matter and dark energy are described by the same perfect fluid. We determine the correspondence between their equation of state parameters and sound speeds, briefly discussing the linear sound speed problem of unified dark energy models as well as a possible way out associated with the nonlinear dynamics.
Truncated Simulation and Inference in EdgeExchangeable Networks ; Edgeexchangeable probabilistic network models generate edges as an i.i.d. sequence from a discrete measure, providing a simple means for statistical inference of latent network properties. The measure is often constructed using the selfproduct of a realization from a Bayesian nonparametric BNP discrete prior; but unlike in standard BNP models, the selfproduct measure prior is not conjugate to the likelihood, hindering the development of exact simulation and inference algorithms. Approximation via finite truncation of the discrete measure is a straightforward alternative, but incurs an unknown approximation error. In this paper, we develop methods for forward simulation and posterior inference in random selfproductmeasure models based on truncation, and provide theoretical guarantees on the quality of the results as a function of the truncation level. The techniques we present are general and extend to the broader class of discrete Bayesian nonparametric models.
GPTtoo A languagemodelfirst approach for AMRtotext generation ; Abstract Meaning Representations AMRs are broadcoverage sentencelevel semantic graphs. Existing approaches to generating text from AMR have focused on training sequencetosequence or graphtosequence models on AMR annotated data only. In this paper, we propose an alternative approach that combines a strong pretrained language model with cycle consistencybased rescoring. Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques on the English LDC2017T10 dataset, including the recent use of transformer architectures. In addition to the standard evaluation metrics, we provide human evaluation experiments that further substantiate the strength of our approach.
Local and global stability analysis of a CurzonAhlborn model applied to power plants working at maximum kefficient power ; The analysis of the effect of noisy perturbations on real heat engines working in any steadystate regime has been a topic of interest within the context of FiniteTime Thermodynamics FTT. The study of their local stability has been proposed through the socalled performance regimes maximum power output, maximum ecological function, among others. Recently, the global stability analysis of an endoreversible heat engine was also studied taking into account the same performance regimes. We present a local and global stability analysis of power plant models (the CurzonAhlborn model) operating in a generalized efficient power regime called maximum kefficient power. We apply Lyapunov stability theory to construct Lyapunov functions to prove the asymptotically stable behavior of the steady state of intermediate temperatures in the CurzonAhlborn model. We consider the effect of a linear heat transfer law on the phase portrait description of real power plants, as well as the role of the k parameter in the evolution of perturbations to the heat flow. In general, restructured operation conditions show better stability under external perturbations.
Crosslingual Multispeaker TexttoSpeech under LimitedData Scenario ; Modeling voices for multiple speakers and multiple languages in one texttospeech system has been a challenge for a long time. This paper presents an extension of Tacotron2 to achieve bilingual multispeaker speech synthesis when there are limited data for each language. We achieve crosslingual synthesis, including codeswitching cases, between English and Mandarin for monolingual speakers. The two languages share the same phonemic representations for input, while the language attribute and the speaker identity are independently controlled by language tokens and speaker embeddings, respectively. In addition, we investigate the model's performance on crosslingual synthesis, with and without a bilingual dataset during training. With the bilingual dataset, not only can the model generate highfidelity speech for all speakers in the language they speak, but it can also generate accented, yet fluent and intelligible, speech for monolingual speakers in a nonnative language. For example, the Mandarin speaker can speak English fluently. Furthermore, the model trained with the bilingual dataset is robust for codeswitching texttospeech, as shown in our results and provided samples: https://caizexin.github.io/mlmssynsamples/index.html
Beta PoissonG Family of Distributions Its Properties and Application with Failure Time Data ; A new generalization of the PoissonG family of distributions, called the beta PoissonG family, is introduced. Useful expansions of the probability density function and the cumulative distribution function of the proposed family are derived and seen as infinite mixtures of the PoissonG distribution. The moment generating function, power moments, entropy, quantile function, skewness and kurtosis are investigated. Numerical computations of moments, skewness, kurtosis and entropy are tabulated for selected parameter values. Furthermore, estimation by the method of maximum likelihood is discussed. A simulation study is carried out under varying sample sizes to assess the performance of this model. Finally, the suitability of the proposed model in comparison to recently introduced models is checked by considering two real life data sets.
The OpenCitations Data Model ; A variety of schemas and ontologies are currently used for the machinereadable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or application context. In this paper we present the OpenCitations Data Model OCDM, a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
Predictive Modeling of Periodic Behavior for HumanRobot Symbiotic Walking ; We propose in this paper Periodic Interaction Primitives a probabilistic framework that can be used to learn compact models of periodic behavior. Our approach extends existing formulations of Interaction Primitives to periodic movement regimes, i.e., walking. We show that this model is particularly wellsuited for learning datadriven, customized models of human walking, which can then be used for generating predictions over future states or for inferring latent, biomechanical variables. We also demonstrate how the same framework can be used to learn controllers for a robotic prosthesis using an imitation learning approach. Results in experiments with human participants indicate that Periodic Interaction Primitives efficiently generate predictions and ankle angle control signals for a robotic prosthetic ankle, with MAE of 2.21 degrees in 0.0008s per inference. Performance degrades gracefully in the presence of noise or sensor fallouts. Compared to alternatives, this algorithm functions 20 times faster and performs 4.5 times more accurately on test subjects.
NonSUSY Gepner Models with Vanishing Cosmological Constant ; In this article we discuss a construction of nonSUSY type II string vacua with the vanishing cosmological constant at the one-loop level based on the generic Gepner models for CalabiYau 3folds. We make an orbifolding of the Gepner models by $Z_2 \times Z_4$, which acts asymmetrically with some discrete torsions incorporated. We demonstrate that the obtained type II string vacua indeed lead to the vanishing cosmological constant at one loop, whereas spacetime supercharges cannot be constructed as long as one assumes chiral forms such as $Q^{\alpha}_L \equiv \oint dz\, J^{\alpha}_L(z)$. We further discuss possible generalizations of the models described above.
Global Guidance for Local Generalization in Model Checking ; SMTbased model checkers, especially IC3style ones, are currently the most effective techniques for verification of infinite state systems. They infer global inductive invariants via local reasoning about a single step of the transition relation of a system, while employing SMTbased procedures, such as interpolation, to mitigate the limitations of local reasoning and allow for better generalization. Unfortunately, these mitigations intertwine model checking with heuristics of the underlying SMTsolver, negatively affecting stability of model checking. In this paper, we propose to tackle the limitations of locality in a systematic manner. We introduce explicit global guidance into the local reasoning performed by IC3style algorithms. To this end, we extend the SMTIC3 paradigm with three novel rules, designed to mitigate fundamental sources of failure that stem from locality. We instantiate these rules for the theory of Linear Integer Arithmetic and implement them on top of SPACER solver in Z3. Our empirical results show that GSPACER, SPACER extended with global guidance, is significantly more effective than both SPACER and sole global reasoning, and, furthermore, is insensitive to interpolation.
Inflation and Reheating in f(R,h) theory formulated in the Palatini formalism ; A new model for inflation using modified gravity in the Palatini formalism is constructed. Here a nonminimal coupling of the scalar field h with the curvature R through a general function f(R,h) is considered. Explicit inflation models for some choices of f(R,h) are developed. By writing an equivalent scalartensor action for this model and going over to the Einstein frame, slow roll parameters are constructed. There exists a large parameter space which satisfies values of n_s and limits on r compatible with Planck 2018 data. Further, we calculate the reheating temperature and the number of efolds at the end of reheating for different values of the equation of state parameter for all the constructed models.
Machine Learning for Observables Reactant to Product State Distributions for AtomDiatom Collisions ; Machine learningbased models to predict product state distributions from a distribution of reactant conditions for atomdiatom collisions are presented and quantitatively tested. The models are based on function, kernel and gridbased representations of the reactant and product state distributions. While all three methods predict final state distributions from explicit quasiclassical trajectory simulations with R^2 > 0.998, the gridbased approach performs best. Although a functionbased approach is found to be more than two times better in computational performance, the kernel and gridbased approaches are preferred in terms of prediction accuracy, practicability and generality. The functionbased approach also suffers from lacking a general set of model functions. Applications of the gridbased approach to nonequilibrium, multitemperature initial state distributions are presented, a situation common to energy distributions in hypersonic flows. The role of such models in Direct Simulation Monte Carlo and computational fluid dynamics simulations is also discussed.
Equivalence of inflationary models between the metric and Palatini formulation of scalartensor theories ; With a scalar field nonminimally coupled to curvature, the underlying geometry and variational principle of gravity metric or Palatini becomes important and makes a difference, as the field dynamics and observational predictions generally depend on this choice. In the present paper we describe a classification principle which encompasses both metric and Palatini models of inflation, employing the fact that inflationary observables can be neatly expressed in terms of certain quantities which remain invariant under conformal transformations and scalar field redefinitions. This allows us to elucidate the specific conditions when a model yields equivalent phenomenology in the metric and Palatini formalisms, and also to outline a method how to systematically construct different models in both formulations that produce the same observables.
Crossdiffusion induced patterns for a singlestep enzymatic reaction ; Several different enzymes display an apparent diffusion coefficient that increases with the concentration of their substrate. Moreover, their motion becomes directed in substrate gradients. Currently, there are several competing models for these transport dynamics. Here, we analyze whether the enzymatic reactions can generate a significant feedback from enzyme transport onto the substrate profile. We find that this feedback can generate spatial patterns in the enzyme distribution, with just a singlestep catalytic reaction. However, patterns are formed only for a subclass of transport models. For such models, nonspecific repulsive interactions between the enzyme and the substrate cause the enzyme to accumulate in regions of low substrate concentration. Reactions then amplify local substrate fluctuations, causing enzymes to further accumulate where substrate is low. Experimental analysis of this pattern formation process could discriminate between different transport models.
Flexible Bayesian Modelling for Nonlinear Image Registration ; We describe a diffeomorphic registration algorithm that allows groups of images to be accurately aligned to a common space, which we intend to incorporate into the SPM software. The idea is to perform inference in a probabilistic graphical model that accounts for variability in both shape and appearance. The resulting framework is general and entirely unsupervised. The model is evaluated at intersubject registration of 3D human brain scans. Here, the main modeling assumption is that individual anatomies can be generated by deforming a latent 'average' brain. The method is agnostic to imaging modality and can be applied with no prior processing. We evaluate the algorithm using freely available, manually labelled datasets. In this validation we achieve stateoftheart results, within reasonable runtimes, against previous stateoftheart, widely used intersubject registration algorithms. On the unprocessed dataset, the increase in overlap score is over 17%. These results demonstrate the benefits of using informative computational anatomy frameworks for nonlinear registration.
Comparison Theorems of Phylogenetic Spaces and the Moduli Spaces of Curves ; Rapid developments in genetics and biology have led to phylogenetic methods becoming an important direction in the study of cancer and viral evolution. Although our understanding of gene biology and biochemistry has increased and is increasing at a remarkable rate, the theoretical models of genetic evolution still use the phylogenetic tree model that was introduced by Darwin in 1859 and the generalization to phylogenetic networks introduced by Grant in 1971. Darwin's model uses phylogenetic trees to capture the evolutionary relationships of reproducing individuals [6]; Grant's generalization to phylogenetic networks is meant to account for the phenomena of horizontal gene transfer [14]. Therefore, it is important to provide an accurate mathematical description of these models and to understand their connection with other fields of mathematics. In this article, we focus on the graph theoretical aspects of phylogenetic trees and networks and their connection to stable curves. We introduce the building blocks of evolutionary moduli spaces, the dual intersection complex of the moduli spaces of stable curves, and the categorical relationship between the phylogenetic spaces and stable curves in $\overline{\mathfrak{M}}_{0,n}(\mathbb{C})$ and $\overline{\mathfrak{M}}_{0,n}(\mathbb{R})$. We also show that the space of network topologies maps injectively into the boundary of $\overline{\mathfrak{M}}_{g,n}(\mathbb{C})$.
Estimates of dissipation of wave energy by sea ice for a field experiment in the Southern Ocean, using modeldata inversion ; A modeldata inversion is applied to a very large observational dataset collected in the Southern Ocean north of the Ross Sea during late autumn to early winter, producing estimates of the frequencydependent rate of dissipation by sea ice. The modeling platform is WAVEWATCH IIIR which accounts for nonstationarity, advection, wave generation, and other relevant processes. The resulting 9477 dissipation profiles are colocated with other variables such as ice thickness to quantify correlations which might be exploited in later studies to improve predictions. Mean dissipation profiles from the inversion are fitted to simple binomials. Variability about the mean profile is not small, but the binomials show remarkable qualitative similarity to prior observationbased estimates of dissipation, and the power dependence is consistent with at least three theoretical models, one of which assumes that dissipation is dominated by turbulence generated by shear at the icewater interface.
Simulating exotic phases of matter with bond-directed interactions using arrays of Majorana-Cooper pair boxes ; It is suggested that networks of Majorana-Cooper pair boxes connected by metallic nanowires can simulate various exotic states of matter. In these simulations, Majorana-Cooper boxes play the role of effective spins $S=1/2$, and the metallic connections generate the Kondo screening and the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction. Depending on which prevails, the Kondo effect or the RKKY exchange, one obtains either an effective spin model or a Kondo lattice. The list of exotic states includes the famous hexagonal Kitaev model, a generalization of this model to a Kondo lattice, and various spin models with three-spin interactions. A special emphasis is placed on the discussion of the Kondo lattice scenario.
Multi-Image Summarization: Textual Summary from a Set of Cohesive Images ; Multi-sentence summarization is a well-studied problem in NLP, while generating image descriptions for a single image is a well-studied problem in Computer Vision. However, for applications such as image cluster labeling or web page summarization, summarizing a set of images is also a useful and challenging task. This paper proposes the new task of multi-image summarization, which aims to generate a concise and descriptive textual summary given a coherent set of input images. We propose a model that extends the image-captioning Transformer-based architecture from a single image to multiple images. A dense average image feature aggregation network allows the model to focus on a coherent subset of attributes across the input images. We explore various input representations to the Transformer network and empirically show that aggregated image features are superior to individual image embeddings. We additionally show that the performance of the model is further improved by pre-training the model parameters on a single-image captioning task, which appears to be particularly effective in eliminating hallucinations in the output.
N-of-1 Modelling of Lifestyle Impact on Sleep Performance ; Sleep is critical to leading a healthy lifestyle. Each day, most people go to sleep without any idea about how their night's rest is going to be. For an activity that humans spend around a third of their life doing, there is a surprising amount of mystery around it. Despite current research, creating personalized sleep models in real-world settings has been challenging. Existing literature provides several connections between daily activities and sleep quality. Unfortunately, these insights do not generalize well across many individuals. For these reasons, it is important to create a personalized sleep model. This research proposes a sleep model that can identify causal relationships between daily activities and sleep quality and present the user with specific feedback about how their lifestyle affects their sleep. Our method uses N-of-1 experiments on longitudinal user data and event mining to generate an understanding of how lifestyle choices (exercise, eating, circadian rhythm) impact sleep quality. Our experimental results identified and quantified relationships while extracting confounding variables through a causal framework. These insights can be used by the user or a personal health navigator to provide guidance on improving sleep.
Consistency Guided Scene Flow Estimation ; Consistency Guided Scene Flow Estimation (CGSF) is a self-supervised framework for the joint reconstruction of 3D scene structure and motion from stereo video. The model takes two temporal stereo pairs as input and predicts disparity and scene flow. The model self-adapts at test time by iteratively refining its predictions. The refinement process is guided by a consistency loss, which combines stereo and temporal photo-consistency with a geometric term that couples disparity and 3D motion. To handle inherent modeling error in the consistency loss (e.g., Lambertian assumptions) and for better generalization, we further introduce a learned output refinement network, which takes the initial predictions, the loss, and the gradient as input, and efficiently predicts a correlated output update. In multiple experiments, including ablation studies, we show that the proposed model can reliably predict disparity and scene flow in challenging imagery, achieves better generalization than the state-of-the-art, and adapts quickly and robustly to unseen domains.
Evidences of the Generalizations of BKT Transition in Quantum Clock Model ; We calculate the ground-state energy density $\epsilon(g)$ for the one-dimensional $N$-state quantum clock model up to order 18, where $g$ is the coupling and $N=3,4,5,\ldots,10,20$. Using methods based on Padé approximation, we extract the singular structure of $\epsilon''(g)$ or $\epsilon(g)$. They correspond to the specific heat and free energy of the classical 2D clock model. We find that, for $N=3,4$, there is a single critical point at $g_c=1$. The heat-capacity exponent of the corresponding 2D classical model is $\alpha=0.34\pm0.01$ for $N=3$, and $\alpha=0.01\pm0.01$ for $N=4$. For $N>4$, there are two exponential singularities related by $g_{c1}=1/g_{c2}$, and $\epsilon(g)$ behaves as $A e^{-c/(g_c-g)^{\sigma}}+\text{analytic terms}$ near $g_c$. The exponent $\sigma$ gradually grows from 0.2 to 0.5 as $N$ increases from 5 to 9, and it stabilizes at 0.5 when $N\geq9$. These phase transitions should be generalizations of the Kosterlitz-Thouless transition, which has $\sigma=0.5$. The physical pictures of these phase transitions are still unclear.
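The Padé-based extraction of a critical point and exponent from series coefficients can be illustrated with a small Dlog-Padé calculation. This is the generic series-analysis technique, not the authors' code: the test function, the order-18 truncation, and the exponent value $\alpha=0.34$ below are assumptions chosen only to echo the abstract.

```python
import numpy as np
from scipy.interpolate import pade

# Toy Dlog-Pade analysis: f(g) = (1 - g)^(-alpha) is singular at g_c = 1, and its
# logarithmic derivative f'/f = alpha/(1 - g) has a simple pole at g_c with
# residue -alpha, which a Pade approximant of the truncated series can locate.
alpha_true, order = 0.34, 18

# Taylor coefficients of (1 - g)^(-alpha): c_k = c_{k-1} * (alpha + k - 1) / k
c = np.ones(order + 1)
for k in range(1, order + 1):
    c[k] = c[k - 1] * (alpha_true + k - 1) / k

cp = np.array([(k + 1) * c[k + 1] for k in range(order)])  # series of f'(g)

# series of h = f'/f via the division recurrence h_k = (cp_k - sum c_j h_{k-j}) / c_0
h = np.zeros(order)
for k in range(order):
    h[k] = (cp[k] - sum(c[j] * h[k - j] for j in range(1, k + 1))) / c[0]

p, q = pade(h, 1)                                # Pade approximant of f'/f, one pole
gc_est = np.roots(q.coeffs).real[0]              # estimated critical coupling
alpha_est = -p(gc_est) / np.polyder(q)(gc_est)   # minus the residue at the pole
```

With exact coefficients this recovers $g_c=1$ and $\alpha=0.34$ essentially to machine precision; the exponential (BKT-like) singularities discussed in the abstract require more delicate variants of the same idea.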
Facial Expression Editing with Continuous Emotion Labels ; Recently, deep generative models have achieved impressive results in the field of automated facial expression editing. However, the approaches presented so far presume a discrete representation of human emotions and are therefore limited in the modelling of non-discrete emotional expressions. To overcome this limitation, we explore how continuous emotion representations can be used to control automated expression editing. We propose a deep generative model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels. One dimension represents an emotion's valence, the other represents its degree of arousal. We demonstrate the functionality of our model with a quantitative analysis using classifier networks as well as with a qualitative analysis.
A General Class of Transfer Learning Regression without Implementation Cost ; We propose a novel framework that unifies and extends existing methods of transfer learning (TL) for regression. To bridge a pre-trained source model to the model on a target task, we introduce a density-ratio reweighting function, which is estimated through the Bayesian framework with a specific prior distribution. By changing two intrinsic hyperparameters and the choice of the density-ratio model, the proposed method can integrate three popular methods of TL: TL based on cross-domain similarity regularization, probabilistic TL using density-ratio estimation, and fine-tuning of pre-trained neural networks. Moreover, the proposed method can benefit from its simple implementation without any additional cost; the regression model can be fully trained using off-the-shelf libraries for supervised learning, in which the original output variable is simply transformed to a new output variable. We demonstrate its simplicity, generality, and applicability using various real data applications.
Modeling Baroque Two-Part Counterpoint with Neural Machine Translation ; We propose a system for contrapuntal music generation based on a Neural Machine Translation (NMT) paradigm. We consider Baroque counterpoint and are interested in modeling the interaction between any two given parts as a mapping between a given source material and an appropriate target material. As in translation, the former imposes some constraints on the latter but doesn't define it completely. We collate and edit a bespoke dataset of Baroque pieces, use it to train an attention-based neural network model, and evaluate the generated output via BLEU score and musicological analysis. We show that our model is able to respond with some idiomatic trademarks, such as imitation and appropriate rhythmic offset, although it falls short of having learned stylistically correct contrapuntal motion (e.g., avoidance of parallel fifths) or stricter imitative rules, such as canon.
Conditional particle filters with diffuse initial distributions ; Conditional particle filters (CPFs) are powerful smoothing algorithms for general nonlinear/non-Gaussian hidden Markov models. However, CPFs can be inefficient or difficult to apply with diffuse initial distributions, which are common in statistical applications. We propose a simple but generally applicable auxiliary variable method, which can be used together with the CPF in order to perform efficient inference with diffuse initial distributions. The method only requires simulatable Markov transitions that are reversible with respect to the initial distribution, which can be improper. We focus in particular on random-walk type transitions which are reversible with respect to a uniform initial distribution on some domain, and on autoregressive kernels for Gaussian initial distributions. We propose to use online adaptations within the methods. In the case of random-walk transitions, our adaptations use the estimated covariance and acceptance-rate adaptation, and we detail their theoretical validity. We tested our methods with a linear-Gaussian random-walk model, a stochastic volatility model, and a stochastic epidemic compartment model with time-varying transmission rate. The experimental findings demonstrate that our method works reliably with little user specification and can mix substantially better than a direct particle Gibbs algorithm that treats initial states as parameters.
Approaching off-diagonal long-range order for 1+1-dimensional relativistic anyons ; We construct and study relativistic anyons in 1+1 dimensions generalizing well-known models of Dirac fermions. First, a model of free anyons is constructed and then extended in two ways: (i) by adding density-density interactions, as in the Luttinger model, and (ii) by coupling the free anyons to a U(1) gauge field, as in the Schwinger model. Second, physical properties of these extensions are studied. By investigating off-diagonal long-range order (ODLRO) at zero temperature, we show that anyonic statistics allows one to get arbitrarily close to ODLRO, but that this possibility is destroyed by the gauge coupling. The latter is due to a nonzero effective mass generated by gauge invariance, which we show also implies the presence of screening, independently of the anyonic statistics.
$\Lambda$CDM without cosmological constant ; A type of exponential correction to General Relativity gives a viable modified gravity model of dark energy. The model behaves as $R-2\Lambda$ at large curvature, where an effective cosmological constant appears, but this constant becomes zero in flat spacetime. The cosmic evolution of the main density parameters is consistent with current observations. The thin-shell conditions for the Solar system were analyzed. Apart from satisfying cosmological and local gravity restrictions, the model may also show measurable differences with $\Lambda$CDM at recent times. The current value of the deviation parameter $m$ for scales relevant to the matter power spectrum can be larger than $10^{-6}$. The growth index of matter density perturbations is clearly different from that of $\Lambda$CDM. The theoretical predictions of the model for the weighted growth rate were analyzed in the light of the $f\sigma_8$ tension.
Graph2Kernel Grid-LSTM: A Multi-Cued Model for Pedestrian Trajectory Prediction by Learning Adaptive Neighborhoods ; Pedestrian trajectory prediction is a prominent research track that has advanced towards modelling of crowd social and contextual interactions, with extensive usage of Long Short-Term Memory (LSTM) for temporal representation of walking trajectories. Existing approaches use virtual neighborhoods as a fixed grid for pooling social states of pedestrians, with a tuning process that controls how social interactions are captured. This entails performance customization to specific scenes but lowers the generalization capability of the approaches. In our work, we deploy Grid-LSTM, a recent extension of LSTM, which operates over multidimensional feature inputs. We present a new perspective on interaction modeling by proposing that pedestrian neighborhoods can become adaptive in design. We use Grid-LSTM as an encoder to learn about potential future neighborhoods and their influence on pedestrian motion, given the visual and the spatial boundaries. Our model outperforms state-of-the-art approaches that collate resembling features over several publicly tested surveillance videos. The experimental results clearly illustrate the generalization of our approach across datasets that vary in scene features and crowd dynamics.
Efficient Marginalization of Discrete and Structured Latent Variables via Sparsity ; Training neural network models with discrete (categorical or structured) latent variables can be computationally challenging, due to the need for marginalization over large or combinatorial sets. To circumvent this issue, one typically resorts to sampling-based approximations of the true marginal, requiring noisy gradient estimators (e.g., the score function estimator) or continuous relaxations with lower-variance reparameterized gradients (e.g., Gumbel-Softmax). In this paper, we propose a new training strategy which replaces these estimators with an exact yet efficient marginalization. To achieve this, we parameterize discrete distributions over latent assignments using differentiable sparse mappings: sparsemax and its structured counterparts. In effect, the support of these distributions is greatly reduced, which enables efficient marginalization. We report successful results in three tasks covering a range of latent variable modeling applications: a semi-supervised deep generative model, a latent communication game, and a generative model with a bit-vector latent representation. In all cases, we obtain good performance while still achieving the practicality of sampling-based approximations.
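The mechanism is easy to see in a toy version: sparsemax maps scores to a probability vector with small support, so an expectation over latent assignments only needs to sum over the surviving entries. Below is a minimal NumPy sketch of the published sparsemax projection; the scores and the per-assignment losses are invented for illustration, and the paper's structured counterparts are not covered.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of a score vector z onto the probability simplex
    (Martins & Astudillo, 2016); unlike softmax, it typically returns exact zeros."""
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv          # which sorted entries stay active
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1) / k_max        # threshold shared by the support
    return np.maximum(z - tau, 0.0)

scores = np.array([2.0, 1.8, 0.1, -1.0, -2.0])    # scores over 5 latent assignments
p = sparsemax(scores)                             # -> [0.6, 0.4, 0.0, 0.0, 0.0]
losses = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # loss of each latent assignment
support = np.nonzero(p)[0]
expected_loss = float(p[support] @ losses[support])  # exact marginal, 2 terms only
```

Only two of the five assignments receive nonzero probability here, so the exact expectation costs two terms instead of a sum (or a sampled estimate) over the full latent set.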
Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control ; Recent approaches for modelling dynamics of physical systems with neural networks enforce Lagrangian or Hamiltonian structure to improve prediction and generalization. However, when coordinates are embedded in high-dimensional data such as images, these approaches either lose interpretability or can only be applied to one particular example. We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control. The model infers Lagrangian dynamics on generalized coordinates that are simultaneously learned with a coordinate-aware variational autoencoder (VAE). The VAE is designed to account for the geometry of physical systems composed of multiple rigid bodies in the plane. By inferring interpretable Lagrangian dynamics, the model learns physical system properties, such as kinetic and potential energy, which enables long-term prediction of dynamics in the image space and synthesis of energy-based controllers.
Anisotropic Diffusion and Traveling Waves of Toxic Proteins in Neurodegenerative Diseases ; Neurodegenerative diseases are closely associated with the amplification and invasion of toxic proteins. In particular, Alzheimer's disease is characterized by the systematic progression of amyloid-beta and tau proteins in the brain. These two protein families are coupled, and it is believed that their joint presence greatly enhances the resulting damage. Here, we examine a class of coupled chemical kinetics models of healthy and toxic proteins in two spatial dimensions. The anisotropic diffusion expected to take place within the brain along axonal pathways is factored into the models and produces a filamentary, predominantly one-dimensional transmission. Nevertheless, the potential of the anisotropic models to generate interactions that take advantage of the two-dimensional landscape is showcased. Finally, a reduction of the models into a simpler family of generalized Fisher-Kolmogorov-Petrovskii-Piskunov (FKPP) type systems is examined. It is seen that the latter captures well the qualitative propagation features, although it may somewhat underestimate the concentrations of the toxic proteins.
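As a point of reference for the FKPP reduction, the scalar one-dimensional FKPP equation $u_t = D u_{xx} + u(1-u)$ already produces the traveling-front behaviour. The sketch below (an illustration with arbitrary grid and parameter choices, not the paper's coupled two-dimensional anisotropic model) integrates it with explicit finite differences and checks the front position against the classical speed $2\sqrt{D}$.

```python
import numpy as np

# Explicit finite-difference integration of u_t = D u_xx + u(1 - u).
D = 1.0
L, N = 200.0, 2000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt, steps = 0.002, 25000            # dt < dx^2 / (2 D) for stability; total T = 50
u = np.where(x < 10.0, 1.0, 0.0)    # toxic concentration seeded on the left

for _ in range(steps):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0          # freeze the ends; the front never reaches them
    u = u + dt * (D * lap + u * (1.0 - u))

# half-height point of the front; the classical FKPP front speed is 2*sqrt(D) = 2,
# so after T = 50 the front sits near x ~ 10 + 2T (minus a logarithmic correction)
front = x[np.argmin(np.abs(u - 0.5))]
```

The invaded region saturates at $u \approx 1$ behind the front while the medium ahead stays near zero, which is the one-dimensional caricature of the filamentary transmission described above.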
Learning Spoken Language Representations with Neural Lattice Language Modeling ; Pre-trained language models have achieved huge improvements on many NLP tasks. However, these methods are usually designed for written text, so they do not consider the properties of spoken language. Therefore, this paper aims at generalizing the idea of language model pre-training to lattices generated by recognition systems. We propose a framework that trains neural lattice language models to provide contextualized representations for spoken language understanding tasks. The proposed two-stage pre-training approach reduces the demands on speech data and has better efficiency. Experiments on intent detection and dialogue act recognition datasets demonstrate that our proposed method consistently outperforms strong baselines when evaluated on spoken inputs. The code is available at https://github.com/MiuLab/LatticeELMo.
Early recombination as a solution to the H0 tension ; We show that the $H_0$ tension can be resolved by making recombination earlier, keeping the fit to cosmic microwave background (CMB) data almost intact. We provide a suite of general necessary conditions to give a good fit to CMB data while realizing a high value of $H_0$ suggested by local measurements. As a concrete example of a successful scenario with early recombination, we demonstrate that a model with time-varying $m_e$ can indeed satisfy all the conditions. We further show that such a model can also be well fitted to low-$z$ distance measurements of baryon acoustic oscillations (BAO) and type-Ia supernovae (SNeIa) with a simple extension of the model. Time-varying $m_e$ in the framework of $\Omega_k\Lambda$CDM is found to be a sufficient and excellent example of a solution to the $H_0$ tension, yielding $H_0 = 72.3^{+2.8}_{-2.7}\,\mathrm{km/sec/Mpc}$ from the combination of CMB, BAO and SNeIa data, even without incorporating any direct local $H_0$ measurements. Apart from the $H_0$ tension, this model is also favored from the viewpoint of the CMB lensing anomaly.
Contour Models for Boundaries Enclosing Star-Shaped and Approximately Star-Shaped Polygons ; Boundaries on spatial fields divide regions with particular features from surrounding background areas. These boundaries are often described with contour lines. To measure and record these boundaries, contours are often represented as ordered sequences of spatial points that connect to form a line. Methods to identify boundary lines from interpolated spatial fields are well-established. Less attention has been paid to how to model sequences of connected spatial points. For data of the latter form, we introduce the Gaussian Star-shaped Contour Model (GSCM). GSCMs generate sequences of spatial points by generating sets of distances in various directions from a fixed starting point. The GSCM is designed for modeling contours that enclose regions that are star-shaped polygons or approximately star-shaped polygons. Metrics are introduced to assess the extent to which a polygon deviates from being star-shaped. Simulation studies illustrate the performance of the GSCM in various scenarios, and an analysis of Arctic sea ice edge contour data highlights how GSCMs can be applied to observational data.
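The generative idea, ordered boundary points produced by Gaussian distances in fixed directions from a starting point, can be sketched as follows. This is an illustrative simplification, not the published GSCM: the mean radius profile, the angular correlation kernel, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dir = 72
theta = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)  # fixed directions

mu = 10.0 + 2.0 * np.cos(2.0 * theta)   # assumed mean distance in each direction

# correlate nearby directions; a squared-exponential kernel of the chordal
# distance 2*sin(delta/2) on the circle is positive semidefinite
delta = np.abs(theta[:, None] - theta[None, :])
chord = 2.0 * np.sin(delta / 2.0)
cov = 1.5**2 * np.exp(-(chord / 0.6) ** 2) + 1e-8 * np.eye(n_dir)

r = rng.multivariate_normal(mu, cov)          # one Gaussian draw of distances
x, y = r * np.cos(theta), r * np.sin(theta)   # ordered contour points; the polygon
                                              # is star-shaped about the origin
                                              # whenever all radii are positive
```

Because each direction contributes exactly one radius, any draw with positive radii encloses a polygon that is star-shaped with respect to the starting point, which is the constraint the GSCM is built around.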
Advances of Transformer-Based Models for News Headline Generation ; Pre-trained language models based on the Transformer architecture are the reason for recent breakthroughs in many areas of NLP, including sentiment analysis, question answering, and named entity recognition. Headline generation is a special kind of text summarization task. To succeed in it, models need strong natural language understanding that goes beyond the meaning of individual words and sentences, as well as an ability to distinguish essential information. In this paper, we fine-tune two pre-trained Transformer-based models (mBART and BertSumAbs) for that task and achieve new state-of-the-art results on the RIA and Lenta datasets of Russian news. BertSumAbs increases ROUGE on average by 2.9 and 2.0 points, respectively, over the previous best scores achieved by a Phrase-Based Attentional Transformer and CopyNet.
Image Captioning with Compositional Neural Module Networks ; In image captioning, where fluency is an important factor in evaluation (e.g., n-gram metrics), sequential models are commonly used; however, sequential models generally result in overgeneralized expressions that lack the details that may be present in an input image. Inspired by the idea of compositional neural module networks in the visual question answering task, we introduce a hierarchical framework for image captioning that explores both compositionality and sequentiality of natural language. Our algorithm learns to compose a detail-rich sentence by selectively attending to different modules corresponding to unique aspects of each object detected in an input image, to include specific descriptions such as counts and color. In a set of experiments on the MS-COCO dataset, the proposed model outperforms a state-of-the-art model across multiple evaluation metrics, more importantly, presenting visually interpretable results. Furthermore, the breakdown of subcategory F-scores of the SPICE metric and human evaluation on Amazon Mechanical Turk show that our compositional module networks effectively generate accurate and detailed captions.
M-Evolve: Structural-Mapping-Based Data Augmentation for Graph Classification ; Graph classification, which aims to identify the category labels of graphs, plays a significant role in drug classification, toxicity detection, protein analysis, etc. However, the limited scale of the benchmark datasets makes it easy for graph classification models to fall into overfitting and undergeneralization. To improve this, we introduce data augmentation on graphs (i.e., graph augmentation) and present four methods (random mapping, vertex-similarity mapping, motif-random mapping and motif-similarity mapping) to generate more weakly labeled data for small-scale benchmark datasets via heuristic transformation of graph structures. Furthermore, we propose a generic model evolution framework, named M-Evolve, which combines graph augmentation, data filtration and model retraining to optimize pre-trained graph classifiers. Experiments on six benchmark datasets demonstrate that the proposed framework helps existing graph classification models alleviate overfitting and undergeneralization in training on small-scale benchmark datasets, successfully yielding an average improvement of 3-13% accuracy on graph classification tasks.
Generalization of Deep Convolutional Neural Networks: A Case Study on Open-source Chest Radiographs ; Deep Convolutional Neural Networks (DCNNs) have attracted extensive attention and been applied in many areas, including medical image analysis and clinical diagnosis. One major challenge is to conceive a DCNN model with remarkable performance on both internal and external data. We demonstrate that DCNNs may not generalize to new data, but that increasing the quality and heterogeneity of the training data helps to improve generalizability. We use the InceptionResNetV2 and DenseNet121 architectures to predict the risk of 5 common chest pathologies. The experiments were conducted on three publicly available databases: CheXpert, ChestX-ray14, and MIMIC-CXR-JPG. The results show that, for each of the 5 pathologies, internal performance exceeded external performance on both models. Moreover, our strategy of exposing the models to a mix of different datasets during the training phase helps to improve model performance on the external dataset.
Efficient resource management in UAVs for Visual Assistance ; There is increased interest in the use of Unmanned Aerial Vehicles (UAVs) for agriculture, military, disaster management and aerial photography around the world. UAVs are scalable, flexible and useful in various environments where direct human intervention is difficult. In general, the use of UAVs with cameras mounted on them has increased due to their wide range of applications in real-life scenarios. With the advent of deep learning models in computer vision, many models have shown great success in visual tasks, but most evaluations of these models are done on high-end CPUs and GPUs. One of the major challenges in using UAVs for visual assistance tasks in real time is managing the memory usage and power consumption of these computationally intensive tasks, which are difficult to perform on the low-end processor board of a UAV. This project describes a novel method to optimize general image processing tasks, such as object tracking and object detection, for UAV hardware in real-time scenarios without affecting the flight time and without compromising the latency and accuracy of these models.
Exploring the Evolution of GANs through Quality Diversity ; Generative adversarial networks (GANs) have achieved notable advances in the field of generative algorithms, presenting high-quality results mainly in the context of images. However, GANs are hard to train, and several aspects of the model have to be designed by hand beforehand to ensure training success. In this context, evolutionary algorithms such as COEGAN were proposed to solve the challenges in GAN training. Nevertheless, a lack of diversity and premature optimization can be found in some of these solutions. We propose in this paper the application of a quality-diversity algorithm to the evolution of GANs. The solution is based on the Novelty Search with Local Competition (NSLC) algorithm, adapting the concepts used in COEGAN to this new proposal. We compare our proposal with the original COEGAN model and with an alternative version using a global competition approach. The experimental results show that our proposal increases the diversity of the discovered solutions and improves the performance of the models found by the algorithm. Furthermore, the global competition approach was able to consistently find better models for GANs.
Multitask Non-Autoregressive Model for Human Motion Prediction ; Human motion prediction, which aims at predicting future human skeletons given past ones, is a typical sequence-to-sequence problem. Therefore, extensive efforts have been made to explore different RNN-based encoder-decoder architectures. However, by generating target poses conditioned on previously generated ones, these models are prone to issues such as error accumulation. In this paper, we argue that this issue is mainly caused by adopting an autoregressive manner. Hence, a novel Non-auToregressive model (NAT) is proposed with a complete non-autoregressive decoding scheme, as well as a context encoder and a positional encoding module. More specifically, the context encoder embeds the given poses from temporal and spatial perspectives. The frame decoder is responsible for predicting each future pose independently. The positional encoding module injects a positional signal into the model to indicate temporal order. Moreover, a multitask training paradigm is presented for both low-level human skeleton prediction and high-level human action recognition, resulting in a convincing improvement on the prediction task. Our approach is evaluated on the Human3.6M and CMU-Mocap benchmarks and outperforms state-of-the-art autoregressive methods.
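Since the frame decoder predicts every future pose independently, the positional encoding is what tells each decoder slot which time step it is producing. A standard sinusoidal encoding in the style of the original Transformer is sketched below; the paper's module may differ in detail, and the sequence length and model width here are arbitrary.

```python
import numpy as np

def positional_encoding(n_pos, d_model):
    """Sinusoidal position signal: pe[t, 2i] = sin(t / 10000^(2i/d)) and
    pe[t, 2i+1] = cos(t / 10000^(2i/d)); each row uniquely encodes a time step."""
    pos = np.arange(n_pos)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((n_pos, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])   # even dimensions
    pe[:, 1::2] = np.cos(angle[:, 1::2])   # odd dimensions
    return pe

pe = positional_encoding(50, 64)   # e.g., one row per future frame to predict
```

Because the rows are distinct, a decoder that sees all slots in parallel can still order its outputs in time, which is what makes fully non-autoregressive decoding workable.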
Embedded Encoder-Decoder in Convolutional Networks Towards Explainable AI ; Understanding the intermediate layers of a deep learning model and discovering the driving features of stimuli have attracted much interest recently. Explainable artificial intelligence (XAI) provides a new way to open the AI black box and make decisions transparent and interpretable. This paper proposes a new explainable convolutional neural network (XCNN), which represents important and driving visual features of stimuli in an end-to-end model architecture. This network employs encoder-decoder neural networks in a CNN architecture to represent regions of interest in an image based on its category. The proposed model is trained without localization labels and generates a heatmap as part of the network architecture, without extra post-processing steps. The experimental results on the CIFAR-10, Tiny ImageNet, and MNIST datasets showed the success of our XCNN algorithm in making CNNs explainable. Based on visual assessment, the proposed model outperforms current algorithms in class-specific feature representation and interpretable heatmap generation, while providing a simple and flexible network architecture. The initial success of this approach warrants further study to enhance weakly supervised localization and semantic segmentation in explainable frameworks.
Stationarity and ergodic properties for some observation-driven models in random environments ; The first motivation of this paper is to study stationarity and ergodic properties for a general class of time series models defined conditionally on an exogenous covariate process. The dynamics of these models are given by an autoregressive latent process which forms a Markov chain in random environments. Contrary to existing contributions in the field of Markov chains in random environments, the state space is not discrete, and we do not use small-set type assumptions or uniform contraction conditions for the random Markov kernels. Our assumptions are quite general and allow us to deal with models that are not fully contractive, such as threshold autoregressive processes. Using a coupling approach, we study the existence of a limit, in the Wasserstein metric, for the backward iterations of the chain. We also derive ergodic properties for the corresponding skew-product Markov chain. Our results are illustrated with many examples of autoregressive processes widely used in statistics or in econometrics, including GARCH-type processes, count autoregressions and categorical time series.
Small free field inflation in higher curvature gravity ; Within General Relativity, a minimally coupled scalar field governed by a quadratic potential is able to produce an accelerated expansion of the universe, provided its value and excursion are larger than the Planck scale. This is an archetypal example of the so-called large-field inflation models. We show that by including higher curvature corrections to the gravitational action in the form of the Geometric Inflation models, it is possible to obtain accelerated expansion with a free scalar field whose values are well below the Planck scale, thereby turning a traditional large-field model into a small-field one. We provide the conditions the theory has to satisfy in order for this mechanism to operate, and we present two explicit models illustrating it. Finally, we present some open questions raised by this scenario, in which inflation takes place entirely in a higher-curvature-dominated regime, such as those concerning the study of perturbations.
Overcomplete order-3 tensor decomposition, blind deconvolution and Gaussian mixture models ; We propose a new algorithm for tensor decomposition, based on Jennrich's algorithm, and apply our new algorithmic ideas to blind deconvolution and Gaussian mixture models. Our first contribution is a simple and efficient algorithm to decompose certain symmetric overcomplete order-3 tensors, that is, three-dimensional arrays of the form $T = \sum_{i=1}^n a_i \otimes a_i \otimes a_i$ where the $a_i$'s are not linearly independent. Our algorithm comes with a detailed robustness analysis. Our second contribution builds on top of our tensor decomposition algorithm to expand the family of Gaussian mixture models whose parameters can be estimated efficiently. These ideas are also presented in a more general framework of blind deconvolution that makes them applicable to mixture models of identical but very general distributions, including all centrally symmetric distributions with finite 6th moment.
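For context, the classical Jennrich procedure that the paper builds on fits in a few lines of NumPy for the undercomplete case (linearly independent $a_i$); the overcomplete extension is the paper's contribution and is not reproduced here. Dimensions and the random seed below are arbitrary.

```python
import numpy as np

# Classical Jennrich's algorithm for T = sum_i a_i (x) a_i (x) a_i with linearly
# independent components: two random contractions of T share the a_i as eigenvectors.
rng = np.random.default_rng(1)
d, n = 6, 3
A = rng.standard_normal((d, n))              # ground-truth components (full column rank)
T = np.einsum('ir,jr,kr->ijk', A, A, A)      # symmetric order-3 tensor

x = rng.standard_normal(d)
y = rng.standard_normal(d)
Mx = np.einsum('ijk,k->ij', T, x)            # Mx = A diag(A^T x) A^T
My = np.einsum('ijk,k->ij', T, y)            # My = A diag(A^T y) A^T

# the columns of A are eigenvectors of Mx My^+ with eigenvalues (a_i.x)/(a_i.y)
vals, vecs = np.linalg.eig(Mx @ np.linalg.pinv(My))
top = np.argsort(-np.abs(vals))[:n]          # keep the n nonzero eigenvalues
est = np.real(vecs[:, top])                  # estimated directions, one per column
```

Each recovered column matches some $a_i$ up to scaling; when the $a_i$ are linearly dependent, as in the overcomplete setting of the abstract, this eigendecomposition argument breaks down, which is exactly the gap the paper addresses.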
Weakly Supervised Instance Segmentation by Learning Annotation Consistent Instances ; Recent approaches for weakly supervised instance segmentation depend on two components: (i) a pseudo label generation model that provides instances which are consistent with a given annotation; and (ii) an instance segmentation model, which is trained in a supervised manner using the pseudo labels as groundtruth. Unlike previous approaches, we explicitly model the uncertainty in the pseudo label generation process using a conditional distribution. The samples drawn from our conditional distribution provide accurate pseudo labels due to the use of semantic class aware unary terms, boundary aware pairwise smoothness terms, and annotation aware higher order terms. Furthermore, we represent the instance segmentation model as an annotation agnostic prediction distribution. In contrast to previous methods, our representation allows us to define a joint probabilistic learning objective that minimizes the dissimilarity between the two distributions. Our approach achieves state of the art results on the PASCAL VOC 2012 data set, outperforming the best baseline by 4.2 mAP0.5 and 4.8 mAP0.75.
Claiming trend in toxicological and pharmacological doseresponse studies an overview on statistical methods and related RSoftware ; There are very different statistical methods for demonstrating a trend in pharmacological experiments. Here, the focus is on sparse models with only one parameter to be estimated and interpreted: the increase in the regression model and the difference to control in the contrast model. This provides both pvalues and confidence intervals for an appropriate effect size. A combined test consisting of the Tukey regression approach and the multiple contrast test according to Williams is recommended, which can be generalized to the generalized linear mixed effect model. Thus numerous variable types occurring in pharmacologytoxicology can be adequately evaluated. Software is available through CRAN packages. The most significant limitation of this approach arises for designs with very small sample sizes, which occur often in pharmacologytoxicology.
On matrix models and their qdeformations ; Motivated by the BPSCFT correspondence, we explore the similarities between the classical betadeformed Hermitean matrix model and the qdeformed matrix models associated to 3d $\mathcal{N}=2$ supersymmetric gauge theories on $D^2 \times_q S^1$ and $S_b^3$ by matching parameters of the theories. The novel results that we obtain are the correlators for the models, together with an additional result in the classical case consisting of the $W$-algebra representation of the generating function. Furthermore, we also obtain surprisingly simple expressions for the expectation values of characters which generalize previously known results.
Accounting for Unobserved Confounding in Domain Generalization ; This paper investigates the problem of learning robust, generalizable prediction models from a combination of multiple datasets and qualitative assumptions about the underlying datagenerating model. Part of the challenge of learning robust models lies in the influence of unobserved confounders that void many of the invariances and principles of minimum error presently used for this problem. Our approach is to define a different invariance property of causal solutions in the presence of unobserved confounders which, through a relaxation of this invariance, can be connected with an explicit distributionally robust optimization problem over a set of affine combinations of data distributions. Concretely, our objective takes the form of a standard loss, plus a regularization term that encourages partial equality of error derivatives with respect to model parameters. We demonstrate the empirical performance of our approach on healthcare data from different modalities, including image, speech and tabular data.
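The abstract describes an objective of the form "standard loss plus a regularizer encouraging partial equality of error derivatives". The paper's exact regularizer is not given here; a minimal stand-in for a linear model penalizes the squared distance between per-environment gradients (the function names, the MSE choice, and the pairwise form are our assumptions):

```python
import numpy as np

def env_gradient(w, X, y):
    # Gradient of the mean squared error of a linear model on one environment.
    return 2 * X.T @ (X @ w - y) / len(y)

def penalized_loss(w, envs, lam):
    """Average MSE over environments plus lam times the summed squared
    pairwise distances between per-environment gradients.  The penalty
    is zero exactly when all environments agree on the error derivative
    at w, a crude proxy for 'equality of error derivatives'."""
    losses = [np.mean((X @ w - y) ** 2) for X, y in envs]
    grads = [env_gradient(w, X, y) for X, y in envs]
    penalty = sum(np.sum((g - h) ** 2)
                  for i, g in enumerate(grads) for h in grads[i + 1:])
    return np.mean(losses) + lam * penalty
```

When the environments are identical the penalty vanishes for any lam, so the objective reduces to the ordinary pooled loss.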
Creating a Largescale Synthetic Dataset for Human Activity Recognition ; Creating and labelling datasets of videos for use in training Human Activity Recognition models is an arduous task. In this paper, we approach this by using 3D rendering tools to generate a synthetic dataset of videos, and show that a classifier trained on these videos can generalise to real videos. We use five different augmentation techniques to generate the videos, leading to a wide variety of accurately labelled unique videos. We fine tune a pretrained I3D model on our videos, and find that the model is able to achieve a high accuracy of 73% on the HMDB51 dataset over three classes. We also find that augmenting the HMDB training set with our dataset provides a 2% improvement in the performance of the classifier. Finally, we discuss possible extensions to the dataset, including virtual try on and modeling motion of the people.
DeepClone Modeling Clones to Generate Code Predictions ; Programmers often reuse code from source code repositories to reduce the development effort. Code clones are candidates for reuse in exploratory or rapid development, as they represent often repeated functionality in software systems. To facilitate code clone reuse, we propose DeepClone, a novel approach utilizing a deep learning algorithm for modeling code clones to predict the next set of tokens (possibly a complete clone method body) based on the code written so far. The predicted tokens require minimal customization to fit the context. DeepClone applies natural language processing techniques to learn from a large code corpus, and generates code tokens using the model learned. We have quantitatively evaluated our solution to assess (1) our model's quality and its accuracy in token prediction, and (2) its performance and effectiveness in clone method prediction. We also discuss various application scenarios for our approach.
ShapeCD ChangePoint Detection in TimeSeries Data with Shapes and Neurons ; Changepoint detection in a time series aims to discover the time points at which some unknown underlying physical process that generates the timeseries data has changed. We found that existing approaches become less accurate when the underlying process is complex and generates large varieties of patterns in the time series. To address this shortcoming, we propose ShapeCD, a simple, fast, and accurate change point detection method. ShapeCD uses shapebased features to model the patterns and a conditional neural field to model the temporal correlations among the time regions. We evaluated the performance of ShapeCD using four highly dynamic timeseries datasets, including the ExtraSensory dataset with up to 2000 classes. ShapeCD demonstrated improved accuracy (7-60% higher in AUC) and faster computational speed compared to existing approaches. Furthermore, the ShapeCD model consists of only hundreds of parameters and requires less data to train than other deep supervised learning models.
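The abstract does not specify ShapeCD's shape features or its conditional neural field; a toy illustration of the general idea of shape-based change scoring compares hand-picked features (mean, spread, slope) of the windows on either side of each time point. The feature set and window scheme here are our assumptions, not the paper's:

```python
import numpy as np

def change_scores(x, w):
    """Score each time point of a 1-D series x by the distance between
    simple shape features of the w-length windows before and after it.
    Large scores flag candidate change points."""
    def feats(seg):
        t = np.arange(len(seg))
        slope = np.polyfit(t, seg, 1)[0]  # crude local trend
        return np.array([seg.mean(), seg.std(), slope])

    scores = np.zeros(len(x))
    for i in range(w, len(x) - w):
        scores[i] = np.linalg.norm(feats(x[i - w:i]) - feats(x[i:i + w]))
    return scores
```

On a series with a level shift, the score peaks near the shift; ShapeCD's contribution is replacing such hand-picked features and thresholding with learned shape features and a neural temporal model.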
Dynamic Relational Inference in MultiAgent Trajectories ; Inferring interactions from multiagent trajectories has broad applications in physics, vision and robotics. Neural relational inference NRI is a deep generative model that can reason about relations in complex dynamics without supervision. In this paper, we take a careful look at this approach for relational inference in multiagent trajectories. First, we discover that NRI can be fundamentally limited without sufficient longterm observations. Its ability to accurately infer interactions degrades drastically for short output sequences. Next, we consider a more general setting of relational inference when interactions are changing over time. We propose an extension of NRI, which we call the DYnamic multiAgent Relational Inference (DYARI) model, that can reason about dynamic relations. We conduct exhaustive experiments to study the effect of model architecture, underlying dynamics and training scheme on the performance of dynamic relational inference using a simulated physics system. We also showcase the usage of our model on realworld multiagent basketball trajectories.
Modified gravity with disappearing cosmological constant ; New corrections to General Relativity are considered in the context of modified $f(R)$ gravity that satisfy cosmological and local gravity constraints. The proposed models behave asymptotically as $R - 2\Lambda$ at large curvature and show the vanishing of the cosmological constant in the flat spacetime limit. The chameleon mechanism and thin shell restrictions for local systems were analyzed, and bounds on the models were found. The steepness of the deviation parameter $m$ at late times leads to a measurable signal of the scalartensor regime in matter perturbations, which allows one to detect departures from the $\Lambda$CDM model. The theoretical results for the evolution of the weighted growth rate $f\sigma_8(z)$ from the proposed models were analyzed.
Runaway potentials in warm inflation satisfying the swampland conjectures ; The runaway potentials, which do not possess any critical points, are viable potentials which befit the recently proposed de Sitter swampland conjecture very well. In this work, we embed such potentials in the warm inflation scenario motivated by quantum field theory models generating a dissipation coefficient with a dependence cubic in the temperature. It is demonstrated that such models are able to remain in tune with the present observations and they can also satisfy all three Swampland conjectures, namely the Swampland Distance conjecture, the de Sitter conjecture and the Transplanckian Censorship Conjecture, simultaneously. These features make such models viable from the point of view of effective field theory models in quantum gravity and string theory, away from the Swampland.
DIETERpy a Python framework for The Dispatch and Investment Evaluation Tool with Endogenous Renewables ; DIETER is an opensource power sector model designed to analyze future settings with very high shares of variable renewable energy sources. It minimizes overall system costs, including fixed and variable costs of various generation, flexibility and sector coupling options. Here we introduce DIETERpy, which builds on the existing model version, written in the General Algebraic Modeling System GAMS, and enhances it with a Python framework. This combines the flexibility of Python regarding pre and postprocessing of data with a straightforward algebraic formulation in GAMS and the use of efficient solvers. DIETERpy also offers a browserbased graphical user interface. The new framework is designed to be easily accessible as it enables users to run the model, alter its configuration, and define numerous scenarios without a deeper knowledge of GAMS. Code, data, and manuals are available in public repositories under permissive licenses for transparency and reproducibility.