AGQA: A Benchmark for Compositional Spatio-Temporal Reasoning ; Visual events are a composition of temporal actions involving actors spatially interacting with objects. When developing computer vision models that can reason about compositional spatio-temporal events, we need benchmarks that can analyze progress and uncover shortcomings. Existing video question answering benchmarks are useful, but they often conflate multiple sources of error into one accuracy metric and have strong biases that models can exploit, making it difficult to pinpoint model weaknesses. We present Action Genome Question Answering (AGQA), a new benchmark for compositional spatio-temporal reasoning. AGQA contains 192M unbalanced question-answer pairs for 9.6K videos. We also provide a balanced subset of 3.9M question-answer pairs, 3 orders of magnitude larger than existing benchmarks, that minimizes bias by balancing the answer distributions and types of question structures. Although human evaluators marked 86.02% of our question-answer pairs as correct, the best model achieves only 47.74% accuracy. In addition, AGQA introduces multiple training/test splits to test for various reasoning abilities, including generalization to novel compositions, to indirect references, and to more compositional steps. Using AGQA, we evaluate modern visual reasoning systems, demonstrating that the best models barely perform better than non-visual baselines exploiting linguistic biases and that none of the existing models generalize to novel compositions unseen during training.
Bayesian estimation of dynamic weights in Gaussian mixture models ; This paper proposes a generalization of Gaussian mixture models, where the mixture weight is allowed to behave as an unknown function of time. This model is capable of successfully capturing the features of the data, as demonstrated by simulated and real datasets. It can be useful in studies such as clustering, change-point detection and process control. In order to estimate the mixture weight function, we propose two new Bayesian nonlinear dynamic approaches for polynomial models, which can be extended to other problems involving polynomial nonlinear dynamic models. One of the methods, called here component-wise Metropolis-Hastings, applies the Metropolis-Hastings algorithm to each local level component of the state equation. It is more general and can be used in any situation where the observation and state equations are nonlinearly connected. The other method tends to be faster, but is applied specifically to binary data using the probit link function. The performance of these estimation methods, in the context of the proposed dynamic Gaussian mixture model, is evaluated through simulated datasets. Also, an application to an array Comparative Genomic Hybridization (aCGH) dataset from glioblastoma cancer illustrates our proposal, highlighting the ability of the method to detect chromosome aberrations.
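A minimal sketch of the component-wise Metropolis-Hastings idea (a toy two-component mixture with a logistic transform of a local-level state; all names and model choices here are illustrative, not the authors' code):

import numpy as np

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def componentwise_mh_sweep(y, alpha, sigma_eta=0.1, step=0.2, rng=None):
    # One sweep over the local-level states alpha_1..alpha_T; the dynamic
    # mixture weight is w_t = logistic(alpha_t) for a two-component mixture.
    if rng is None:
        rng = np.random.default_rng()
    T = len(y)

    def log_post(a, t):
        w = 1.0 / (1.0 + np.exp(-a))                     # weight of component 1
        lik = np.log(w * norm_pdf(y[t], 0.0, 1.0)
                     + (1.0 - w) * norm_pdf(y[t], 3.0, 1.0))
        prior = 0.0                                      # random-walk state prior
        if t > 0:
            prior -= 0.5 * (a - alpha[t - 1]) ** 2 / sigma_eta ** 2
        if t < T - 1:
            prior -= 0.5 * (alpha[t + 1] - a) ** 2 / sigma_eta ** 2
        return lik + prior

    for t in range(T):                                   # component-wise updates
        prop = alpha[t] + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop, t) - log_post(alpha[t], t):
            alpha[t] = prop
    return alpha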
Fast Regression of the Tritium Breeding Ratio in Fusion Reactors ; The tritium breeding ratio (TBR) is an essential quantity for the design of modern and next-generation D-T fueled nuclear fusion reactors. Representing the ratio between tritium fuel generated in breeding blankets and fuel consumed during reactor runtime, the TBR depends on reactor geometry and material properties in a complex manner. In this work, we explored the training of surrogate models to produce a cheap but high-quality approximation for a Monte Carlo TBR model in use at the UK Atomic Energy Authority. We investigated possibilities for dimensional reduction of its feature space, reviewed 9 families of surrogate models for potential applicability, and performed hyperparameter optimisation. Here we present the performance and scaling properties of these models, the fastest of which, an artificial neural network, demonstrated $R^2 = 0.985$ and a mean prediction time of $0.898~\mu\mathrm{s}$, representing a relative speedup of $8 \cdot 10^{6}$ with respect to the expensive MC model. We further present a novel adaptive sampling algorithm, Quality-Adaptive Surrogate Sampling, capable of interfacing with any of the individually studied surrogates. Our preliminary testing on a toy TBR theory has demonstrated the efficacy of this algorithm for accelerating the surrogate modelling process.
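The surrogate-training workflow can be illustrated in a few lines (placeholder random data stands in for the MC model's inputs and outputs; the real feature set and architecture differ):

import time
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# X: reactor geometry/material features, y: TBR from the expensive MC model.
X, y = np.random.rand(5000, 12), np.random.rand(5000)      # placeholders only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
surrogate.fit(X_tr, y_tr)

t0 = time.perf_counter()
pred = surrogate.predict(X_te)
mean_us = 1e6 * (time.perf_counter() - t0) / len(X_te)
print(f"R2 = {r2_score(y_te, pred):.3f}, mean prediction time = {mean_us:.3f} us")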
A Mechanical Model for Magnetized Relativistic Blastwaves ; The evolution of a relativistic blastwave is usually delineated under the assumption of pressure balance between forward- and reverse-shocked regions. However, such a treatment usually violates the energy conservation law, and is inconsistent with existing MHD numerical simulation results. A mechanical model of non-magnetized blastwaves was proposed in previous work to solve the problem. In this paper, we generalize the mechanical model to the case of a blastwave driven by an ejecta with an arbitrary magnetization parameter $\sigma_{\rm ej}$. We test our modified mechanical model by considering a long-lasting magnetized ejecta and find that it is much better than the pressure-balance treatment in terms of energy conservation. For a constant central engine wind luminosity $L_{\rm ej} = 10^{47}~{\rm erg\,s^{-1}}$ and $\sigma_{\rm ej} = 10$, the deviation from energy conservation is negligibly small at small radii, and reaches only less than 25% even at $10^{19}~{\rm cm}$ from the central engine. For a finite lifetime of the central engine, the reverse shock crosses the magnetized ejecta earlier for an ejecta with a higher $\sigma_{\rm ej}$, which is consistent with previous analytical and numerical results. In general, the mechanical model is more precise than the traditional analytical models, with results closer to those of numerical simulations.
Domain adaptation based self-correction model for COVID-19 infection segmentation in CT images ; The capability of generalization to unseen domains is crucial for deep learning models when considering real-world scenarios. However, currently available medical image datasets, such as those for COVID-19 CT images, have large variations of infections and domain shift problems. To address this issue, we propose a prior knowledge driven domain adaptation and a dual-domain enhanced self-correction learning scheme. Based on the novel learning schemes, a domain adaptation based self-correction model (DASC-Net) is proposed for COVID-19 infection segmentation on CT images. DASC-Net consists of a novel attention and feature domain enhanced domain adaptation model (AFD-DA) to solve the domain shifts and a self-correction learning process to refine segmentation results. The innovations in AFD-DA include an image-level activation feature extractor with attention to lung abnormalities and a multi-level discrimination module for hierarchical feature domain alignment. The proposed self-correction learning process adaptively aggregates the learned model and corresponding pseudo labels for the propagation of aligned source and target domain information, to alleviate the overfitting to noise caused by pseudo labels. Extensive experiments over three publicly available COVID-19 CT datasets demonstrate that DASC-Net consistently outperforms state-of-the-art segmentation, domain shift, and coronavirus infection segmentation methods. Ablation analysis further shows the effectiveness of the major components in our model. DASC-Net enriches the theory of domain adaptation and self-correction learning in medical imaging and can be generalized to multi-site COVID-19 infection segmentation on CT images for clinical deployment.
All Tokens Matter: Token Labeling for Training Better Vision Transformers ; In this paper, we present token labeling, a new training objective for training high-performance vision transformers (ViTs). Different from the standard training objective of ViTs that computes the classification loss on an additional trainable class token, our proposed one takes advantage of all the image patch tokens to compute the training loss in a dense manner. Specifically, token labeling reformulates the image classification problem into multiple token-level recognition problems and assigns each patch token an individual location-specific supervision signal generated by a machine annotator. Experiments show that token labeling can clearly and consistently improve the performance of various ViT models across a wide spectrum. For a vision transformer with 26M learnable parameters serving as an example, with token labeling, the model can achieve 84.4% Top-1 accuracy on ImageNet. The result can be further increased to 86.4% by slightly scaling the model size up to 150M, delivering the minimal-sized model among previous models (250M+) reaching 86%. We also show that token labeling can clearly improve the generalization capability of the pretrained models on downstream tasks with dense prediction, such as semantic segmentation. Our code and all the training details will be made publicly available at https://github.com/zihangJiang/TokenLabeling.
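A hedged sketch of the two-part objective (tensor names and the beta weighting are ours; the paper's exact loss details may differ):

import torch
import torch.nn.functional as F

def token_labeling_loss(cls_logits, patch_logits, target, token_targets, beta=0.5):
    # cls_logits:    (B, C)     logits from the class token
    # patch_logits:  (B, N, C)  logits for each of the N patch tokens
    # target:        (B,)       image-level labels
    # token_targets: (B, N, C)  location-specific soft labels from a machine annotator
    cls_loss = F.cross_entropy(cls_logits, target)
    log_probs = F.log_softmax(patch_logits, dim=-1)
    dense_loss = -(token_targets * log_probs).sum(dim=-1).mean()
    return cls_loss + beta * dense_loss                  # dense token-level term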
Restoring degraded speech via a modified diffusion model ; There are many deterministic mathematical operations (e.g. compression, clipping, downsampling) that degrade speech quality considerably. In this paper we introduce a neural network architecture, based on a modification of the DiffWave model, that aims to restore the original speech signal. DiffWave, a recently published diffusion-based vocoder, has shown state-of-the-art synthesized speech quality and relatively shorter waveform generation times, with only a small set of parameters. We replace the mel-spectrum upsampler in DiffWave with a deep CNN upsampler, which is trained to alter the degraded speech mel-spectrum to match that of the original speech. The model is trained using the original speech waveform, but conditioned on the degraded speech mel-spectrum. Post-training, only the degraded mel-spectrum is used as input and the model generates an estimate of the original speech. Our model improves speech quality (with the original DiffWave model as baseline) in several different experiments. These include improving the quality of speech degraded by LPC-10 compression, AMR-NB compression, and signal clipping. Compared to the original DiffWave architecture, our scheme achieves better performance on several objective perceptual metrics and in subjective comparisons. Improvements over baseline are further amplified in an out-of-corpus evaluation setting.
Zero-inflated generalized extreme value regression model for binary data and application in health study ; The logistic regression model is widely used in many studies to investigate the relationship between a binary response variable $Y$ and a set of potential predictors $\mathbf{X}$. The binary response may represent, for example, the occurrence of some outcome of interest ($Y=1$ if the outcome occurred and $Y=0$ otherwise). When the dependent variable $Y$ represents a rare event, the logistic regression model shows relevant drawbacks. In order to overcome these drawbacks, we propose the Generalized Extreme Value (GEV) regression model. In particular, we suggest the quantile function of the GEV distribution as the link function, so our attention is focused on the tail of the response curve for values close to one. A sample of observations is said to contain a cure fraction when a proportion of the study subjects (the so-called cured individuals, as opposed to the susceptibles) cannot experience the outcome of interest. One problem arising then is that it is usually unknown who are the cured and the susceptible subjects, unless the outcome of interest has been observed. In these settings, a logistic regression analysis of the relationship between $\mathbf{X}$ and $Y$ among the susceptibles is no longer straightforward. We develop a maximum likelihood estimation procedure for this problem, based on the joint modeling of the binary response of interest and the cure status. We investigate the identifiability of the resulting model. Then, we conduct a simulation study to investigate its finite-sample behavior, and present an application to real data.
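For reference, one common convention for the GEV-based link (a sketch; the paper's parameterization may differ in sign or notation):

% GEV c.d.f. with shape parameter \tau, used as the response curve:
\[
  P(Y = 1 \mid \mathbf{X} = \mathbf{x})
    = \exp\{-[1 + \tau\, \mathbf{x}^{\top}\boldsymbol{\beta}]_{+}^{-1/\tau}\},
\]
% an asymmetric link that, unlike the logit, concentrates resolution on the
% tail of the response curve near one, which is what makes it attractive
% for rare events.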
Modeling the charging process of a coil by an HTS dynamo-type flux pump ; The high-$T_c$ superconducting (HTS) dynamo exploits the nonlinear resistivity of an HTS tape to generate a DC voltage when subjected to a varying magnetic field. This leads to the so-called flux pumping phenomenon and enables the injection of DC current into a superconducting coil connected to the dynamo without current leads. In this work, the process of charging a coil by an HTS dynamo is examined in detail using two numerical models: the Minimum Electromagnetic Entropy Production model and the segregated $\mathbf{H}$-formulation finite element model. The numerical results are compared with an analytical method for various air gaps and frequencies. Firstly, the I-V curves of the modeled HTS dynamo are calculated to obtain the open-circuit voltage, short-circuit current and internal resistance. Afterward, the process of charging a coil by the dynamo, including the charging current curve and its dynamic behavior, is investigated. The results obtained by the two models show excellent quantitative and qualitative agreement with each other and with the analytical method. Although the general charging process of the coil can be obtained from the I-V curve of the flux pump, the current ripples within a cycle of dynamo rotation, which can cause ripple AC loss in the HTS dynamo, can only be captured via the presented models.
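The nonlinear resistivity at the heart of the effect is commonly modelled with the E-J power law (standard convention in HTS modelling, quoted here as background; not necessarily the exact closure used in the paper):

\[
  \mathbf{E} = E_c \left( \frac{|\mathbf{J}|}{J_c} \right)^{n}
               \frac{\mathbf{J}}{|\mathbf{J}|},
  \qquad E_c = 1\,\mu\mathrm{V\,cm^{-1}},
\]
% with n typically of order 20-40 for coated conductors; this strong
% nonlinearity is what rectifies the alternating induced voltage into a
% net DC output.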
Discovery of Nonlinear Dynamical Systems using a Runge-Kutta Inspired Dictionary-based Sparse Regression Approach ; Discovering dynamical models to describe underlying dynamical behavior is essential for drawing decisive conclusions and for engineering studies, e.g., optimizing a process. Although the availability of experimental data has increased significantly, interpretable and explainable models in science and engineering remain elusive. In this work, we blend machine learning and dictionary-based learning with numerical analysis tools to discover governing differential equations from noisy and sparsely-sampled measurement data. We utilize the fact that, given a dictionary containing a large number of candidate nonlinear functions, dynamical models can often be described by a few appropriately chosen candidates. As a result, we obtain interpretable and parsimonious models which tend to generalize better beyond the sampling regime. Additionally, we integrate a numerical integration framework with dictionary learning that yields differential equations without requiring or approximating derivative information at any stage. Hence, it is highly effective for corrupted and sparsely-sampled data. We discuss its extension to governing equations containing rational nonlinearities, which typically appear in biological networks. Moreover, we generalize the method to governing equations that are subject to parameter variations and externally controlled inputs. We demonstrate the efficiency of the method in discovering a number of diverse differential equations using noisy measurements, including a model describing neural dynamics, the chaotic Lorenz model, Michaelis-Menten kinetics, and a parameterized Hopf normal form.
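A compressed sketch of the derivative-free idea (for brevity this uses a trapezoidal collocation, which keeps the fit linear in the coefficients, instead of the paper's Runge-Kutta scheme; the toy dictionary is ours):

import numpy as np

def theta(X):
    # Candidate library [1, x, y, x^2, x*y, y^2] for a 2-D state.
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def discover(X, dt, n_sweeps=10, thresh=0.05):
    # Integral form: x_{k+1} - x_k ~= (dt/2) (Theta(x_k) + Theta(x_{k+1})) Xi,
    # so no derivative estimates from noisy data are needed.
    A = 0.5 * dt * (theta(X[:-1]) + theta(X[1:]))
    b = X[1:] - X[:-1]
    Xi = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_sweeps):                    # sequential thresholding
        small = np.abs(Xi) < thresh
        Xi[small] = 0.0
        for j in range(Xi.shape[1]):             # refit the surviving terms
            keep = ~small[:, j]
            if keep.any():
                Xi[keep, j] = np.linalg.lstsq(A[:, keep], b[:, j], rcond=None)[0]
    return Xi                                    # sparse coefficient matrix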
Gaia EDR3 parallaxes of type I X-ray bursters and their implications on the models of type I X-ray bursts: a generic approach to the Gaia parallax zero-point and its uncertainty ; Light curves of photospheric radius expansion (PRE) bursts, a subset of type I X-ray bursts, have been used as standard candles to estimate the nominal PRE distances for 63% of PRE bursters (hereafter bursters), assuming PRE burst emission is spherically symmetric. Model-independent geometric parallaxes of bursters provide a valuable chance to test models of PRE bursts (PRE models), and can be provided in some cases by Gaia astrometry of the donor stars in bursters. We searched for counterparts to 115 known bursters in the Gaia Early Data Release 3, and confirmed 4 bursters with Gaia counterparts that have detected (${>}3\sigma$, prior to zero-point correction) parallaxes. We describe a generic approach to the Gaia parallax zero point as well as its uncertainty, using an ensemble of Gaia quasars individually determined for each target. Assuming the spherically symmetric PRE model is correct, we refined the resultant nominal PRE distances of three bursters (i.e. Cen X-4, Cyg X-2 and 4U 0919-54), and put constraints on the composition of the nuclear fuel powering their bursts. Finally, we describe a method for testing the correctness of the spherically symmetric PRE model using parallax measurements, and provide preliminary results.
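The nominal PRE distance rests on the peak flux reaching the Eddington limit of a spherically symmetric photosphere (standard relations, quoted for orientation):

\[
  d_{\rm PRE} = \sqrt{\frac{L_{\rm Edd}}{4\pi F_{\rm pk}}},
  \qquad
  L_{\rm Edd} = \frac{4\pi G M c}{\kappa},
  \qquad
  \kappa \simeq 0.2\,(1 + X)\ {\rm cm^2\,g^{-1}},
\]
% so an independent (parallax) distance constrains the hydrogen mass
% fraction X of the burst fuel through the implied Eddington luminosity.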
Thick branes in the scalar-tensor representation of $f(R,T)$ gravity ; Braneworld scenarios consider our observable universe as a brane embedded in a five-dimensional bulk. In this work, we consider thick braneworld systems in the recently proposed dynamically equivalent scalar-tensor representation of $f(R,T)$ gravity, where $R$ is the Ricci scalar and $T$ the trace of the stress-energy tensor. In the general $f(R,T)$ case we consider two different models: a brane model without matter fields, where the geometry is supported solely by the gravitational fields, and a second model where matter is described by a scalar field with a potential. The particular cases for which the function $f(R,T)$ is separable in the forms $F(R)+T$ and $R+G(T)$, which give rise to scalar-tensor representations with a single auxiliary scalar field, are studied separately. The stability of the gravitational sector is investigated and the models are shown to be stable against small perturbations of the metric. Furthermore, we show that in the $f(R,T)$ model in the presence of an extra matter field, the shape of the graviton zero-mode develops internal structure under appropriate choices of the parameters of the model.
Existence of weak solutions to multiphase Cahn-Hilliard-Darcy and Cahn-Hilliard-Brinkman models for stratified tumor growth with chemotaxis and general source terms ; We investigate a multiphase Cahn-Hilliard model for tumor growth with general source terms. The multiphase approach allows us to consider multiple cell types and multiple chemical species (oxygen and/or nutrients) that are consumed by the tumor. Compared to classical two-phase tumor growth models, the multiphase model can be used to describe a stratified tumor exhibiting several layers of tissue (e.g., proliferating, quiescent and necrotic tissue) more precisely. Our model consists of a convective Cahn-Hilliard type equation to describe the tumor evolution, a velocity equation for the associated volume-averaged velocity field, and a convective reaction-diffusion type equation to describe the density of the chemical species. The velocity equation is either represented by Darcy's law or by the Brinkman equation. We first construct a global weak solution of the multiphase Cahn-Hilliard-Brinkman model. After that, we show that such weak solutions of the system converge to a weak solution of the multiphase Cahn-Hilliard-Darcy system as the viscosities tend to zero in some suitable sense. This means that the existence of a global weak solution to the Cahn-Hilliard-Darcy system is also established.
Lightweight Cross-Lingual Sentence Representation Learning ; Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvements in performance on downstream tasks. However, further increases and modifications based on such large-scale models are usually impractical due to memory limitations. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. We explore different training tasks and observe that current cross-lingual training tasks leave a lot to be desired for this shallow architecture. To ameliorate this, we propose a novel cross-lingual language model, which combines the existing single-word masked language model with the newly proposed cross-lingual token-level reconstruction task. We further augment the training task by the introduction of two computationally-lite sentence-level contrastive learning tasks to enhance the alignment of the cross-lingual sentence representation space, which compensates for the learning bottleneck of the lightweight transformer for generative tasks. Our comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the newly proposed training tasks for a shallow model.
A synthetic data integration framework to leverage external summary-level information from heterogeneous populations ; There is a growing need for flexible general frameworks that integrate individual-level data with external summary information for improved statistical inference. External information relevant for a risk prediction model may come in multiple forms, through regression coefficient estimates or predicted values of the outcome variable. Different external models may use different sets of predictors, and the algorithm they used to predict the outcome $Y$ given these predictors may or may not be known. The underlying populations corresponding to each external model may be different from each other and from the internal study population. Motivated by a prostate cancer risk prediction problem where novel biomarkers are measured only in the internal study, this paper proposes an imputation-based methodology where the goal is to fit a target regression model with all available predictors in the internal study while utilizing summary information from external models that may have used only a subset of the predictors. The method allows for heterogeneity of covariate effects across the external populations. The proposed approach generates synthetic outcome data in each external population, and uses a stacked multiple imputation technique to create a long dataset with complete covariate information. The final analysis of the stacked imputed data is conducted by weighted regression. This flexible and unified approach can improve statistical efficiency of the estimated coefficients in the internal study, improve predictions by utilizing even partial information available from models that use a subset of the full set of covariates used in the internal study, and provide statistical inference for the external population with potentially different covariate effects from the internal population.
SynthASR: Unlocking Synthetic Data for Speech Recognition ; End-to-end (E2E) automatic speech recognition (ASR) models have recently demonstrated superior performance over the traditional hybrid ASR models. Training an E2E ASR model requires a large amount of data, which is not only expensive but may also raise dependency on production data. At the same time, synthetic speech generated by state-of-the-art text-to-speech (TTS) engines has advanced to near-human naturalness. In this work, we propose to utilize synthetic speech for ASR training (SynthASR) in applications where data is sparse or hard to get for ASR model training. In addition, we apply continual learning with a novel multi-stage training strategy to address catastrophic forgetting, achieved by a mix of weighted multi-style training, data augmentation, encoder freezing, and parameter regularization. In our experiments conducted on in-house datasets for a new application of recognizing medication names, training ASR RNN-T models with synthetic audio via the proposed multi-stage training improved the recognition performance on the new application by more than 65% relative, without degradation on existing general applications. Our observations show that SynthASR holds great promise in training state-of-the-art large-scale E2E ASR models for new applications while reducing the costs and the dependency on production data.
EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional Text-to-Speech Model ; Recently, there has been an increasing interest in neural speech synthesis. While deep neural networks achieve state-of-the-art results in text-to-speech (TTS) tasks, how to generate more emotional and more expressive speech is becoming a new challenge to researchers due to the scarcity of high-quality emotion speech datasets and the lack of advanced emotional TTS models. In this paper, we first briefly introduce and publicly release a Mandarin emotion speech dataset including 9,724 samples with audio files and their human-labeled emotion annotations. After that, we propose a simple but efficient architecture for emotional speech synthesis called EMSpeech. Unlike those models which need additional reference audio as input, our model can predict emotion labels just from the input text and generate more expressive speech conditioned on the emotion embedding. In the experiment phase, we first validate the effectiveness of our dataset with an emotion classification task. Then we train our model on the proposed dataset and conduct a series of subjective evaluations. Finally, by showing comparable performance in the emotional speech synthesis task, we successfully demonstrate the ability of the proposed model.
Generic Models for Disk-Resolved and Disk-Integrated Phase-Dependent Linear Polarization of Light Reflected from Exoplanets ; Similar to the case of solar system planets, reflected starlight from exoplanets is expected to be polarized due to atmospheric scattering, and the net disk-integrated polarization should be nonzero owing to the asymmetrical illumination of the planetary disk. The computation of the disk-integrated reflected flux and its state of polarization involves techniques for the calculation of the local reflection matrices as well as the numerical recipes for integration over the planetary disks. In this paper, we present a novel approach to calculate the azimuth-dependent reflected intensity vectors at each location on the planetary disk divided into grids. We achieve this by solving the vector radiative transfer equations that describe linear polarization. Our calculations incorporate self-consistent atmospheric models of exoplanets over a wide range of equilibrium temperature, surface gravity, atmospheric composition, and cloud structure. A comparison of the flux and the amount of polarization calculated by considering both single and multiple scattering exhibits the effect of depolarization due to multiple scattering of light, depending on the scattering albedo of the atmosphere. We have benchmarked our basic calculations against some of the existing models. We have also presented our models for the hot Jupiter HD 189733 b, indicating the level of precision required by future observations to detect the polarization of this planet in the optical and near-infrared wavelength regions. The generic nature and the accuracy offered by our models make them an effective tool for modeling future observations of polarized light reflected from exoplanets.
Unsupervised Speech Enhancement using Dynamical Variational Auto-Encoders ; Dynamical variational auto-encoders (DVAEs) are a class of deep generative models with latent variables, dedicated to modeling time series of high-dimensional data. DVAEs can be considered extensions of the variational autoencoder (VAE) that include temporal dependencies between successive observed and/or latent vectors. Previous work has shown the interest of using DVAEs over the VAE for speech spectrogram modeling. Independently, the VAE has been successfully applied to speech enhancement in noise, in an unsupervised noise-agnostic setup that requires neither noise samples nor noisy speech samples at training time, but only clean speech signals. In this paper, we extend these works to DVAE-based single-channel unsupervised speech enhancement, hence exploiting both the unsupervised representation learning and the dynamics modeling of speech signals. We propose an unsupervised speech enhancement algorithm that combines a DVAE speech prior pre-trained on clean speech signals with a noise model based on nonnegative matrix factorization, and we derive a variational expectation-maximization (VEM) algorithm to perform speech enhancement. The algorithm is presented with the most general DVAE formulation and is then applied with three specific DVAE models to illustrate the versatility of the framework. Experimental results show that the proposed DVAE-based approach outperforms its VAE-based counterpart, as well as several supervised and unsupervised noise-dependent baselines, especially when the noise type is unseen during training.
Polarization in Geometric Opinion Dynamics ; In light of increasing recent attention to political polarization, understanding how polarization can arise poses an important theoretical question. While more classical models of opinion dynamics seem poorly equipped to study this phenomenon, a recent novel approach by Hązła, Jin, Mossel, and Ramnarayan (HJMR) proposes a simple geometric model of opinion evolution that provably exhibits strong polarization in specialized cases. Moreover, polarization arises quite organically in their model: in each time step, each agent updates opinions according to their correlation/response with an issue drawn at random. However, their techniques do not seem to extend beyond a set of special cases they identify, which benefit from fragile symmetry or contractiveness assumptions, leaving open how general this phenomenon really is. In this paper, we further the study of polarization in related geometric models. We show that the exact form of polarization in such models is quite nuanced: even when strong polarization does not hold, it is possible for weaker notions of polarization to nonetheless hold. We provide a concrete example where weak polarization holds, but strong polarization provably fails. However, we show that strong polarization provably holds in many variants of the HJMR model, which are also robust to a wider array of distributions of random issues; this indicates that the form of polarization introduced by HJMR is more universal than suggested by their special cases. We also show that the weaker notions connect more readily to the theory of Markov chains on general state spaces.
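A toy simulation in the spirit of the HJMR-style update (our own parameterization; the theoretical model is more general): each agent keeps a unit opinion vector and moves toward or away from a random issue according to the sign of its correlation with it.

import numpy as np

def simulate(n_agents=100, dim=5, steps=2000, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_agents, dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)       # opinions on the sphere
    for _ in range(steps):
        issue = rng.standard_normal(dim)
        issue /= np.linalg.norm(issue)
        sign = np.sign(W @ issue)                        # agree/disagree response
        W += eta * sign[:, None] * issue                 # move toward own side
        W /= np.linalg.norm(W, axis=1, keepdims=True)    # project back
    return W

W = simulate()
# Crude polarization diagnostic: pairwise cosine similarities pile up near
# +1/-1 once the population splits into two near-antipodal camps.
cos = (W @ W.T)[np.triu_indices(len(W), 1)]
print(np.histogram(cos, bins=5, range=(-1, 1))[0])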
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization ; State-of-the-art models in natural language processing rely on separate rigid subword tokenization algorithms, which limit their generalization ability and adaptation to new settings. In this paper, we propose a new model inductive bias that learns a subword tokenization end-to-end as part of the model. To this end, we introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. Concretely, GBST enumerates candidate subword blocks and learns to score them in a position-wise fashion using a block scoring network. We additionally introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level. Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par with, and sometimes outperforming, subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end.
Causal Reinforcement Learning using Observational and Interventional Data ; Learning efficiently a causal model of the environment is a key challenge of model-based RL agents operating in POMDPs. We consider here a scenario where the learning agent has the ability to collect online experiences through direct interactions with the environment (interventional data), but has also access to a large collection of offline experiences, obtained by observing another agent interacting with the environment (observational data). A key ingredient that makes this situation non-trivial is that we allow the observed agent to interact with the environment based on hidden information, which is not observed by the learning agent. We then ask the following questions: can the online and offline experiences be safely combined for learning a causal model? And can we expect the offline experiences to improve the agent's performance? To answer these questions, we import ideas from the well-established causal framework of do-calculus, and we express model-based reinforcement learning as a causal inference problem. Then, we propose a general yet simple methodology for leveraging offline data during learning. In a nutshell, the method relies on learning a latent-based causal transition model that explains both the interventional and observational regimes, and then using the recovered latent variable to infer the standard POMDP transition model via deconfounding. We prove our method is correct and efficient, in the sense that it attains better generalization guarantees thanks to the offline data in the asymptotic case, and we illustrate its effectiveness empirically on synthetic toy problems. Our contribution aims at bridging the gap between the fields of reinforcement learning and causality.
Roof Damage Assessment from Automated 3D Building Models ; 3D building modelling is important in urban planning and related domains that draw upon the content of 3D models of urban scenes. Such 3D models can be used to visualize city images at multiple scales, from individual buildings to entire cities, prior to and after a change has occurred. This ability is of great importance in day-to-day work and special projects undertaken by planners, geodesigners, and architects. In this research, we implemented a novel approach to 3D building modelling for such purposes, which included the integration of geographic information systems (GIS) and 3D Computer Graphics (3DCG) components that generate 3D house models from building footprints (polygons), and the automated generation of simple and complex roof geometries for rapid roof area damage reporting. These polygons (footprints) are usually orthogonal. A complicated orthogonal polygon can be partitioned into a set of rectangles. The proposed GIS and 3DCG integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. Since technicians draw these polygons manually with digitizers, based on aerial photos, not all building polygons are precisely orthogonal. When placing a set of boxes as building bodies to create the buildings, there may be gaps or overlaps between these boxes if the building polygons are not precisely orthogonal. In our proposal, after approximately orthogonal building polygons are partitioned and rectified into a set of mutually orthogonal rectangles, each rectangle records which rectangles are adjacent to it and along which of its edges, which avoids unwanted intersections of windows and doors when the building bodies are combined.
Style Curriculum Learning for Robust Medical Image Segmentation ; The performance of deep segmentation models often degrades due to distribution shifts in image intensities between the training and test data sets. This is particularly pronounced in multi-centre studies involving data acquired using multi-vendor scanners, with variations in acquisition protocols. It is challenging to address this degradation because the shift is often not known a priori and hence difficult to model. We propose a novel framework to ensure robust segmentation in the presence of such distribution shifts. Our contribution is three-fold. First, inspired by the spirit of curriculum learning, we design a novel style curriculum to train the segmentation models in an easy-to-hard mode. A style transfer model with style fusion is employed to generate the curriculum samples. Gradually focusing on complex and adversarial style samples can significantly boost the robustness of the models. Second, instead of subjectively defining the curriculum complexity, we adopt an automated gradient manipulation method to control the hard and adversarial sample generation process. Third, we propose the Local Gradient Sign strategy to aggregate the gradient locally and stabilise training during gradient manipulation. The proposed framework can generalise to unknown distributions without using any target data. Extensive experiments on the public M&Ms Challenge dataset demonstrate that our proposed framework can generalise deep models well to unknown distributions and achieve significant improvements in segmentation accuracy.
Parameter uncertainty quantification in an idealized GCM with a seasonal cycle ; Climate models are generally calibrated manually by comparing selected climate statistics, such as the global top-of-atmosphere energy balance, to observations. The manual tuning only targets a limited subset of observational data and parameters. Bayesian calibration can estimate climate model parameters and their uncertainty using a larger fraction of the available data and automatically exploring the parameter space more broadly. In Bayesian learning, it is natural to exploit the seasonal cycle, which has large amplitude, compared with anthropogenic climate change, in many climate statistics. In this study, we develop methods for the calibration and uncertainty quantification (UQ) of model parameters exploiting the seasonal cycle, and we demonstrate a proof-of-concept with an idealized general circulation model (GCM). Uncertainty quantification is performed using the calibrate-emulate-sample approach, which combines stochastic optimization and machine learning emulation to speed up Bayesian learning. The methods are demonstrated in a perfect-model setting through the calibration and UQ of a convective parameterization in an idealized GCM with a seasonal cycle. Calibration and UQ based on seasonally averaged climate statistics, compared to annually averaged, reduces the calibration error by up to an order of magnitude and narrows the spread of the posterior distributions by factors between two and five, depending on the variables used for UQ. The reduction in the size of the parameter posterior distributions leads to a reduction in the uncertainty of climate model predictions.
An Empirical Study on the Usage of Transformer Models for Code Completion ; Code completion aims at speeding up code writing by predicting the next code tokens the developer is likely to write. Works in this field focused on improving the accuracy of the generated predictions, with substantial leaps forward made possible by deep learning (DL) models. However, code completion techniques are mostly evaluated in the scenario of predicting the next token to type, with few exceptions pushing the boundaries to the prediction of an entire code statement. Thus, little is known about the performance of state-of-the-art code completion approaches in more challenging scenarios in which, for example, an entire code block must be generated. We present a large-scale study exploring the capabilities of state-of-the-art Transformer-based models in supporting code completion at different granularity levels, including single tokens, one or multiple entire statements, up to entire code blocks (e.g., the iterated block of a for loop). We experimented with several variants of two recently proposed Transformer-based models, namely RoBERTa and the Text-To-Text Transfer Transformer (T5), for the task of code completion. The achieved results show that Transformer-based models, and in particular the T5, represent a viable solution for code completion, with perfect predictions ranging from ~29%, obtained when asking the model to guess entire blocks, up to ~69%, reached in the simpler scenario of a few tokens masked from the same code statement.
Deterministic Logarithmic Completeness in the Distributed Sleeping Model ; We provide a deterministic scheme for solving any decidable problem in the distributed sleeping model. The sleeping model is a generalization of the standard message-passing model, with an additional capability of network nodes to enter a sleeping state occasionally. As long as a vertex is in the awake state, it is similar to the standard message-passing setting. However, when a vertex is asleep it cannot receive or send messages in the network, nor can it perform internal computations. On the other hand, sleeping rounds do not count towards awake complexity. Awake complexity is the main complexity measure in this setting, which is the number of awake rounds a vertex spends during an execution. In this paper we devise algorithms with worst-case guarantees on the awake complexity. We devise a deterministic scheme with awake complexity of $O(\log n)$ for solving any decidable problem in this model by constructing a structure we call a Distributed Layered Tree. This structure turns out to be very powerful in the sleeping model, since it allows one to collect the entire graph information within a constant number of awake rounds. Moreover, we prove that our general technique cannot be improved in this model, by showing that the construction of distributed layered trees itself requires $\Omega(\log n)$ awake rounds. Another result we obtain in this work is a deterministic scheme for solving any problem from a class of problems, denoted O-LOCAL, in $O(\log \Delta + \log^* n)$ awake rounds. This class contains various well-studied problems, such as MIS and $(\Delta+1)$-vertex-coloring.
Renormalization group improvement of the effective potential in a (1+1) dimensional Gross-Neveu model ; In this work, we investigate the consequences of the Renormalization Group Equation (RGE) in the determination of the effective potential and the study of Dynamical Symmetry Breaking (DSB) in a Gross-Neveu (GN) model with $N$ fermion fields in (1+1) dimensional spacetime, which can be applied as a model to describe certain properties of polyacetylene. The classical Lagrangian of the model is scale invariant, but radiative corrections to the effective potential can lead to dimensional transmutation, whereby a dimensionless parameter (the coupling constant) of the classical Lagrangian is exchanged for a dimensionful one, a dynamically generated mass for the fermion fields. For the model we are considering, perturbative calculations of the effective potential and renormalization group functions up to three loops are available, but we use the RGE and the leading-logs approximation to calculate an improved effective potential, including contributions up to six-loop order. We then perform a systematic study of the general aspects of DSB in the GN model with finite $N$, comparing the results we obtain with the ones derived from the original unimproved effective potential we started with.
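Schematically, the improvement exploits the scale independence of the effective potential (standard form of the RGE; the GN-specific beta and gamma functions are in the paper):

\[
  \left[ \mu \frac{\partial}{\partial \mu}
       + \beta(g) \frac{\partial}{\partial g}
       - \gamma(g)\, \phi \frac{\partial}{\partial \phi} \right]
  V_{\rm eff}(\phi; g, \mu) = 0.
\]
% Solving this equation order by order in L = \ln(\phi^2/\mu^2) with the
% known renormalization group functions resums the leading logarithms to
% all orders, which is how three-loop input yields contributions up to
% six-loop order in the improved potential.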
A Contract Theory based Incentive Mechanism for Federated Learning ; Federated learning (FL) serves as a data privacy-preserving machine learning paradigm, and realizes collaborative model training by distributed clients. To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients. It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients. This paper proposes a contract theory based FL task training model towards minimizing the incentive budget subject to clients being individually rational (IR) and incentive compatible (IC) in each FL training round. We design a two-dimensional contract model by formally defining two private types of clients, namely data quality and computation effort. To effectively aggregate the trained models, a contract-based aggregator is proposed. We analyze the feasible and optimal contract solutions to the proposed contract model. Experimental results demonstrate that the proposed incentive mechanism, with contract-based aggregation applied, can effectively improve the generalization accuracy of FL tasks.
Learning of Visual Relations: The Devil is in the Tails ; Significant effort has been recently devoted to modeling visual relations. This has mostly addressed the design of architectures, typically by adding parameters and increasing model complexity. However, visual relation learning is a long-tailed problem, due to the combinatorial nature of joint reasoning about groups of objects. Increasing model complexity is, in general, ill-suited for long-tailed problems due to their tendency to overfit. In this paper, we explore an alternative hypothesis, denoted the Devil is in the Tails. Under this hypothesis, better performance is achieved by keeping the model simple but improving its ability to cope with long-tailed distributions. To test this hypothesis, we devise a new approach for training visual relationship models, which is inspired by state-of-the-art long-tailed recognition literature. This is based on an iterative decoupled training scheme, denoted Decoupled Training for Devil in the Tails (DT2). DT2 employs a novel sampling approach, Alternating Class-Balanced Sampling (ACBS), to capture the interplay between the long-tailed entity and predicate distributions of visual relations. Results show that, with an extremely simple architecture, DT2-ACBS significantly outperforms much more complex state-of-the-art methods on scene graph generation tasks. This suggests that the development of sophisticated models must be considered in tandem with the long-tailed nature of the problem.
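The alternating sampler can be sketched as follows (an illustrative reimplementation, not the authors' code; labels are assumed to be integer class ids):

import numpy as np

def class_balanced_indices(labels, rng):
    # One epoch of indices in which every class is drawn roughly equally.
    classes, counts = np.unique(labels, return_counts=True)
    weights = 1.0 / counts[np.searchsorted(classes, labels)]
    weights /= weights.sum()
    return rng.choice(len(labels), size=len(labels), p=weights)

def acbs_epochs(entity_labels, predicate_labels, n_epochs, seed=0):
    # Alternating Class-Balanced Sampling: even epochs balance over entity
    # classes, odd epochs over predicate classes, so both long-tailed
    # distributions get equalized exposure during training.
    rng = np.random.default_rng(seed)
    for epoch in range(n_epochs):
        labels = entity_labels if epoch % 2 == 0 else predicate_labels
        yield class_balanced_indices(labels, rng)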
Self-Calibrating Neural Radiance Fields ; In this work, we propose a camera self-calibration algorithm for generic cameras with arbitrary nonlinear distortions. We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects. Our camera model consists of a pinhole model, a fourth-order radial distortion, and a generic noise model that can learn arbitrary nonlinear camera distortions. While traditional self-calibration algorithms mostly rely on geometric constraints, we additionally incorporate photometric consistency. This requires learning the geometry of the scene, and we use Neural Radiance Fields (NeRF). We also propose a new geometric loss function, viz., the projected ray distance loss, to incorporate geometric consistency for complex nonlinear camera models. We validate our approach on standard real image datasets and demonstrate that our model can learn the camera intrinsics and extrinsics (pose) from scratch without COLMAP initialization. Also, we show that learning accurate camera models in a differentiable manner allows us to improve PSNR over baselines. Our module is an easy-to-use plugin that can be applied to NeRF variants to improve performance. The code and data are currently available at https://github.com/POSTECH-CVLab/SCNeRF.
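For orientation, the geometric core of such a camera model looks roughly like this (pinhole plus fourth-order radial distortion; the paper's learned generic noise terms are omitted here):

import numpy as np

def project(points_cam, fx, fy, cx, cy, k1, k2):
    # points_cam: (N, 3) points in camera coordinates.
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x**2 + y**2
    d = 1.0 + k1 * r2 + k2 * r2**2        # fourth-order radial distortion
    u = fx * d * x + cx
    v = fy * d * y + cy
    return np.stack([u, v], axis=-1)      # pixel coordinates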
Multi-Agent Inverse Reinforcement Learning: Suboptimal Demonstrations and Alternative Solution Concepts ; Multi-agent inverse reinforcement learning (MIRL) can be used to learn reward functions from agents in social environments. To model realistic social dynamics, MIRL methods must account for suboptimal human reasoning and behavior. Traditional formalisms of game theory provide computationally tractable behavioral models, but assume agents have unrealistic cognitive capabilities. This research identifies and compares mechanisms in MIRL methods which (a) handle noise, biases and heuristics in agent decision making and (b) model realistic equilibrium solution concepts. MIRL research is systematically reviewed to identify solutions for these challenges. The methods and results of these studies are analyzed and compared based on factors including performance accuracy, efficiency, and descriptive quality. We found that the primary methods for handling noise, biases and heuristics in MIRL were extensions of Maximum Entropy (MaxEnt) IRL to multi-agent settings. We also found that many successful solution concepts are generalizations of the traditional Nash Equilibrium (NE). These solutions include the correlated equilibrium, logistic stochastic best response equilibrium and entropy-regularized mean field NE. Methods which use recursive reasoning or updating also perform well, including the feedback NE and archive multi-agent adversarial IRL. Success in modeling specific biases and heuristics in single-agent IRL, and promising results using a Theory of Mind approach in MIRL, imply that modeling specific biases and heuristics may be useful. Flexibility and unbiased inference in the identified alternative solution concepts suggest that a solution concept which has both recursive and generalized characteristics may perform well at modeling realistic social interactions.
Frugal $U(1)_X$ models with non-minimal flavor violation for $b \to s \ell \ell$ anomalies and neutrino mixing ; We analyze the class of models with an extra $U(1)_X$ gauge symmetry that can account for the $b \to s \ell \ell$ anomalies by modifying the Wilson coefficients $C_9^e$ and $C_9^\mu$ from their standard model values. At the same time, these models generate appropriate quark mixing, and give rise to neutrino mixing via the Type-I seesaw mechanism. Apart from the gauge boson $Z'$, these frugal models only have three right-handed neutrinos for the seesaw mechanism, an additional $SU(2)_L$ scalar doublet for quark mixing, and an SM-singlet scalar that breaks the $U(1)_X$ symmetry. This setup identifies a class of leptonic symmetries, and necessitates nonzero but equal charges for the first two quark generations. If the quark mixing beyond the standard model were CKM-like, all these symmetries would be ruled out by the latest flavor constraints on Wilson coefficients and collider constraints on $Z'$ parameters. However, we identify a single-parameter source of non-minimal flavor violation that allows a wider class of $U(1)_X$ symmetries to be compatible with all data. We show that the viable leptonic symmetries have to be of the form $L_e \pm 3(L_\mu - L_\tau)$ or $L_e - 3(L_\mu + L_\tau)$, and determine the $(M_{Z'}, g_{Z'})$ parameter space that may be probed by the high-luminosity data at the LHC.
Modelling spin-up episodes in accreting millisecond X-ray pulsars ; Accreting millisecond X-ray pulsars are known to provide a wealth of physical information during their successive states of outburst and quiescence. Based on the observed spin-up and spin-down rates of these objects it is possible, among other things, to infer the stellar magnetic field strength and test models of accretion disc flow. In this paper we consider the three accreting X-ray pulsars XTE J1751-305, IGR J00291+5934, and SAX J1808.4-3658 with the best available timing data, and model their observed spin-up rates with the help of a collection of standard torque models that describe a magnetically-threaded accretion disc truncated at the magnetospheric radius. Whilst none of these models are able to explain the observational data, we find that the inclusion of the physically motivated phenomenological parameter $\xi$, which controls the uncertainty in the location of the magnetospheric radius, leads to an enhanced disc-integrated accretion torque. These 'new' torque models are compatible with the observed spin-up rates as well as the inferred magnetic fields of these objects provided that $\xi \approx 0.1-0.5$. Our results are supplemented with a discussion of the relevance of additional physics effects that include the presence of a multipolar magnetic field and general-relativistic gravity.
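For context, the parameter $\xi$ enters through the truncation radius of the disc (standard definitions; the paper's torque models add further ingredients):

\[
  r_{\rm m} = \xi\, r_{\rm A},
  \qquad
  r_{\rm A} = \left( \frac{\mu^4}{2 G M \dot{M}^2} \right)^{1/7},
  \qquad
  N_{\rm acc} \sim \dot{M} \sqrt{G M r_{\rm m}},
\]
% where \mu is the stellar magnetic dipole moment and \dot{M} the accretion
% rate; shifting r_m via \xi therefore rescales the accretion torque and the
% magnetic field inferred from a given spin-up rate.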
General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings ; Large pretrained language models (PreLMs) are revolutionizing natural language processing across all benchmarks. However, their sheer size is prohibitive for small laboratories or for deployment on mobile devices. Approaches like pruning and distillation reduce the model size but typically retain the same model architecture. In contrast, we explore distilling PreLMs into a different, more efficient architecture, Continual Multiplication of Words (CMOW), which embeds each word as a matrix and uses matrix multiplication to encode sequences. We extend the CMOW architecture and its CMOW/CBOW-Hybrid variant with a bidirectional component for more expressive power, per-token representations for a general task-agnostic distillation during pretraining, and a two-sequence encoding scheme that facilitates downstream tasks on sentence pairs, such as sentence similarity and natural language inference. Our matrix-based bidirectional CMOW/CBOW-Hybrid model is competitive to DistilBERT on question similarity and recognizing textual entailment, but uses only half of the number of parameters and is three times faster in terms of inference speed. We match or exceed the scores of ELMo for all tasks of the GLUE benchmark except for the sentiment analysis task SST-2 and the linguistic acceptability task CoLA. However, compared to previous cross-architecture distillation approaches, we demonstrate a doubling of the scores on detecting linguistic acceptability. This shows that matrix-based embeddings can be used to distill large PreLMs into competitive models and motivates further research in this direction.
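The core CMOW idea fits in a few lines (a toy NumPy reimplementation, not the released model): each word is embedded as a d x d matrix and a sequence is encoded by the ordered matrix product, which, unlike a bag of words, is order sensitive.

import numpy as np

rng = np.random.default_rng(0)
d, vocab = 8, 1000
# Near-identity initialization keeps long products numerically stable.
E = np.eye(d)[None] + 0.1 * rng.standard_normal((vocab, d, d))

def encode(token_ids):
    h = np.eye(d)
    for t in token_ids:
        h = h @ E[t]            # continual multiplication of word matrices
    return h.reshape(-1)        # flattened matrix as the sequence embedding

print(np.allclose(encode([1, 2]), encode([2, 1])))   # False: order matters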
Confronting observations of VHE gamma-ray blazar flares with reconnection models ; Several models have been suggested to explain the fast gamma-ray variability observed in blazars, but its origin is still debated. One scenario is magnetic reconnection, a process that can efficiently convert magnetic energy to energy of relativistic particles accelerated in the reconnection layer. In our study, we compare results from state-of-the-art particle-in-cell simulations with observations of blazars at Very High Energy (VHE, E > 100 GeV) gamma-rays. Our goal is to test our model predictions on fast gamma-ray variability with data and to constrain the parameter space of the model, such as the magnetic field strength of the unreconnected plasma and the reconnection layer orientation in the blazar jet. For this first comparison, we used the remarkably well-sampled VHE gamma-ray light curve of Mrk 421 observed with the MAGIC and VERITAS telescopes in 2013. The simulated VHE light curves were generated using the observable parameters of Mrk 421, such as the jet power, bulk Lorentz factor, and the jet viewing angle, and sampled as real data. Our results pave the way for future model-to-data comparison with next-generation Cherenkov telescopes, which will help further constrain the different variability models.
Bifurcation analysis of the predator-prey model with the Allee effect in the predator ; The use of predator-prey models in theoretical ecology has a long history, and the model equations have largely evolved since the original Lotka-Volterra system towards more realistic descriptions of the processes of predation, reproduction and mortality. One important aspect is the recognition of the fact that the growth of a population can be subject to an Allee effect, where the per capita growth rate increases with the population density. Including an Allee effect has been shown to fundamentally change predator-prey dynamics and strongly impact species persistence, but previous studies mostly focused on scenarios of an Allee effect in the prey population. Here we explore a predator-prey model with an ecologically important case of the Allee effect in the predator population, where it occurs in the numerical response of the predator without affecting its functional response. Biologically, this can result from various scenarios such as a lack of mating partners, sperm limitation and cooperative breeding mechanisms, among others. Unlike previous studies, we consider here a generic mathematical formulation of the Allee effect, without specifying a concrete parameterisation of the functional form, and analyse the possible local bifurcations in the system. Further, we explore the global bifurcation structure of the model and its possible dynamical regimes for three different concrete parameterisations of the Allee effect. The model possesses a complex bifurcation structure: there can be multiple coexistence states, including two stable limit cycles. Inclusion of the Allee effect in the predator generally has a destabilising effect on the coexistence equilibrium. We also show that regardless of the parameterisation of the Allee effect, enrichment of the environment will eventually result in extinction of the predator population.
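One concrete parameterisation of this setup, for illustration only (the paper deliberately works with a generic formulation; the parameter values here are arbitrary):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, r=1.0, K=5.0, a=1.0, h=0.5, e=0.5, m=0.3, theta=0.4):
    N, P = z
    f = a * N / (1.0 + a * h * N)       # Holling type II functional response
    allee = P / (P + theta)             # Allee factor acting on the predator
    dN = r * N * (1.0 - N / K) - f * P  # prey: functional response unchanged
    dP = e * f * allee * P - m * P      # predator: numerical response scaled
    return [dN, dP]

sol = solve_ivp(rhs, (0.0, 500.0), [2.0, 1.0], rtol=1e-8)
print(sol.y[:, -1])   # long-run state; increasing K probes the enrichment effect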
MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization ; One of the most challenging aspects of current single-document news summarization is that the summary often contains 'extrinsic hallucinations', i.e., facts that are not present in the source document, which are often derived via world knowledge. This causes summarization systems to act more like open-ended language models, tending to hallucinate facts that are erroneous. In this paper, we mitigate this problem with the help of multiple supplementary resource documents assisting the task. We present a new dataset, MiRANews, and benchmark existing summarization models. In contrast to multi-document summarization, which addresses multiple events from several source documents, we still aim at generating a summary for a single document. We show via data analysis that it's not only the models which are to blame: more than 27% of facts mentioned in the gold summaries of MiRANews are better grounded in the assisting documents than in the main source articles. An error analysis of generated summaries from pretrained models fine-tuned on MiRANews reveals that this has an even bigger effect on models: assisted summarization reduces 55% of hallucinations when compared to single-document summarization models trained on the main article only. Our code and data are available at https://github.com/XinnuoXu/MiRANews.
On physics-informed data-driven isotropic and anisotropic constitutive models through probabilistic machine learning and space-filling sampling ; Data-driven constitutive modeling is an emerging field in computational solid mechanics with the prospect of significantly relieving the computational costs of hierarchical computational methods. Traditionally, these surrogates have been trained using datasets that map strain inputs to stress outputs directly. Data-driven constitutive models for elastic and inelastic materials have commonly been developed based on artificial neural networks (ANNs), which recently enabled the incorporation of physical laws in the construction of these models. However, ANNs do not offer convergence guarantees and are reliant on user-specified parameters. In contrast to ANNs, Gaussian process regression (GPR) is based on nonparametric modeling principles as well as on fundamental statistical knowledge and hence allows for strict convergence guarantees. GPR, however, has the major disadvantage that it scales poorly as datasets get large. In this work we present a physics-informed data-driven constitutive modeling approach for isotropic and anisotropic materials based on probabilistic machine learning that can be used in the big-data context. The trained GPR surrogates are able to respect physical principles such as material frame indifference, material symmetry, thermodynamic consistency, a stress-free undeformed configuration, and the local balance of angular momentum. Furthermore, this paper presents the first sampling approach that directly generates space-filling points in the invariant space corresponding to a bounded domain of the deformation gradient tensor. Overall, the presented approach is tested on synthetic data from isotropic and anisotropic constitutive laws and shows surprising accuracy even far beyond the limits of the training domain, indicating that the resulting surrogates can efficiently generalize as they incorporate knowledge about the underlying physics.
AraT5: Text-to-Text Transformers for Arabic Language Generation ; Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects: Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pretrain three powerful Arabic T5-style models and evaluate them on ARGEN. Although pretrained with 49% less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: https://github.com/UBC-NLP/araT5.
The Block-Correlated Pseudo Marginal Sampler for State Space Models ; Particle Marginal Metropolis-Hastings (PMMH) is a general approach to Bayesian inference when the likelihood is intractable but can be estimated unbiasedly. Our article develops an efficient PMMH method that scales up better to higher-dimensional state vectors than previous approaches. The improvement is achieved by the following innovations. First, the trimmed mean of the unbiased likelihood estimates of the multiple particle filters is used. Second, a novel block version of PMMH that works with multiple particle filters is proposed. Third, the article develops an efficient auxiliary disturbance particle filter, which is necessary when the bootstrap disturbance filter is inefficient but the state transition density cannot be expressed in closed form. Fourth, a novel sorting algorithm, which is as effective as previous approaches but significantly faster than them, is developed to preserve the correlation between the logs of the likelihood estimates at the current and proposed parameter values. The performance of the sampler is investigated empirically by applying it to non-linear Dynamic Stochastic General Equilibrium models with relatively high state dimensions and with intractable state transition densities, and to multivariate stochastic volatility in the mean models. Although our focus is on applying the method to state space models, the approach will be useful in a wide range of applications such as large panel data models and stochastic differential equation models with mixed effects.
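As a concrete illustration of the first innovation, the sketch below folds a trimmed mean of unbiased likelihood estimates from several independent particle filters into one Metropolis-Hastings step. `run_particle_filter` and `propose` are hypothetical user-supplied callables, and the blocking, correlated-proposal, and sorting machinery of the article is deliberately omitted:

```python
import numpy as np
from scipy.stats import trim_mean

def trimmed_loglik(run_particle_filter, theta, n_filters=20, trim=0.1, rng=None):
    """Combine unbiased likelihood estimates from several independent
    particle filters with a trimmed mean (computed stably via a log-shift).
    `run_particle_filter(theta, rng)` is a hypothetical user-supplied
    function returning one log-likelihood estimate."""
    rng = rng or np.random.default_rng()
    logliks = np.array([run_particle_filter(theta, rng) for _ in range(n_filters)])
    shift = logliks.max()
    return shift + np.log(trim_mean(np.exp(logliks - shift), trim))

def pmmh_step(theta, loglik, log_prior, propose, run_particle_filter, rng):
    """One Metropolis-Hastings step driven by the trimmed-mean estimator
    (symmetric proposal assumed)."""
    theta_new = propose(theta, rng)
    loglik_new = trimmed_loglik(run_particle_filter, theta_new, rng=rng)
    log_alpha = loglik_new + log_prior(theta_new) - loglik - log_prior(theta)
    if np.log(rng.uniform()) < log_alpha:
        return theta_new, loglik_new
    return theta, loglik
```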
Towards Learning (Dis)Similarity of Source Code from Program Contrasts ; Understanding the functional (dis)similarity of source code is significant for code modeling tasks such as software vulnerability and code clone detection. We present DISCO (DISsimilarity of COde), a novel self-supervised model focusing on identifying (dis)similar functionalities of source code. Different from existing works, our approach does not require a huge amount of randomly collected datasets. Rather, we design structure-guided code transformation algorithms to generate synthetic code clones and inject real-world security bugs, augmenting the collected datasets in a targeted way. We propose to pretrain the Transformer model with such automatically generated program contrasts to better identify similar code in the wild and differentiate vulnerable programs from benign ones. To better capture the structural features of source code, we propose a new cloze objective to encode the local tree-based context (e.g., parent or sibling nodes). We pretrain our model with a much smaller dataset, the size of which is only 5% of the state-of-the-art models' training datasets, to illustrate the effectiveness of our data augmentation and the pretraining approach. The evaluation shows that, even with much less data, DISCO can still outperform the state-of-the-art models in vulnerability and code clone detection tasks.
Self-explaining Neural Network with Concept-based Explanations for ICU Mortality Prediction ; Complex deep learning models show high prediction performance in various clinical prediction tasks, but their inherent complexity makes it more challenging to explain model predictions for clinicians and healthcare providers. Existing research on the explainability of deep learning models in healthcare has two major limitations: using post-hoc explanations and using raw clinical variables as units of explanation, both of which are often difficult for human interpretation. In this work, we designed a self-explaining deep learning framework using expert-knowledge-driven clinical concepts or intermediate features as units of explanation. The self-explaining nature of our proposed model comes from generating both explanations and predictions within the same architectural framework via joint training. We tested our proposed approach on a publicly available Electronic Health Records (EHR) dataset for predicting patient mortality in the ICU. In order to analyze the performance-interpretability trade-off, we compared our proposed model with a baseline having the same setup but without the explanation components. Experimental results suggest that adding explainability components to a deep learning framework does not impact prediction performance, and the explanations generated by the model can provide insights to clinicians to understand the possible reasons behind patient mortality.
Vector-quantized Image Modeling with Improved VQGAN ; Pretraining language models with next-token prediction on massive text corpora has delivered phenomenal zero-shot, few-shot, transfer learning and multitasking capabilities on both generative and discriminative language tasks. Motivated by this success, we explore a Vector-quantized Image Modeling (VIM) approach that involves pretraining a Transformer to predict rasterized image tokens autoregressively. The discrete image tokens are encoded from a learned Vision-Transformer-based VQGAN (ViT-VQGAN). We first propose multiple improvements over vanilla VQGAN, from architecture to codebook learning, yielding better efficiency and reconstruction fidelity. The improved ViT-VQGAN further improves vector-quantized image modeling tasks, including unconditional and class-conditioned image generation and unsupervised representation learning. When trained on ImageNet at 256x256 resolution, we achieve an Inception Score (IS) of 175.1 and a Fréchet Inception Distance (FID) of 4.17, a dramatic improvement over the vanilla VQGAN, which obtains 70.6 and 17.04 for IS and FID, respectively. Based on ViT-VQGAN and unsupervised pretraining, we further evaluate the pretrained Transformer by averaging intermediate features, similar to Image GPT (iGPT). This ImageNet-pretrained VIM-L significantly beats iGPT-L on linear-probe accuracy, from 60.3% to 73.2%, for a similar model size. VIM-L also outperforms iGPT-XL, which is trained with extra web image data and a larger model size.
SuperShaper: Task-Agnostic Super Pretraining of BERT Models with Variable Hidden Dimensions ; Task-agnostic pretraining followed by task-specific fine-tuning is a default approach to train NLU models. Such models need to be deployed on devices across the cloud and the edge with varying resource and accuracy constraints. For a given task, repeating pretraining and fine-tuning across tens of devices is prohibitively expensive. We propose SuperShaper, a task-agnostic pretraining approach which simultaneously pretrains a large number of Transformer models by varying shapes, i.e., by varying the hidden dimensions across layers. This is enabled by a backbone network with linear bottleneck matrices around each Transformer layer which are sliced to generate differently shaped sub-networks. In spite of its simple design space and efficient implementation, SuperShaper discovers networks that effectively trade off accuracy and model size: discovered networks are more accurate than a range of hand-crafted and automatically searched networks on GLUE benchmarks. Further, we find two critical advantages of shape as a design variable for Neural Architecture Search (NAS): (a) heuristics of good shapes can be derived, and networks found with these heuristics match and even improve on carefully searched networks across a range of parameter counts; and (b) the latency of networks across multiple CPUs and GPUs is insensitive to the shape, which thus enables device-agnostic search. In summary, SuperShaper radically simplifies NAS for language models and discovers networks that generalize across tasks, parameter constraints, and devices.
Towards Streaming Egocentric Action Anticipation ; Egocentric action anticipation is the task of predicting the future actions a camera wearer will likely perform based on past video observations. While in a real-world system it is fundamental to output such predictions before the action begins, past works have not generally paid attention to model runtime during evaluation. Indeed, current evaluation schemes assume that predictions can be made offline, and hence that computational resources are not limited. In contrast, in this paper, we propose a streaming egocentric action anticipation evaluation protocol which explicitly considers model runtime for performance assessment, assuming that predictions will be available only after the current video segment is processed, which depends on the processing time of a method. Following the proposed evaluation scheme, we benchmark different state-of-the-art approaches for egocentric action anticipation on two popular datasets. Our analysis shows that models with a smaller runtime tend to outperform heavier models in the considered streaming scenario, thus changing the rankings generally observed in standard offline evaluations. Based on this observation, we propose a lightweight action anticipation model consisting of a simple feed-forward 3D CNN, which we propose to optimize using knowledge distillation techniques and a custom loss. The results show that the proposed approach outperforms prior art in the streaming scenario, also in combination with other lightweight models.
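A toy version of such a runtime-aware protocol is easy to write down: a prediction computed on a segment ending at time t only becomes available at t plus the model's runtime, and is scored against the next action starting after that moment. All names below are illustrative, not the paper's actual benchmark interface:

```python
def streaming_accuracy(seg_end_times, runtimes, preds, action_starts, action_labels):
    """Toy streaming protocol: the prediction computed on the segment ending
    at time t becomes available only at t + runtime and is scored against
    the next action starting *after* that availability time."""
    correct = total = 0
    for t_end, rt, pred in zip(seg_end_times, runtimes, preds):
        available_at = t_end + rt
        upcoming = [(s, l) for s, l in zip(action_starts, action_labels)
                    if s > available_at]
        if not upcoming:
            continue  # nothing left to anticipate
        _, next_label = min(upcoming, key=lambda x: x[0])
        total += 1
        correct += int(pred == next_label)
    return correct / max(total, 1)
```

Under this scoring, a slow model effectively anticipates with a shorter horizon (or misses actions entirely), which is exactly why lightweight models can overtake heavier ones in the streaming ranking.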
On the Security Risks of AutoML ; Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization. Yet, little is known about the potential security risks incurred by NAS, which is concerning given the increasing use of NAS-generated models in critical domains. This work represents a solid initial step towards bridging the gap. Through an extensive empirical study of 10 popular NAS methods, we show that, compared with their manually designed counterparts, NAS-generated models tend to suffer greater vulnerability to various malicious attacks (e.g., adversarial evasion, model poisoning, and functionality stealing). Further, with both empirical and analytical evidence, we provide possible explanations for such phenomena: given the prohibitive search space and training cost, most NAS methods favor models that converge fast at early training stages; this preference results in architectural properties associated with attack vulnerability (e.g., high loss smoothness and low gradient variance). Our findings not only reveal the relationships between model characteristics and attack vulnerability but also suggest the inherent connections underlying different attacks. Finally, we discuss potential remedies to mitigate such drawbacks, including increasing cell depth and suppressing skip connections, which lead to several promising research directions.
On Model Selection Consistency of Lasso for High-Dimensional Ising Models ; We theoretically analyze the model selection consistency of the least absolute shrinkage and selection operator (Lasso), both with and without post-thresholding, for high-dimensional Ising models. For random regular (RR) graphs of size p with regular node degree d and uniform couplings θ0, it is rigorously proved that Lasso without post-thresholding is model selection consistent in the whole paramagnetic phase with the same order of sample complexity n = Ω(d³ log p) as that of ℓ1-regularized logistic regression (ℓ1-LogR). This result is consistent with the conjecture in Meng, Obuchi, and Kabashima (2021) using the non-rigorous replica method from statistical physics and thus complements it with a rigorous proof. For general tree-like graphs, it is demonstrated that the same result as for RR graphs can be obtained under mild assumptions on the dependency condition and incoherence condition. Moreover, we provide a rigorous proof of the model selection consistency of Lasso with post-thresholding for general tree-like graphs in the paramagnetic phase without further assumptions on the dependency and incoherence conditions. Experimental results agree well with our theoretical analysis.
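The estimator being analyzed can be sketched as plain neighborhood regression: each spin is regressed on all others with a linear Lasso, and the support of the coefficients (optionally post-thresholded) defines the estimated edge set. A minimal version using scikit-learn, with illustrative parameter names:

```python
import numpy as np
from sklearn.linear_model import Lasso

def ising_neighborhood_lasso(S, lam, tau=None):
    """Estimate the edge set of an Ising model by linear Lasso regression of
    each spin on all others, with optional post-thresholding at tau.
    S is an (n, p) array of +/-1 spin samples; lam and tau are illustrative."""
    n, p = S.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        X = np.delete(S, j, axis=1)
        coef = Lasso(alpha=lam).fit(X, S[:, j]).coef_.copy()
        if tau is not None:
            coef[np.abs(coef) < tau] = 0.0   # post-thresholding step
        nbrs = np.flatnonzero(coef)
        nbrs[nbrs >= j] += 1                 # undo the column deletion
        adj[j, nbrs] = True
    return adj & adj.T                       # AND-symmetrization of neighborhoods
```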
Schrödinger's Tree: On Syntax and Neural Language Models ; In the last half-decade, the field of natural language processing (NLP) has undergone two major transitions: the switch to neural networks as the primary modeling paradigm and the homogenization of the training regime (pretrain, then fine-tune). Amidst this process, language models have emerged as NLP's workhorse, displaying increasingly fluent generation capabilities and proving to be an indispensable means of knowledge transfer downstream. Due to the otherwise opaque, black-box nature of such models, researchers have employed aspects of linguistic theory in order to characterize their behavior. Questions central to syntax, the study of the hierarchical structure of language, have factored heavily into such work, shedding invaluable insights about models' inherent biases and their ability to make human-like generalizations. In this paper, we attempt to take stock of this growing body of literature. In doing so, we observe a lack of clarity across numerous dimensions, which influences the hypotheses that researchers form, as well as the conclusions they draw from their findings. To remedy this, we urge researchers to make careful considerations when investigating coding properties, selecting representations, and evaluating via downstream tasks. Furthermore, we outline the implications of the different types of research questions exhibited in studies on syntax, as well as the inherent pitfalls of aggregate metrics. Ultimately, we hope that our discussion adds nuance to the prospect of studying language models and paves the way for a less monolithic perspective on syntax in this context.
Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias ; Contextual information is a valuable cue for Deep Neural Networks (DNNs) to learn better representations and improve accuracy. However, co-occurrence bias in the training dataset may hamper a DNN model's generalizability to unseen scenarios in the real world. For example, in COCO, many object categories have a much higher co-occurrence with men compared to women, which can bias a DNN's prediction in favor of men. Recent works have focused on task-specific training strategies to handle bias in such scenarios, but fixing the available data is often ignored. In this paper, we propose a novel and more generic solution to address the contextual bias in datasets by selecting a subset of the samples which is fair in terms of co-occurrence with various classes for a protected attribute. We introduce a data repair algorithm using the coefficient of variation, which can curate fair and contextually balanced data for protected classes. This helps in training a fair model irrespective of the task, architecture or training methodology. Our proposed solution is simple, effective, and can even be used in an active learning setting where the data labels are not present or are being generated incrementally. We demonstrate the effectiveness of our algorithm for the task of object detection and multi-label image classification across different datasets. Through a series of experiments, we validate that curating contextually fair data helps make model predictions fair by balancing the true positive rate for the protected class across groups without compromising the model's overall performance.
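A minimal sketch of a coefficient-of-variation-style repair is given below: for each object class we measure the imbalance of co-occurrence counts across protected groups via CV = std/mean, then keep an equally sized random subset per group, which drives the per-class CV to zero. This mirrors the spirit of the described algorithm, not its exact procedure, and all names are illustrative:

```python
import numpy as np
from collections import defaultdict

def curate_fair_subset(samples, rng=None):
    """Toy coefficient-of-variation repair. `samples` is a list of
    (class_label, group) pairs, where `group` is the protected-attribute
    value co-occurring with the class in an image. Returns the indices of a
    subset balanced per class, plus each class's CV before repair."""
    rng = rng or np.random.default_rng(0)
    index = defaultdict(list)
    for i, (cls, grp) in enumerate(samples):
        index[(cls, grp)].append(i)
    kept, cv_before = [], {}
    for cls in {c for c, _ in index}:
        groups = [g for c, g in index if c == cls]
        counts = np.array([len(index[(cls, g)]) for g in groups], dtype=float)
        cv_before[cls] = counts.std() / counts.mean()  # imbalance diagnostic
        cap = int(counts.min())                        # smallest group count
        for g in groups:
            kept += list(rng.choice(index[(cls, g)], size=cap, replace=False))
    return sorted(kept), cv_before
```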
Cosmic Filament Spin from Dark Matter Vortices ; The recent observational evidence for cosmic filament spin on megaparsec scales (Wang et al., Nature Astronomy 5, 839-845, 2021) demands an explanation in the physics of dark matter. Conventional collisionless cold particle dark matter is conjectured to generate cosmic filament spin through tidal torquing, but this explanation requires extrapolating from the quasi-linear regime to the non-linear regime. Meanwhile, no alternative explanation exists in the context of ultralight (e.g., axion) dark matter, and indeed these models would naively predict zero spin for cosmic filaments. In this Letter we study cosmic filament spin in theories of ultralight dark matter, such as ultralight axions, and bosonic and fermionic condensates, such as superfluids and superconductors. These models are distinguished from conventional particle dark matter models by the possibility of dark matter vortices. We take a model-agnostic approach, and demonstrate that a collection of dark vortices can explain the data reported in Wang et al. Modeling a collection of vortices with a simple two-parameter analytic model, corresponding to an averaging of the velocity field, we find an excellent fit to the data. We perform a Markov Chain Monte Carlo analysis and find constraints on the number of vortices, the dark matter mass, and the radius of the inner core region where the vortices are distributed, in order for ultralight dark matter to explain spinning cosmic filaments.
Heteroclinic cycling and extinction in May-Leonard models with demographic stochasticity ; May and Leonard (SIAM J. Appl. Math., 1975) introduced a three-species Lotka-Volterra-type population model that exhibits heteroclinic cycling. Rather than producing a periodic limit cycle, the trajectory takes longer and longer to complete each cycle, passing closer and closer to unstable fixed points in which one population dominates and the others approach zero. Aperiodic heteroclinic dynamics have subsequently been studied in ecological systems (side-blotched lizards; colicinogenic E. coli), in the immune system, in neural information processing models (winnerless competition), and in models of neural central pattern generators. Yet, as May and Leonard observed: "Biologically, the behavior produced by the model is nonsense. Once it is conceded that the variables represent animals, and therefore cannot fall below unity, it is clear that the system will, after a few cycles, converge on some single population, extinguishing the other two." Here, we explore different ways of introducing discrete stochastic dynamics based on May and Leonard's ODE model, with application to ecological population dynamics and to a neuromotor central pattern generator system. We study examples of several quantitatively distinct asymptotic behaviors, including total extinction of all species, extinction to a single species, and persistent cyclic dominance with finite mean cycle length.
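One natural way to introduce demographic stochasticity, which may or may not match the particular discretizations explored in the paper, is a Gillespie simulation of a birth-death analogue of the May-Leonard ODEs; in finite populations, absorption at extinction states then becomes possible. The rate choices below are a standard illustrative discretization, with K setting the population scale:

```python
import numpy as np

def gillespie_may_leonard(n0, alpha, beta, K, t_max, rng=None):
    """Demographic-stochastic analogue of the May-Leonard ODEs via the
    Gillespie algorithm: species i reproduces at rate n_i and dies at rate
    n_i * (n_i + alpha * n_j + beta * n_k) / K."""
    rng = rng or np.random.default_rng()
    n = np.array(n0, dtype=int)
    t, history = 0.0, [(0.0, n.copy())]
    while t < t_max and n.sum() > 0:
        birth = n.astype(float)
        death = np.array([n[i] * (n[i] + alpha * n[(i + 1) % 3]
                                  + beta * n[(i + 2) % 3]) / K
                          for i in range(3)])
        rates = np.concatenate([birth, death])
        total = rates.sum()
        t += rng.exponential(1.0 / total)         # waiting time to next event
        event = rng.choice(6, p=rates / total)    # which reaction fires
        n[event % 3] += 1 if event < 3 else -1
        history.append((t, n.copy()))
    return history  # list of (time, populations) for inspecting cycles
```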
An E-B Gyrokinetic Simulation Model for Kinetic Alfvén Waves in Tokamak Plasmas ; Gyrokinetic particle simulation serves as a powerful tool for the study of transport, nonlinear phenomena, and energetic particle physics in tokamak plasmas. While most gyrokinetic simulations make use of the scalar and vector potentials, a new model (GKEB) has been developed that uses the E and B fields in a general and comprehensive form, and it has been implemented to simulate kinetic Alfvén waves in uniform plasma (Chen et al., Science China Phys. Mech. Astron. 64, 2021). In our work, the Chen et al. GKEB model has been expressed in general tokamak geometry explicitly using specific coordinates. Its reduction to the uniform plasma is verified, and the numerical results show good agreement with the work by Chen et al. The theoretical dispersion relation and numerical results in the local screw pinch model are in excellent agreement. Numerical results show excellent performance in a realistic parameter regime of burning plasmas in terms of high values of β/(M_e k⊥² ρ_i²), which is challenging for traditional methods due to the 'cancellation' problem. As one application, the GKEB model is implemented with kinetic electrons in the local limit. With matched ITPA-TAE parameters, numerical results show the capability of the GKEB in treating the parallel electron Landau damping for realistic tokamak plasma parameters. As another application, the global GKEB model is implemented with the dominant electron contribution to E in the cold electron limit. Its capability in simulating the finite E due to the finite electron mass is demonstrated.
Self-Supervised Class Incremental Learning ; Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework that is sensitive to data labels. When updating them based on new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new. In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time, which discards data labels and the model's classifiers. To comprehensively discuss the difference in performance between supervised and self-supervised methods in CIL, we set up three different class incremental schemes, the Random Class Scheme, Semantic Class Scheme, and Cluster Scheme, to simulate various class incremental learning scenarios. Besides, we propose a Linear Evaluation Protocol (LEP) and a Generalization Evaluation Protocol (GEP) to measure the model's representation classification ability and generalization in CIL. Our experiments on ImageNet-100 and ImageNet show that SSCIL has better anti-forgetting ability and robustness than supervised strategies in CIL. To understand what alleviates catastrophic forgetting in SSCIL, we study the major components of SSCIL and conclude that (1) the composition of different data augmentations improves the quality of the model's representation, and the Grayscale operation reduces the system noise of data augmentation in SSCIL; and (2) the projector, like a buffer, reduces unnecessary parameter updates of the model in SSCIL and increases its robustness. Although the performance of SSCIL is significantly higher than that of supervised methods in CIL, there is still an apparent gap with joint learning. Our exploration gives a baseline of self-supervised class incremental learning on large-scale datasets and contributes some forward strategies for mitigating catastrophic forgetting in CIL.
Building Goal-Oriented Dialogue Systems with Situated Visual Context ; Most popular goal-oriented dialogue agents are capable of understanding the conversational context. However, with the surge of virtual assistants with screens, the next generation of agents is required to also understand screen context in order to provide a proper interactive experience and better understand users' goals. In this paper, we propose a novel multimodal conversational framework where the dialogue agent's next action and its arguments are derived jointly, conditioned both on the conversational and the visual context. Specifically, we propose a new model that can reason over the visual context within a conversation and populate API arguments with visual entities given the user query. Our model can recognize visual features such as color and shape, as well as metadata-based features such as price or star rating associated with a visual entity. In order to train our model, due to a lack of suitable multimodal conversational datasets, we also propose a novel multimodal dialog simulator to generate synthetic data, and we additionally collect realistic user data from MTurk to improve model robustness. The proposed model achieves a reasonable 85% model accuracy without high inference latency. We also demonstrate the proposed approach in a prototypical furniture shopping experience for a multimodal virtual assistant.
Sharpness-aware Quantization for Deep Neural Networks ; Network quantization is a dominant paradigm of model compression. However, the abrupt changes in quantized weights during training often lead to severe loss fluctuations and result in a sharp loss landscape, making the gradients unstable and thus degrading the performance. Recently, Sharpness-Aware Minimization (SAM) has been proposed to smooth the loss landscape and improve the generalization performance of models. Nevertheless, directly applying SAM to quantized models can lead to perturbation mismatch or diminishment issues, resulting in suboptimal performance. In this paper, we propose a novel method, dubbed Sharpness-Aware Quantization (SAQ), to explore the effect of SAM in model compression, particularly quantization, for the first time. Specifically, we first provide a unified view of quantization and SAM by treating them as introducing quantization noise and adversarial perturbation to the model weights, respectively. According to whether the noise and perturbation terms depend on each other, SAQ can be formulated into three cases, which are analyzed and compared comprehensively. Furthermore, by introducing an efficient training strategy, SAQ only incurs a little additional training overhead compared with the default optimizer (e.g., SGD or AdamW). Extensive experiments on both convolutional neural networks and Transformers across various datasets (i.e., ImageNet, CIFAR-10/100, Oxford Flowers-102, Oxford-IIIT Pets) show that SAQ improves the generalization performance of the quantized models, yielding SOTA results in uniform quantization. For example, on ImageNet, SAQ outperforms AdamW by 1.2% on the Top-1 accuracy for the 4-bit ViT-B/16. Our 4-bit ResNet-50 surpasses the previous SOTA method by 0.9% on the Top-1 accuracy.
Modeling human intention inference in continuous 3D domains by inverse planning and body kinematics ; How can we build AI that understands human intentions and uses this knowledge to collaborate with people? We describe a computational framework for evaluating models of goal inference in the domain of 3D motor actions, which receives as input the 3D coordinates of an agent's body and of possible targets to produce a continuously updated inference of the intended target. We evaluate our framework in three behavioural experiments using a novel Target Reaching Task, in which human observers infer the intentions of actors reaching for targets among distractors. We describe the Generative Body Kinematics model, which predicts human intention inference in this domain using Bayesian inverse planning and inverse body kinematics. We compare our model to three heuristics, which formalize the principle of least effort using simple assumptions about the actor's constraints, without the use of inverse planning. Despite being more computationally costly, the Generative Body Kinematics model outperforms the heuristics in certain scenarios, such as environments with obstacles and the beginning of reaching actions, while the actor is relatively far from the intended target. The heuristics make increasingly accurate predictions during later stages of reaching actions, such as when the intended target is close and can be inferred by extrapolating the wrist trajectory. Our results identify contexts in which inverse body kinematics is useful for intention inference. We show that human observers indeed rely on inverse body kinematics in such scenarios, suggesting that modeling body kinematics can improve the performance of inference algorithms.
Stability results assuming tameness, monster model, and continuity of nonsplitting ; Assuming the existence of a monster model, tameness, and continuity of nonsplitting in an abstract elementary class (AEC), we extend known superstability results: let μ ≥ LS(K) be a regular stability cardinal and let χ be the local character of μ-nonsplitting. The following holds: (1) When μ-nonforking is restricted to (μ, ≥χ)-limit models ordered by universal extensions, it enjoys invariance, monotonicity, uniqueness, existence, extension and continuity. It also has local character χ. This generalizes Vasey's result, which assumed μ-superstability to obtain the same properties but with local character ℵ0. (2) There is λ ∈ [μ, h(μ)) such that if K is stable in every cardinal between μ and λ, then K has μ-symmetry, while μ-nonforking in (1) has symmetry. In this case: (a) K has the uniqueness of (μ, ≥χ)-limit models: if M1, M2 are both (μ, ≥χ)-limit over some M0 in K_μ, then M1 ≅_{M0} M2; (b) any increasing chain of μ-saturated models of length ≥χ has a μ-saturated union. These generalize VanDieren-Vasey's result and remove the symmetry assumption in Boney-VanDieren's and Vasey's results. Under μ-tameness, the conclusions of (1), (2)(a) and (2)(b) are equivalent to K having the χ-local character of μ-nonsplitting. Grossberg and Vasey gave eventual superstability criteria for tame AECs with a monster model. We remove the high cardinal threshold and reduce the cardinal jump between equivalent superstability criteria. We also add two new superstability criteria to the list: a weaker version of solvability and the boundedness of the U-rank.
Probabilistic Tracking with Deep Factors ; In many applications of computer vision it is important to accurately estimate the trajectory of an object over time by fusing data from a number of sources, of which 2D and 3D imagery is only one. In this paper, we show how to use a deep feature encoding in conjunction with generative densities over the features in a factor-graph-based probabilistic tracking framework. We present a likelihood model that combines a learned feature encoder with generative densities over the encoded features, both trained in a supervised manner. We also experiment with directly inferring probability through the use of image classification models that feed into the likelihood formulation. These models are used to implement deep factors that are added to the factor graph to complement other factors that represent domain-specific knowledge such as motion models and/or other prior information. Factors are then optimized together in a non-linear least-squares tracking framework that takes the form of an Extended Kalman Smoother with a Gaussian prior. A key feature of our likelihood model is that it leverages the Lie group properties of the tracked target's pose to apply the feature encoding on an image patch, extracted through a differentiable warp function inspired by spatial transformer networks. To illustrate the proposed approach, we evaluate it on a challenging social insect behavior dataset, and show that using deep features outperforms the earlier linear appearance models used in this setting.
MoFaNeRF: Morphable Facial Neural Radiance Field ; We propose a parametric model that maps free-view images into a vector space of coded facial shape, expression and appearance with a neural radiance field, namely Morphable Facial NeRF. Specifically, MoFaNeRF takes the coded facial shape, expression and appearance along with the space coordinate and view direction as input to an MLP, and outputs the radiance of the space point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMM), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for eyes, mouths, and beards. Also, continuous face morphing can be easily achieved by interpolating the input shape, expression and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. Our model performs well on multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models, and achieves competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used in fitting, generation and manipulation. The code and data are available at https://github.com/zhuhao-nju/mofanerf.
Bayesian Estimation Approach for Linear Regression Models with Linear Inequality Restrictions ; Univariate and multivariate general linear regression models, subject to linear inequality constraints, arise in many scientific applications. The linear inequality restrictions on model parameters are often available from phenomenological knowledge and are motivated by machine learning applications to high-consequence engineering systems (Agrell, 2019; Veiga and Marrel, 2012). Some studies on multiple linear models consider known linear combinations of the regression coefficient parameters restricted between upper and lower bounds. In the present paper, we consider both univariate and multivariate general linear models subject to this kind of linear restriction. So far, research on univariate cases based on Bayesian methods has all been under the condition that the coefficient matrix of the linear restrictions is a square matrix of full rank. This condition is not, however, always feasible. Another difficulty arises at the estimation step when implementing the Gibbs algorithm, which exhibits, in most cases, slow convergence. This paper presents a Bayesian method to estimate the regression parameters when the matrix of constraints providing the set of linear inequality restrictions is subject to no such condition. For the multivariate case, our Bayesian method estimates the regression parameters when the number of constraints is less than the number of regression coefficients in each of the multiple linear models. We examine the efficiency of our Bayesian method through simulation studies for both univariate and multivariate regressions. After that, we illustrate that the convergence of our algorithm is relatively faster than that of previous methods. Finally, we use our approach to analyze two real datasets.
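For intuition, the classical coordinate-wise Gibbs approach that such methods build on can be sketched as follows: under a Gaussian prior, each coefficient's full conditional is a normal truncated to the interval that keeps A @ beta <= b satisfied. This is a baseline sketch (with an assumed conjugate prior, fixed noise variance, and b >= 0 so that beta = 0 is a feasible start), not the paper's faster algorithm:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_constrained_blr(X, y, A, b, sigma2=1.0, tau2=100.0, n_iter=2000, rng=None):
    """Gibbs sampler for Bayesian linear regression with a N(0, tau2*I)
    prior, subject to A @ beta <= b. Each coordinate is drawn from its
    truncated-normal full conditional; A need not be square or full rank.
    Since the current state always satisfies the constraints, every
    coordinate interval is nonempty."""
    rng = rng or np.random.default_rng(0)
    p = X.shape[1]
    P = X.T @ X / sigma2 + np.eye(p) / tau2       # posterior precision
    m = np.linalg.solve(P, X.T @ y / sigma2)      # unconstrained posterior mean
    beta = np.zeros(p)                            # feasible start (needs b >= 0)
    draws = []
    for _ in range(n_iter):
        for j in range(p):
            others = np.delete(np.arange(p), j)
            cond_var = 1.0 / P[j, j]
            cond_mu = m[j] - cond_var * P[j, others] @ (beta[others] - m[others])
            # Interval for beta_j implied by each constraint row.
            lo, hi = -np.inf, np.inf
            slack = b - A[:, others] @ beta[others]
            for a_j, s in zip(A[:, j], slack):
                if a_j > 0:
                    hi = min(hi, s / a_j)
                elif a_j < 0:
                    lo = max(lo, s / a_j)
            sd = np.sqrt(cond_var)
            beta[j] = truncnorm.rvs((lo - cond_mu) / sd, (hi - cond_mu) / sd,
                                    loc=cond_mu, scale=sd, random_state=rng)
        draws.append(beta.copy())
    return np.array(draws)
```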
Attention-Based Model and Deep Reinforcement Learning for Distribution of Event Processing Tasks ; Event processing is the cornerstone of the dynamic and responsive Internet of Things (IoT). Recent approaches in this area are based on representational state transfer (REST) principles, which allow event processing tasks to be placed at any device that follows the same principles. However, the tasks should be properly distributed among edge devices to ensure fair resource utilization and guarantee seamless execution. This article investigates the use of deep learning to fairly distribute the tasks. An attention-based neural network model is proposed to generate efficient load balancing solutions under different scenarios. The proposed model is based on the Transformer and Pointer Network architectures and is trained by an advantage actor-critic reinforcement learning algorithm. The model is designed to scale to the number of event processing tasks and the number of edge devices, with no need for hyperparameter retuning or even retraining. Extensive experimental results show that the proposed model outperforms conventional heuristics in many key performance indicators. The generic design and the obtained results show that the proposed model can potentially be applied to several other load balancing problem variations, which makes the proposal an attractive option to be used in real-world scenarios due to its scalability and efficiency.
Interpretable Convolutional Neural Networks for Subject-Independent Motor Imagery Classification ; Deep learning frameworks have become increasingly popular in brain-computer interface (BCI) studies thanks to their outstanding performance. However, in terms of the classification model alone, they are treated as black boxes, as they do not provide any information on what led them to reach a particular decision. In other words, we cannot be certain whether the high performance arises from neurophysiological factors or simply from noise. Because of this disadvantage, it is difficult to ensure reliability commensurate with their high performance. In this study, we propose an explainable deep learning model for BCI. Specifically, we aim to classify EEG signals obtained from the motor-imagery (MI) task. In addition, we adopt layer-wise relevance propagation (LRP) in the model to interpret the reason why the model derives a certain classification output. We visualize the heatmap, which indicates the output of the LRP, in the form of a topography to certify the neurophysiological factors. Furthermore, we classify EEG in a subject-independent manner to learn robust and generalized EEG features by avoiding subject dependency. This methodology also provides the advantage of avoiding the expense of building training data for each subject. With our proposed model, we obtained generalized heatmap patterns for all subjects. As a result, we can conclude that our proposed model provides a neurophysiologically reliable interpretation.
Non-Gaussian Signatures of a Thermal Big Bang ; What if the Big Bang was hot from its very inception? This is possible in a bimetric theory where the source of fluctuations is thermal, requiring the model to live on a critical boundary in the space of parameters, and can be realized when an anti-DBI brane moves within an EAdS2 x E3 geometry. This setup renders the model unique, with sharp predictions for the scalar spectral index and its running. We investigate the non-Gaussian signatures of this thermal bimetric model, or 'bithermal' for short. We adapt the standard calculation of non-Gaussianities for P(X, φ) models to the thermal nature of the model, emphasising how the bithermal peculiarities affect the calculation and alter results. This leads to precise predictions for the shape and amplitude of the three-point function of the bithermal model at tree level: f_NL^local = -3/2 and f_NL^equil = -2 + 4√3π/9 ≈ 0.4. We also discover a new shape of flattened non-Gaussianity, proportional to (k1 + k2 - k3)^(-3) plus 2 permutations, which is expected due to the excited thermal initial conditions. These results, along with our earlier predictions for the scalar power spectrum, provide sharp targets for the future generation of cosmological surveys.
or2yw: Modeling and Visualizing OpenRefine Histories as YesWorkflow Diagrams ; OpenRefine is a popular open-source data cleaning tool. It allows users to export a previously executed data cleaning workflow in a JSON format for possible reuse on other datasets. We have developed or2yw, a novel tool that maps a JSON-formatted OpenRefine operation history to a YesWorkflow (YW) model, which can then be visualized and queried using the YW tool. The latter was originally developed to give researchers a simple way to annotate their program scripts in order to reveal the workflow steps and dataflow dependencies implicit in those scripts. With or2yw the user can automatically generate YW models from OpenRefine operation histories, thus providing a 'workflow view' on a previously executed sequence of data cleaning operations. The or2yw tool can generate different types of YesWorkflow models, e.g., a linear model, which mirrors the sequential execution order of operations in OpenRefine, and a parallel model, which reveals independent workflow branches based on a simple analysis of dependencies between steps: if two operations are independent of each other (e.g., when the columns they read and write do not overlap), then these can be viewed as parallel steps in the data cleaning workflow. The resulting YW models can be understood as a form of prospective provenance, i.e., knowledge artifacts that can be queried and visualized (i) to help authors document their own data cleaning workflows, thereby increasing transparency, and (ii) to help other users, who might want to reuse such workflows, to understand them better.
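To make the mapping idea concrete, a stripped-down linear translation might look like the sketch below. It assumes each entry of the exported history carries the standard 'op' and 'description' fields and emits simplified YesWorkflow-style @begin/@in/@out/@end annotations; it is not or2yw's actual implementation or output format:

```python
import json

def openrefine_history_to_yw(history_path):
    """Render an exported OpenRefine operation history as a *linear*
    YesWorkflow-style annotation block, chaining each step's output
    table into the next step's input."""
    with open(history_path) as f:
        ops = json.load(f)
    lines = ["# @begin data_cleaning_workflow"]
    prev = "input_table"
    for i, op in enumerate(ops):
        step = op.get("op", f"step_{i}").split("/")[-1].replace("-", "_")
        out = f"table_{i + 1}"
        lines += [f"#   @begin {step}_{i}  @desc {op.get('description', '')}",
                  f"#   @in {prev}",
                  f"#   @out {out}",
                  f"#   @end {step}_{i}"]
        prev = out                      # chain the steps sequentially
    lines.append("# @end data_cleaning_workflow")
    return "\n".join(lines)
```

A parallel variant would instead track, per operation, which columns it reads and writes, and connect two steps only when those column sets overlap.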
Visual Semantics Allow for Textual Reasoning Better in Scene Text Recognition ; Existing Scene Text Recognition (STR) methods typically use a language model to optimize the joint probability of the 1D character sequence predicted by a visual recognition (VR) model, which ignores the 2D spatial context of visual semantics within and between character instances, making them generalize poorly to arbitrarily shaped scene text. To address this issue, we make the first attempt to perform textual reasoning based on visual semantics in this paper. Technically, given the character segmentation maps predicted by a VR model, we construct a subgraph for each instance, where nodes represent the pixels in it and edges are added between nodes based on their spatial similarity. Then, these subgraphs are sequentially connected by their root nodes and merged into a complete graph. Based on this graph, we devise a graph convolutional network for textual reasoning (GTR) by supervising it with a cross-entropy loss. GTR can be easily plugged into representative STR models to improve their performance owing to better textual reasoning. Specifically, we construct our model, namely S-GTR, by paralleling GTR to the language model in a segmentation-based STR baseline, which can effectively exploit the visual-linguistic complementarity via mutual learning. S-GTR sets a new state-of-the-art on six challenging STR benchmarks and generalizes well to multilingual datasets. Code is available at https://github.com/adeline-cs/GTR.
Neighboring Backdoor Attacks on Graph Convolutional Network ; Backdoor attacks have been widely studied to hide misclassification rules in otherwise normal models, where the rules are only activated when the model encounters specific inputs (i.e., the trigger). However, despite their success in the conventional Euclidean space, there are few studies of backdoor attacks on graph-structured data. In this paper, we propose a new type of backdoor which is specific to graph data, called the neighboring backdoor. Considering the discreteness of graph data, how to effectively design the triggers while retaining the model accuracy on the original task is the major challenge. To address this challenge, we set the trigger as a single node, and the backdoor is activated when the trigger node is connected to the target node. To preserve the model accuracy, the model parameters are not allowed to be modified; thus, when the trigger node is not connected, the model performs normally. Under these settings, we focus in this work on generating the features of the trigger node. Two types of backdoors are proposed: (1) the Linear Graph Convolution Backdoor, which finds an approximate solution for the feature generation (which can be viewed as an integer programming problem) by looking at the linear part of GCNs; and (2) variants of existing graph attacks, where we extend current gradient-based attack methods to our backdoor attack scenario. Extensive experiments on two social network and two citation network datasets demonstrate that all proposed backdoors can achieve an almost 100% attack success rate while having no impact on predictive accuracy.
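The threat model is easy to state in code: the victim's GCN weights stay frozen, and the attacker merely appends one trigger node and a single edge to the target. The sketch below shows that wiring together with a standard two-layer GCN forward pass; generating the trigger features, which is the substance of the two proposed attacks, is not shown:

```python
import numpy as np

def gcn_logits(A, X, W1, W2):
    """Two-layer GCN forward pass with the usual symmetric normalization
    (self-loops added, then D^{-1/2} (A + I) D^{-1/2})."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    S = A_hat / np.sqrt(np.outer(d, d))
    return S @ np.maximum(S @ X @ W1, 0.0) @ W2   # ReLU hidden layer

def attach_trigger(A, X, target, x_trigger):
    """Append one trigger node and wire it to the target node only; the
    model weights are never touched, matching the stated threat model."""
    n = A.shape[0]
    A2 = np.zeros((n + 1, n + 1))
    A2[:n, :n] = A
    A2[n, target] = A2[target, n] = 1.0   # the single neighboring edge
    X2 = np.vstack([X, x_trigger])
    return A2, X2
```

Comparing `gcn_logits(A, X, W1, W2)[target]` before and after `attach_trigger` shows whether the chosen trigger features flip the target's prediction.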
Validation of object detection in UAV-based images using synthetic data ; Object detection is increasingly used onboard Unmanned Aerial Vehicles (UAVs) for various applications; however, the machine learning (ML) models for UAV-based detection are often validated using data curated for tasks unrelated to the UAV application. This is a concern because, although training neural networks on large-scale benchmarks has shown excellent capability in generic object detection tasks, conventional training approaches can lead to large inference errors for UAV-based images. Such errors arise due to differences in imaging conditions between images from UAVs and images in training. To overcome this problem, we characterize boundary conditions of ML models, beyond which the models exhibit rapid degradation in detection accuracy. Our work is focused on understanding the impact of different UAV-based imaging conditions on detection performance by using synthetic data generated using a game engine. Properties of the game engine are exploited to populate the synthetic datasets with realistic and annotated images. Specifically, it enables fine control of various parameters, such as camera position, view angle, illumination conditions, and object pose. Using the synthetic datasets, we analyze detection accuracy in different imaging conditions as a function of the above parameters. We use three well-known neural network models with different model complexities in our work. In our experiments, we observe and quantify the following: (1) how detection accuracy drops as the camera moves toward the nadir-view region; (2) how detection accuracy varies depending on different object poses; and (3) the degree to which the robustness of the models changes as illumination conditions vary.
Unveiling Project-Specific Bias in Neural Code Models ; Neural code models have introduced significant improvements over many software analysis tasks like type inference, vulnerability detection, etc. Despite the good performance of such models under the common intra-project independent and identically distributed (IID) training and validation setting, we observe that they usually fail to generalize to the real-world inter-project out-of-distribution (OOD) setting. In this work, we show that this phenomenon is caused by models relying heavily on project-specific, ungeneralizable tokens like self-defined variable and function names for downstream prediction, and we formulate it as the project-specific bias learning behavior. We propose a measurement to interpret such behavior, termed Cond-Idf, which combines co-occurrence probability and inverse document frequency to measure the level of relatedness of a token with its label and its project-specificness. The measurement indicates that, without proper regularization with prior knowledge, models tend to leverage spurious statistical cues for prediction. Equipped with these observations, we propose a bias mitigation mechanism, Batch Partition Regularization (BPR), that regularizes the model to infer based on proper behavior by leveraging latent logic relations among samples. Experimental results on two deep code benchmarks indicate that BPR can improve both inter-project OOD generalization and adversarial robustness while not sacrificing accuracy on IID data.
Marginal Effects for Non-Linear Prediction Functions ; Beta coefficients for linear regression models represent the ideal form of an interpretable feature effect. However, for non-linear models, and especially generalized linear models, the estimated coefficients cannot be interpreted as a direct feature effect on the predicted outcome. Hence, marginal effects are typically used as approximations for feature effects, either in the shape of derivatives of the prediction function or forward differences in prediction due to a change in a feature value. While marginal effects are commonly used in many scientific fields, they have not yet been adopted as a model-agnostic interpretation method for machine learning models. This may stem from their inflexibility as a univariate feature effect and their inability to deal with the non-linearities found in black-box models. We introduce a new class of marginal effects termed forward marginal effects. We argue to abandon derivatives in favor of more interpretable forward differences. Furthermore, we generalize marginal effects based on forward differences to multivariate changes in feature values. To account for the non-linearity of prediction functions, we introduce a non-linearity measure for marginal effects. We argue against summarizing feature effects of a non-linear prediction function in a single metric such as the average marginal effect. Instead, we propose to partition the feature space to compute conditional average marginal effects on feature subspaces, which serve as conditional feature effect estimates.
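In code, a forward marginal effect is just a forward difference of the prediction function, and it extends directly to multivariate steps. A crude secant-based non-linearity check in the same spirit (illustrative, not the paper's exact definition) is included below:

```python
import numpy as np

def forward_marginal_effect(predict, X, features, steps):
    """fME(x) = f(x + h) - f(x) for a (possibly multivariate) step h applied
    to the given feature columns; `predict` is any fitted model's predict."""
    X_shift = X.copy().astype(float)
    for f_idx, h in zip(features, steps):
        X_shift[:, f_idx] += h
    return predict(X_shift) - predict(X)

def secant_nonlinearity(predict, x, f_idx, h, n_grid=20):
    """Compare f along the straight path from x to x + h with the secant
    line implied by the forward difference; values near 1 mean the effect
    is locally linear at x."""
    ts = np.linspace(0.0, 1.0, n_grid)
    path = np.repeat(x[None, :].astype(float), n_grid, axis=0)
    path[:, f_idx] += ts * h
    f_path = predict(path)
    secant = f_path[0] + ts * (f_path[-1] - f_path[0])
    denom = np.var(f_path)
    return 1.0 - np.mean((f_path - secant) ** 2) / denom if denom > 0 else 1.0
```

Averaging `forward_marginal_effect` within a feature subspace (rather than globally) yields the conditional average marginal effects advocated above.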
A Large and Diverse Arabic Corpus for Language Modeling ; Language models (LMs) have introduced a major paradigm shift in Natural Language Processing (NLP), where large pretrained LMs have become integral to most NLP tasks. The LMs are intelligent enough to find useful and relevant representations of the language without any supervision. These models can then be fine-tuned for typical NLP tasks with significantly higher accuracy as compared to traditional approaches. Conversely, the training of these models requires a massively large corpus that is a good representation of the language. English LMs generally perform better than their other-language counterparts due to the availability of massive English corpora. This work elaborates on the design and development of a large Arabic corpus. It consists of over 500 GB of cleaned Arabic text, targeted at improving the cross-domain knowledge and downstream generalization capability of large-scale language models. Moreover, the corpus is utilized in the training of a large Arabic LM. In order to evaluate the effectiveness of the LM, a number of typical NLP tasks are fine-tuned. The tasks demonstrate a significant boost of 4.5% to 8.5% when compared to tasks fine-tuned on multilingual BERT (mBERT). To the best of my knowledge, this is currently the largest clean and diverse Arabic corpus ever collected.
A Stochastic Process Model for Time Warping Functions ; Time warping functions provide a mathematical representation for measuring phase variability in functional data. Recent studies have developed various approaches to estimate optimal warping between functions and provide non-Euclidean models. However, a principled, linear, generative model on time warping functions is still under-explored. This is a highly challenging problem because the space of warping functions is non-linear under the conventional Euclidean metric. To address this problem, we propose a stochastic process model for time warping functions, where the key is to define a linear, inner-product structure on the time warping space and then transform the warping functions into a subspace of the L2 Euclidean space. With certain constraints on the warping functions, this transformation is an isometric isomorphism. In the transformed space, we adopt the L2 basis in the Hilbert space for representation. This new framework can easily build generative models on time warping by using different types of stochastic processes. It can also be used to conduct statistical inferences such as functional PCA, functional ANOVA, and functional regression. Furthermore, we demonstrate the effectiveness of this new framework by using it as a new prior in Bayesian registration, and propose an efficient gradient method to address the important maximum a posteriori estimation. We illustrate the new Bayesian method using simulations which properly characterize non-uniform and correlated constraints in the time domain. Finally, we apply the new framework to the famous Berkeley growth data and obtain reasonable results on modeling, resampling, group comparison, and classification analysis.
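As a stand-in for the paper's isometric isomorphism, the square-root-slope transform is one standard way to map increasing, boundary-preserving warping functions into an L2-like space; the paper's transformation imposes its own constraints, so treat this purely as a sketch of the general mechanism:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def warp_to_l2(gamma, t):
    """Square-root-slope transform psi = sqrt(d gamma / dt) of a warping
    function sampled on a grid t in [0, 1]; gamma must be increasing with
    gamma(0) = 0 and gamma(1) = 1 so the slope is nonnegative."""
    return np.sqrt(np.maximum(np.gradient(gamma, t), 0.0))

def l2_to_warp(psi, t):
    """Inverse map: gamma is the cumulative integral of psi^2,
    renormalized to enforce gamma(1) = 1."""
    gamma = cumulative_trapezoid(psi ** 2, t, initial=0.0)
    return gamma / gamma[-1]

# Round trip on a toy warping function.
t = np.linspace(0.0, 1.0, 200)
gamma = t ** 2                    # a valid warping: increasing, 0 -> 1
psi = warp_to_l2(gamma, t)        # element of the transformed (linear) space
gamma_back = l2_to_warp(psi, t)   # recovers gamma up to discretization error
```

Once in the transformed space, ordinary linear machinery (basis expansions, Gaussian-process priors, functional PCA) applies, which is exactly the leverage the framework above seeks.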
Transfer Learning in Differential Privacy's Hybrid Model ; The hybrid model (Avent et al., 2017) in differential privacy is an augmentation of the local model where, in addition to N local agents, we are assisted by one special agent who is in fact a curator holding the sensitive details of n additional individuals. Here we study the problem of machine learning in the hybrid model where the n individuals in the curator's dataset are drawn from a different distribution than the one of the general population (the local agents). We give a general scheme, Subsample-Test-Reweigh, for this transfer learning problem, which reduces any curator-model DP-learner to a hybrid-model learner in this setting, using iterative subsampling and reweighing of the n examples held by the curator based on a smooth variation of the Multiplicative-Weights algorithm (introduced by Bun et al., 2020). Our scheme has a sample complexity which relies on the chi-squared divergence between the two distributions. We give worst-case analysis bounds on the sample complexity required for our private reduction. Aiming to reduce said sample complexity, we give two specific instances where the sample complexity can be drastically reduced (one instance is analyzed mathematically, the other empirically) and pose several directions for follow-up work.
A general model and toolkit for the ionization of three or more electrons in strongly driven atoms using an effective Coulomb potential for the interaction between bound electrons ; We formulate a three-dimensional semiclassical model to address triple and double ionization in three-electron atoms driven by intense infrared laser pulses. During time propagation, our model fully accounts for the Coulomb singularities, the magnetic field of the laser pulse, and the motion of the nucleus at the same time as the motion of the three electrons. The framework we develop is general and can account for multielectron ionization in strongly driven atoms with more than three electrons. To avoid unphysical autoionization arising in classical models of three or more electrons, we replace the Coulomb potential between pairs of bound electrons with effective Coulomb potentials. The Coulomb forces between electrons that are not both bound are fully accounted for. We develop a set of criteria to determine when electrons become bound during time propagation. We compare ionization spectra obtained with the model developed here and with the Heisenberg model that includes a potential term restricting an electron from closely approaching the core. Such spectra include the sum of the electron momenta along the direction of the laser field as well as the correlated electron momenta. We also compare these results with experimental ones.
A Nonlinear Hierarchical Model for Longitudinal Data on Manifolds ; Large longitudinal studies provide a wealth of valuable information, especially in medical applications. A problem which must be addressed in order to utilize their full potential is that of correlation between intra-subject measurements taken at different times. For data in Euclidean space, this can be done with hierarchical models, that is, models that consider intra-subject and between-subject variability in two different stages. Nevertheless, data from medical studies often take values in nonlinear manifolds. Here, as a first step, geodesic hierarchical models have been developed that generalize the linear ansatz by assuming that time-induced intra-subject variations occur along a generalized straight line in the manifold. However, this is often not the case (e.g., periodic motion or processes with saturation). We propose a hierarchical model for manifold-valued data that extends this to include trends along higher-order curves, namely Bézier splines in the manifold. To this end, we present a principled way of comparing shape trends in terms of a functional-based Riemannian metric. Remarkably, this metric allows efficient, yet simple computations by virtue of a variational time discretization requiring only the solution of regression problems. We validate our model on longitudinal data from the osteoarthritis initiative, including classification of disease progression.
On Neural Differential Equations ; The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
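A minimal neural ODE sketch in PyTorch using the `torchdiffeq` package (a standard reference implementation of the idea, not code from this thesis): the vector field is a small MLP and the ODE solve plays the role of a stack of residual layers.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class VectorField(nn.Module):
    """Parameterises dy/dt = f_theta(t, y) with a small MLP."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, t, y):
        return self.net(y)

f = VectorField()
y0 = torch.randn(16, 2)            # batch of initial states
t = torch.linspace(0.0, 1.0, 10)   # evaluation times
ys = odeint(f, y0, t)              # shape (10, 16, 2): the solution path
loss = ys[-1].pow(2).mean()        # gradients flow through the solver
loss.backward()
```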
Ethics, Rules of Engagement, and AI: Neural Narrative Mapping Using Large Transformer Language Models ; The problem of determining whether a military unit has correctly understood an order and is properly executing it is one that has bedeviled military planners throughout history. The advent of advanced language models such as OpenAI's GPT series offers new possibilities for addressing this problem. This paper presents a mechanism to harness the narrative output of large language models and produce diagrams or maps of the relationships that are latent in the weights of such models as GPT-3. The resulting Neural Narrative Maps (NNMs) are intended to provide insight into the organization of information, opinion, and belief in the model, which in turn provides a means to understand intent and response in the context of physical distance. This paper discusses the problem of mapping information spaces in general, and then presents a concrete implementation of this concept in the context of OpenAI's GPT-3 language model for determining if a subordinate is following a commander's intent in a high-risk situation. The subordinate's locations within the NNM allow a novel capability to evaluate the intent of the subordinate with respect to the commander. We show that it is possible not only to determine if they are nearby in narrative space, but also how they are oriented, and what trajectory they are on. Our results show that our method is able to produce high-quality maps, and demonstrate new ways of evaluating intent more generally.
Lossy Gradient Compression: How Much Accuracy Can One Bit Buy? ; In federated learning (FL), a global model is trained at a Parameter Server (PS) by aggregating model updates obtained from multiple remote learners. Generally, the communication between the remote users and the PS is rate-limited, while the transmission from the PS to the remote users is unconstrained. The FL setting gives rise to a distributed learning scenario in which the updates from the remote learners have to be compressed so as to meet communication-rate constraints in the uplink transmission toward the PS. For this problem, one wishes to compress the model updates so as to minimize the loss in accuracy resulting from the compression error. In this paper, we take a rate-distortion approach to address the compressor design problem for the distributed training of deep neural networks (DNNs). In particular, we define a measure of the compression performance under communication-rate constraints, the per-bit accuracy, which addresses the ultimate improvement in accuracy that a bit of communication brings to the centralized model. In order to maximize the per-bit accuracy, we model the DNN gradient updates at remote learners as a generalized normal distribution. Under this assumption on the DNN gradient distribution, we propose a class of distortion measures to aid the design of quantizers for the compression of the model updates. We argue that this family of distortion measures, which we refer to as the M-magnitude weighted L2 norm, captures the practitioner's intuition in the choice of gradient compressor. Numerical simulations are provided to validate the proposed approach on the CIFAR-10 dataset.
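To make the rate constraint concrete, below is the common one-bit compressor (sign plus mean-magnitude scale) often used as a baseline in this literature; it is illustrative only and is not the quantizer designed in the paper.

```python
import torch

def one_bit_compress(grad: torch.Tensor):
    """Send 1 bit per coordinate (the sign) plus a single float (the scale)."""
    scale = grad.abs().mean()
    return torch.sign(grad), scale

def decompress(signs: torch.Tensor, scale: torch.Tensor):
    return signs * scale

g = torch.randn(1000)
g_hat = decompress(*one_bit_compress(g))
# Plain L2 distortion; a magnitude-weighted variant would reweight coordinates.
print((g - g_hat).pow(2).mean())
```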
SODAR: Segmenting Objects by Dynamically Aggregating Neighboring Mask Representations ; The recent state-of-the-art one-stage instance segmentation model SOLO divides the input image into a grid and directly predicts per-grid-cell object masks with fully convolutional networks, yielding performance comparable to the traditional two-stage Mask R-CNN while enjoying a much simpler architecture and higher efficiency. We observe that SOLO generates similar masks for an object at nearby grid cells, and that these neighboring predictions can complement each other, as some may better segment certain object parts; most of them, however, are directly discarded by non-maximum suppression. Motivated by this observed gap, we develop a novel learning-based aggregation method that improves upon SOLO by leveraging the rich neighboring information while maintaining the architectural efficiency. The resulting model is named SODAR. Unlike the original per-grid-cell object masks, SODAR is implicitly supervised to learn mask representations that encode the geometric structure of nearby objects and complement adjacent representations with context. The aggregation method further includes two novel designs: (1) a mask interpolation mechanism that enables the model to generate much fewer mask representations by sharing neighboring representations among nearby grid cells, thus saving computation and memory; and (2) a deformable neighbour sampling mechanism that allows the model to adaptively adjust neighbor sampling locations, thus gathering mask representations with more relevant context and achieving higher performance. SODAR significantly improves instance segmentation performance, e.g., it outperforms a SOLO model with a ResNet-101 backbone by 2.2 AP on the COCO test set, with only about 3% additional computation. We further show a consistent performance gain with the SOLOv2 model.
Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods ; We consider Ising models on the hypercube with a general interaction matrix J, and give a polynomial time sampling algorithm when all but O(1) eigenvalues of J lie in an interval of length one, a situation which occurs in many models of interest. This was previously known for the Glauber dynamics when all eigenvalues fit in an interval of length one; however, a single outlier can force the Glauber dynamics to mix torpidly. Our general result implies the first polynomial time sampling algorithms for low-rank Ising models such as Hopfield networks with a fixed number of patterns and Bayesian clustering models with low-dimensional contexts, and greatly improves the polynomial time sampling regime for the antiferromagnetic/ferromagnetic Ising model with inconsistent field on expander graphs. It also improves on previous approximation algorithm results based on the naive mean-field approximation in variational methods and statistical physics. Our approach is based on a new fusion of ideas from the MCMC and variational inference worlds. As part of our algorithm, we define a new nonconvex variational problem which allows us to sample from an exponential reweighting of a distribution by a negative definite quadratic form, and show how to make this procedure provably efficient using stochastic gradient descent. On top of this, we construct a new simulated tempering chain on an extended state space arising from the Hubbard-Stratonovich transform which overcomes the obstacle posed by large positive eigenvalues, and combine it with the SGD-based sampler to solve the full problem.
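For reference, the Glauber dynamics baseline that the paper improves upon takes only a few lines of numpy; the sketch below performs single-site updates for an Ising model with interaction matrix J and external field h.

```python
import numpy as np

def glauber_step(x, J, h, rng):
    """One Glauber update for P(x) proportional to exp(x.Jx/2 + h.x), x in {-1,+1}^n."""
    i = rng.integers(len(x))
    field = J[i] @ x - J[i, i] * x[i] + h[i]      # local field at site i
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # conditional P(x_i = +1 | rest)
    x[i] = 1.0 if rng.random() < p_plus else -1.0
    return x

rng = np.random.default_rng(0)
n = 50
J = rng.normal(0, 1 / np.sqrt(n), (n, n))
J = (J + J.T) / 2                                  # symmetric interactions
x = rng.choice([-1.0, 1.0], n)
for _ in range(10_000):                            # mixes torpidly if J has a large outlier eigenvalue
    glauber_step(x, J, np.zeros(n), rng)
```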
Dark Energy Stars in Tolman-Kuchowicz spacetime in the context of Einstein Gravity ; Dark energy is the most abundant component of the present Universe, and it is responsible for the accelerating expansion of the Universe. As a result, dark energy is likely to interact with any compact astrophysical object (Muhammad F.A.R. Sakti and Anto Sulaksono, Phys. Rev. D 103, 084042, 2021). In the present paper, we propose a model for a dark energy star made up of dark and ordinary matter, in which the density of dark energy is proportional to the density of isotropic perfect fluid matter. In the context of general relativity, the model is derived in the curved Tolman-Kuchowicz spacetime geometry (Tolman, Phys. Rev. 55, 364, 1939; Kuchowicz, Acta Phys. Pol. 33, 541, 1968). Here, we look at how dark energy affects stellar mass, compactness, and equilibrium. The physical parameters of the model (e.g., pressure, density, mass function, and surface redshift) are investigated, and the stability of the stellar configuration is studied in detail. The model has interesting properties: it meets all energy criteria and is free from central singularities. The maximum allowable mass has been obtained from our model with the help of the M-R diagram. We analyse many physical properties of the model and check that it meets all regularity constraints, is stable, and is therefore physically realistic.
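For context, the Tolman-Kuchowicz ansatz referenced above is usually quoted with the metric potentials below; this is the standard form from the cited works, with constants a, b, B, C fixed by boundary matching, though conventions for the constants vary between papers.

```latex
ds^2 = e^{\nu(r)}\,dt^2 - e^{\lambda(r)}\,dr^2 - r^2\,\big(d\theta^2 + \sin^2\theta\, d\phi^2\big),
\qquad e^{\lambda(r)} = 1 + a r^2 + b r^4, \qquad \nu(r) = B r^2 + 2 \ln C .
```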
Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations ; End-to-end learning of dynamical systems with black-box models, such as neural ordinary differential equations (ODEs), provides a flexible framework for learning dynamics from data without prescribing a mathematical model for the dynamics. Unfortunately, this flexibility comes at the cost of understanding the dynamical system, the very purpose for which ODEs are used ubiquitously. Further, experimental data are collected under various conditions (inputs), such as treatments, or grouped in some way, such as by subpopulation. Understanding the effects of these system inputs on system outputs is crucial for any meaningful model of a dynamical system. To that end, we propose a structured latent ODE model that explicitly captures system input variations within its latent representation. Building on a static latent variable specification, our model learns independent stochastic factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space. This approach provides actionable modeling through the controlled generation of time-series data for novel input combinations or perturbations. Additionally, we propose a flexible approach for quantifying uncertainties, leveraging a quantile regression formulation. Results on challenging biological datasets show consistent improvements over competitive baselines in the controlled generation of observational data and the inference of biologically meaningful system inputs.
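The quantile-regression uncertainty component can be made concrete with the standard pinball loss; this is the textbook formulation, with the model's quantile head left as an assumption.

```python
import torch

def pinball_loss(pred, target, q: float):
    """Standard quantile (pinball) loss: minimized when `pred` equals the
    q-th conditional quantile of `target`."""
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

pred, target = torch.randn(100), torch.randn(100)
loss = pinball_loss(pred, target, q=0.9)  # a 90th-percentile prediction head
```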
Reviewing local and integrated energy system models: insights into flexibility and robustness challenges ; The electrification of heating, cooling, and transportation to reach decarbonization targets calls for a rapid expansion of renewable technologies. Due to their decentral and intermittent nature, these technologies require robust planning that considers non-technical constraints and flexibility options to be integrated effectively. Energy system models (ESMs) are frequently used to support decision-makers in this planning process. In this study, 116 case studies of local, integrated ESMs are systematically reviewed to identify best-practice approaches to model flexibility and to address non-technical constraints. Within the sample, storage systems and sector coupling are the most common types of flexibility. Sector coupling with the transportation sector, specifically with electric vehicles that could be used for smart charging or vehicle-to-grid operation, is rarely considered. Social aspects are generally either completely neglected or modeled exogenously. A lack of actor heterogeneity, which can lead to unstable results in optimization models, can be addressed through building-level information. A strong emphasis on cost is found, and while emissions are also frequently reported, additional metrics such as imports or the share of renewable generation are nearly absent. To guide future modeling, the paper concludes with a roadmap highlighting flexibility and robustness options that either represent low-hanging fruit or have a large impact on results.
GraphWorld: Fake Graphs Bring Real Insights for GNNs ; Despite advances in the field of Graph Neural Networks (GNNs), only a small number (5) of datasets are currently used to evaluate new models. This continued reliance on a handful of datasets provides minimal insight into the performance differences between models, and is especially challenging for industrial practitioners who are likely to have datasets which look very different from those used as academic benchmarks. In the course of our work on GNN infrastructure and open-source software at Google, we have sought to develop improved benchmarks that are robust, tunable, scalable, and generalizable. In this work we introduce GraphWorld, a novel methodology and system for benchmarking GNN models on an arbitrarily large population of synthetic graphs for any conceivable GNN task. GraphWorld allows a user to efficiently generate a world with millions of statistically diverse datasets. It is accessible, scalable, and easy to use. GraphWorld can be run on a single machine without specialized hardware, or it can be easily scaled up to run on arbitrary clusters or cloud frameworks. Using GraphWorld, a user has fine-grained control over graph generator parameters, and can benchmark arbitrary GNN models with built-in hyperparameter tuning. We present insights from GraphWorld experiments regarding the performance characteristics of tens of thousands of GNN models over millions of benchmark datasets. We further show that GraphWorld efficiently explores regions of benchmark dataset space not covered by standard benchmarks, revealing comparisons between models that have not been historically obtainable. Using GraphWorld, we are also able to study in detail the relationship between graph properties and task performance metrics, which is nearly impossible with the classic collection of real-world benchmarks.
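A tiny illustration of the synthetic-benchmark idea, using the stochastic block model from networkx: sweeping generator parameters yields a population of statistically diverse graphs. GraphWorld itself is a full system; this sketch only conveys the parameter-sweep principle.

```python
import networkx as nx

def make_sbm(n_per_block=50, p_in=0.3, p_out=0.01, blocks=3, seed=0):
    """Generate one synthetic community graph from a stochastic block model."""
    sizes = [n_per_block] * blocks
    probs = [[p_in if i == j else p_out for j in range(blocks)]
             for i in range(blocks)]
    return nx.stochastic_block_model(sizes, probs, seed=seed)

# Sweep generator parameters to populate a small "world" of benchmark graphs.
world = [make_sbm(p_in=p, seed=s) for p in (0.1, 0.2, 0.4) for s in range(5)]
print(len(world), world[0].number_of_nodes())
```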
On classification of strategic agents who can both game and improve ; In this work, we consider classification of agents who can both game and improve. For example, people wishing to get a loan may be able to take some actions that increase their perceived creditworthiness and others that also increase their true creditworthiness. A decision-maker would like to define a classification rule with few false positives (does not give out many bad loans) while yielding many true positives (gives out many good loans), which includes encouraging agents to improve to become true positives if possible. We consider two models for this problem, a general discrete model and a linear model, and prove algorithmic, learning, and hardness results for each. For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting. We also show hardness for the problem of maximizing the number of true positives subject to a nonzero bound on the number of false positives, and that this hardness holds even for a finite-point version of our linear model. We also show that maximizing the number of true positives subject to no false positives is NP-hard in our full linear model. We additionally provide an algorithm that determines whether there exists a linear classifier that classifies all agents accurately and causes all improvable agents to become qualified, and give additional results for low-dimensional data.
Three-Port Impedance Model and Validation of VSCs for Stability Analysis ; The modern power system is undergoing a paradigm shift from a synchronous-generator-based system to a power-electronics-converter-dominated system. With the high penetration of converters, serious stability problems are provoked, especially wideband oscillations. Various studies have been conducted in this respect, but most of them separate ac-side stability from dc-side stability. However, for the stability analysis of a hybrid AC/DC grid, it is necessary to consider the converter's ac side and dc side simultaneously. In this paper, a stability analysis of voltage source converters (VSCs) considering both ac and dc dynamics is carried out. First, the three-port AC/DC admittance model of VSCs is established, and a corresponding simulation-based measurement method is presented to validate its accuracy. Second, based on this three-port model, two stability analysis methods are presented: one is based on the system open-loop model, where stability can be judged via the Generalized Nyquist Criterion (GNC); the other is based on the system closed-loop model, whose stability can be predicted through pole-zero calculation. Finally, a test AC/DC system is built in MATLAB/Simulink, by which the effectiveness of the three-port-model-based stability analysis is validated.
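In small-signal terms, a three-port admittance model of this kind relates perturbations of the dq-frame ac voltages and the dc voltage to the corresponding currents through a 3x3 transfer matrix. The generic structure is shown below; the specific entries are what the paper derives and measures.

```latex
\begin{bmatrix} \Delta i_d \\ \Delta i_q \\ \Delta i_{dc} \end{bmatrix}
= \mathbf{Y}(s)
\begin{bmatrix} \Delta v_d \\ \Delta v_q \\ \Delta v_{dc} \end{bmatrix},
\qquad
\mathbf{Y}(s) =
\begin{bmatrix}
Y_{dd}(s) & Y_{dq}(s) & Y_{d,dc}(s) \\
Y_{qd}(s) & Y_{qq}(s) & Y_{q,dc}(s) \\
Y_{dc,d}(s) & Y_{dc,q}(s) & Y_{dc,dc}(s)
\end{bmatrix}.
```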
Adversarial samples for deep monocular 6D object pose estimation ; Estimating the 6D object pose from an RGB image is important for many real-world applications, such as autonomous driving and robotic grasping. Recent deep learning models have achieved significant progress on this task, but their robustness has received little research attention. In this work, for the first time, we study adversarial samples that can fool deep learning models with imperceptible perturbations to the input image. In particular, we propose a Unified 6D pose estimation Attack, namely U6DA, which can successfully attack several state-of-the-art (SOTA) deep learning models for 6D pose estimation. The key idea of our U6DA is to fool the models into predicting wrong results for object instance localization and shape, which are essential for correct 6D pose estimation. Specifically, we explore a transfer-based black-box attack on 6D pose estimation. We design the U6DA loss to guide the generation of adversarial examples; the loss aims to shift the segmentation attention map away from its original position. We show that the generated adversarial samples are not only effective against direct 6D pose estimation models, but are also able to attack two-stage models regardless of their robust RANSAC modules. Extensive experiments demonstrate the effectiveness, transferability, and anti-defense capability of our U6DA on large-scale public benchmarks. We also introduce a new U6DA-Linemod dataset for robustness study of the 6D pose estimation task. Our code and dataset are available at https://github.com/cuge1995/U6DA.
UniXcoder: Unified Cross-Modal Pre-training for Code Representation ; Pre-trained models for programming languages have recently demonstrated great success on code intelligence. To support both code-related understanding and generation tasks, recent works attempt to pre-train unified encoder-decoder models. However, such an encoder-decoder framework is sub-optimal for auto-regressive tasks, especially code completion, which requires a decoder-only manner for efficient inference. In this paper, we present UniXcoder, a unified cross-modal pre-trained model for programming language. The model utilizes mask attention matrices with prefix adapters to control the behavior of the model and leverages cross-modal contents like AST and code comments to enhance code representation. To encode the AST, which is represented as a tree, in parallel, we propose a one-to-one mapping method that transforms the AST into a sequence structure retaining all structural information from the tree. Furthermore, we propose to utilize multi-modal contents to learn representations of code fragments with contrastive learning, and then align representations among programming languages using a cross-modal generation task. We evaluate UniXcoder on five code-related tasks over nine datasets. To further evaluate the performance of code fragment representation, we also construct a dataset for a new task, called zero-shot code-to-code search. Results show that our model achieves state-of-the-art performance on most tasks, and analysis reveals that comments and ASTs can both enhance UniXcoder.
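The one-to-one AST-to-sequence mapping can be illustrated with a bracketed pre-order traversal, which is invertible because the brackets preserve the tree structure. This uses Python's `ast` module and illustrates the idea only; it is not UniXcoder's exact encoding.

```python
import ast

def flatten(node):
    """Pre-order traversal with explicit left/right brackets so the
    original tree can be reconstructed from the token sequence."""
    name = type(node).__name__
    children = list(ast.iter_child_nodes(node))
    if not children:
        return [name]
    out = [name, "<L>"]      # opening bracket for this subtree
    for child in children:
        out += flatten(child)
    out.append("<R>")        # closing bracket
    return out

tree = ast.parse("def add(a, b):\n    return a + b")
print(flatten(tree)[:12])
```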
The BEHOMO project: LTB N-body simulations ; Our Universe may feature large-scale inhomogeneities and anisotropies which cannot be explained by the standard model of cosmology; that is, the homogeneous and isotropic FLRW metric, on which the ΛCDM model is built, may not accurately describe observations. Currently, there is no satisfactory understanding of the evolution of the large-scale structure on an inhomogeneous background. We start the cosmology beyond homogeneity and isotropy (BEHOMO) project and study the inhomogeneous ΛLTB model with the methods of numerical cosmology. Understanding the evolution of the large-scale structure is a necessary step in constraining inhomogeneous models with present and future observables and placing the standard model on more solid ground. We perform Newtonian N-body simulations, whose accuracy in describing the background evolution is checked against the general relativistic solution. The large-scale structure of the corresponding ΛCDM simulation is also validated. We obtain the first set of simulations of the ΛLTB model ever produced. The data products consist of 11 snapshots between redshift 0 and 3.7 for each of the 68 simulations that have been performed, together with halo catalogs and lens planes relative to 21 snapshots, between redshift 0 and 4.2, for a total of approximately 180 TB of data. We plan to study the growth of perturbations at the linear and nonlinear level, gravitational lensing, and cluster abundances and properties. Data can be obtained upon request. Further information is available at valeriomarra.github.io/BEHOMO-project.
LEMON: LanguagE ModeL for Negative Sampling of Knowledge Graph Embeddings ; Knowledge graph embedding models have become an important area of machine learning. These models provide a latent representation of entities and relations in a knowledge graph which can then be used in downstream machine learning tasks such as link prediction. The learning process of such models can be performed by contrasting positive and negative triples. While all triples of a KG are considered positive, negative triples are usually not readily available. Therefore, the choice of the sampling method to obtain the negative triples plays a crucial role in the performance and effectiveness of knowledge graph embedding models. Most current methods fetch negative samples from a random distribution of entities in the underlying knowledge graph, which often also includes meaningless triples. Other known methods use adversarial techniques or generative neural networks, which consequently reduce the efficiency of the process. In this paper, we propose an approach for generating informative negative samples considering available complementary knowledge about entities. In particular, pre-trained language models are used to obtain representations of symbolic entities from their textual information, and neighborhood clusters are formed by utilizing the distances between entities. Our comprehensive evaluations demonstrate the effectiveness of the proposed approach on benchmark knowledge graphs with textual information for the link prediction task.
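A minimal sketch of the cluster-then-sample idea under stated assumptions: entity texts are embedded with an off-the-shelf sentence encoder (the model name below is an arbitrary public checkpoint, not the paper's), clustered, and negatives are drawn from an entity's own cluster so they are semantically close but still wrong.

```python
import numpy as np
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

entities = ["Berlin", "Paris", "Albert Einstein", "Marie Curie", "Danube", "Seine"]
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(entities)  # textual entity info
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)

def hard_negatives(i, k=2, rng=np.random.default_rng(0)):
    """Sample negatives from entity i's neighborhood cluster: informative
    (close in embedding space) yet incorrect replacements for a triple."""
    pool = [j for j in range(len(entities)) if labels[j] == labels[i] and j != i]
    picks = rng.choice(pool, size=min(k, len(pool)), replace=False)
    return [entities[j] for j in picks]

print(hard_negatives(0))
```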
Stochastic factors and string stability of traffic flow: Analytical investigation and numerical study based on car-following models ; The emergence dynamics of traffic instability has always attracted particular attention. For several decades, researchers have studied the stability of traffic flow using deterministic traffic models, with less emphasis on the presence of stochastic factors. However, recent empirical and theoretical findings have demonstrated that stochastic factors tend to destabilize traffic flow and stimulate the concave growth pattern of traffic oscillations. In this paper, we derive a string stability condition for a general stochastic continuous car-following model by means of the generalized Lyapunov equation. We find, indeed, that the presence of stochasticity destabilizes the traffic flow. The impact of stochasticity depends on both the sensitivity to the gap and the sensitivity to the velocity difference. Numerical simulations of three typical car-following models have been carried out to validate our theoretical analysis. Finally, we have calibrated and validated the stochastic car-following models against empirical data. It is found that the stochastic car-following models reproduce the observed traffic instability and capture the concave growth pattern of traffic oscillations. Our results further highlight, theoretically and numerically, that stochastic factors have a significant impact on traffic dynamics.
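For orientation, a general continuous car-following model with a noise term, and the classical deterministic string-stability criterion that the paper's condition generalizes, can be written as below. Here $s_n$ is the gap, $\Delta v_n$ the velocity difference, $v_n$ the speed, $\xi_n$ white noise, and $f_s, f_{\Delta v}, f_v$ the partial derivatives of $f$ evaluated at equilibrium; the stochastic correction to the criterion is what the paper derives.

```latex
\dot{v}_n(t) = f\!\big(s_n(t), \Delta v_n(t), v_n(t)\big) + \sigma\,\xi_n(t),
\qquad
\tfrac{1}{2} f_v^2 - f_{\Delta v}\, f_v - f_s \;\ge\; 0
\quad \text{(deterministic string stability)}.
```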
Learning Maximum Margin Channel Decoders ; The problem of learning a channel decoder is considered for two channel models. The first model is an additive noise channel whose noise distribution is unknown and nonparametric. The learner is provided with a fixed codebook and a dataset comprised of independent samples of the noise, and is required to select a precision matrix for a nearest neighbor decoder in terms of the Mahalanobis distance. The second model is a nonlinear channel with additive white Gaussian noise and an unknown channel transformation. The learner is provided with a fixed codebook and a dataset comprised of independent input-output samples of the channel, and is required to select a matrix for a nearest neighbor decoder with a linear kernel. For both models, the objective of maximizing the margin of the decoder is addressed. Accordingly, for each channel model, a regularized loss minimization problem with a codebook-related regularization term and hinge-like loss function is developed, inspired by the support vector machine paradigm for classification problems. Expected generalization error bounds for the error probability loss function are provided for both models, under an optimal choice of the regularization parameter. For the additive noise channel, theoretical guidance for choosing the training signal-to-noise ratio is proposed based on this bound. In addition, for the nonlinear channel, a high-probability uniform generalization error bound is provided for the hypothesis class. For each channel, a stochastic subgradient descent algorithm for solving the regularized loss minimization problem is proposed, and an optimization error bound is stated. The performance of the proposed algorithms is demonstrated through several examples.
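The decoding rule being learned has a compact form: given a codebook and a precision matrix P, decode to the codeword minimizing the Mahalanobis distance. A numpy sketch follows; learning P itself is the paper's contribution and is omitted here.

```python
import numpy as np

def decode(y, codebook, P):
    """Nearest-neighbor decoding under the Mahalanobis distance
    d(y, x) = (y - x)^T P (y - x), with P a (learned) precision matrix."""
    diffs = codebook - y                              # shape (M, d)
    dists = np.einsum("md,de,me->m", diffs, P, diffs) # quadratic form per codeword
    return int(np.argmin(dists))

d, M = 4, 8
rng = np.random.default_rng(0)
codebook = rng.normal(size=(M, d))
P = np.eye(d)                                  # identity = ordinary Euclidean decoding
y = codebook[3] + 0.1 * rng.normal(size=d)     # noisy observation of codeword 3
print(decode(y, codebook, P))                  # -> 3 (with high probability)
```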
Complex-Valued Time Series Based Solar Irradiance Forecast ; This paper describes a new way to predict real time series using complex-valued elements. An example is given for short-term probabilistic global solar irradiance forecasts, with the measurement as the real part and an estimate of the volatility as the imaginary part. A simple complex autoregressive model is tested with data collected on Corsica Island (France). Results show that, even though this approach is simple to set up and requires very little resource and data, both the deterministic and probabilistic forecasts generated by this model are in agreement with experimental data (root mean square error ranging from 0.196 to 0.325 across all studied horizons). In addition, it sometimes exhibits better accuracy than classical models like Gaussian processes or the bootstrap methodology, or even more sophisticated models like quantile regression. The number of models that can be built by generating complex-valued time series is substantial. Indeed, by using exogenous or ordinal variables and computed quantities coupled with complex or multi-complex numbers, many studies and many fields of physics could benefit from this methodology and from the many models that result from it.
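A minimal version of the idea: pack the measurement into the real part and a volatility estimate into the imaginary part, then fit an AR(1) model by complex least squares. The series and volatility proxy below are synthetic stand-ins, not the paper's data or exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
irr = np.abs(rng.normal(500, 100, 200))        # stand-in irradiance series (W/m^2)
vol = np.abs(np.diff(irr, prepend=irr[0]))     # crude volatility proxy
z = irr + 1j * vol                             # complex-valued series

# Fit z_t ~ a * z_{t-1} + b by complex least squares.
X = np.column_stack([z[:-1], np.ones(len(z) - 1)])
(a, b), *_ = np.linalg.lstsq(X, z[1:], rcond=None)

forecast = a * z[-1] + b
print(forecast.real, forecast.imag)            # point forecast and volatility part
```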
On the benefits of knowledge distillation for adversarial robustness ; Knowledge distillation is normally used to compress a big network, or teacher, onto a smaller one, the student, by training it to match the teacher's outputs. Recently, some works have shown that robustness against adversarial attacks can also be distilled effectively to achieve good rates of robustness on mobile-friendly models. In this work, however, we take a different point of view, and show that knowledge distillation can be used directly to boost the performance of state-of-the-art models in adversarial robustness. In this sense, we present a thorough analysis and provide general guidelines to distill knowledge from a robust teacher and boost the clean and adversarial performance of a student model even further. To that end, we present Adversarial Knowledge Distillation (AKD), a new framework to improve a model's robust performance, consisting of adversarially training a student on a mixture of the original labels and the teacher outputs. Through carefully controlled ablation studies, we show that using early stopping, model ensembles, and weak adversarial training are key techniques to maximize the performance of the student, and show that these insights generalize across different robust distillation techniques. Finally, we provide insights on the effect of robust knowledge distillation on the dynamics of the student network, and show that AKD mostly improves the calibration of the network and modifies its training dynamics on samples that the model finds difficult to learn, or even memorize.
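A sketch of an AKD-style objective as described above: cross-entropy on the true labels mixed with distillation from a robust teacher, both evaluated on adversarially perturbed inputs. The mixing weight and temperature are hypothetical knobs, and the adversarial-example generation (e.g., PGD) is assumed to happen elsewhere; this is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def akd_loss(student, teacher, x_adv, y, alpha=0.5, T=2.0):
    """Mixture of hard-label cross-entropy and soft teacher distillation,
    computed on adversarial inputs x_adv (sketch)."""
    s_logits = student(x_adv)
    with torch.no_grad():
        t_logits = teacher(x_adv)        # robust teacher is frozen
    ce = F.cross_entropy(s_logits, y)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return alpha * ce + (1 - alpha) * kd
```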
Deep Residual Error and Bag-of-Tricks Learning for Gravitational Wave Surrogate Modeling ; Deep learning methods have been employed in gravitational-wave astronomy to accelerate the construction of surrogate waveforms for the inspiral of spin-aligned black hole binaries, among other applications. We face the challenge of modeling the residual error of an artificial neural network that models the coefficients of the surrogate waveform expansion (especially those of the phase of the waveform), which we demonstrate has sufficient structure to be learnable by a second network. Adding this second network, we were able to reduce the maximum mismatch for waveforms in a validation set by 13.4 times. We also explored several other ideas for improving the accuracy of the surrogate model, such as the exploitation of similarities between waveforms, the augmentation of the training set, the dissection of the input space, using dedicated networks per output coefficient, and output augmentation. In several cases, small improvements can be observed, but the most significant improvement still comes from the addition of a second network that models the residual error. Since the residual error for more general surrogate waveform models (when, e.g., eccentricity is included) may also have a specific structure, one can expect our method to be applicable to cases where the gain in accuracy could lead to significant gains in computational time.
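The residual-error trick is architecture-agnostic: fit a first model, fit a second model to the first model's residuals, and add the two at inference. A generic sketch with scikit-learn regressors standing in for the paper's networks and a synthetic target standing in for the waveform coefficients:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (2000, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

net1 = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                    random_state=0).fit(X, y)
residual = y - net1.predict(X)            # structure left over from net1
net2 = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                    random_state=1).fit(X, residual)

y_hat = net1.predict(X) + net2.predict(X) # combined surrogate
print(np.abs(y - net1.predict(X)).max(), np.abs(y - y_hat).max())
```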
Bilaterally Slimmable Transformer for Elastic and Efficient Visual Question Answering ; Recent advances in Transformer architectures [1] have brought remarkable improvements to visual question answering (VQA). Nevertheless, Transformer-based VQA models are usually deep and wide to guarantee good performance, so they can only run on powerful GPU servers and cannot run on capacity-restricted platforms such as mobile phones. Therefore, it is desirable to learn an elastic VQA model that supports adaptive pruning at runtime to meet the efficiency constraints of different platforms. To this end, we present the bilaterally slimmable Transformer (BST), a general framework that can be seamlessly integrated into arbitrary Transformer-based VQA models to train a single model once and obtain various slimmed sub-models of different widths and depths. To verify the effectiveness and generality of this method, we integrate the proposed BST framework with three typical Transformer-based VQA approaches, namely MCAN [2], UNITER [3], and CLIP-ViL [4], and conduct extensive experiments on two commonly used benchmark datasets. In particular, one slimmed MCAN-BST sub-model achieves comparable accuracy on VQA-v2, while being 0.38x smaller in model size and having 0.27x fewer FLOPs than the reference MCAN model. The smallest MCAN-BST sub-model has only 9M parameters and 0.16G FLOPs during inference, making it possible to deploy it on a mobile device with less than 60 ms latency.
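The width dimension of bilateral slimming can be illustrated with a linear layer that exposes only a prefix of its units at runtime, a common trick in the slimmable-networks literature; BST's full mechanism also slims depth and is more involved than this sketch.

```python
import torch
import torch.nn as nn

class SlimmableLinear(nn.Linear):
    """A Linear layer that can run at a fraction of its full width by
    slicing the weight matrix, so one set of parameters serves many widths."""
    def forward(self, x, width_mult=1.0):
        out_f = max(1, int(self.out_features * width_mult))
        bias = self.bias[:out_f] if self.bias is not None else None
        return nn.functional.linear(x, self.weight[:out_f, :], bias)

layer = SlimmableLinear(128, 256)
x = torch.randn(4, 128)
print(layer(x, width_mult=1.0).shape)    # torch.Size([4, 256]): full width
print(layer(x, width_mult=0.25).shape)   # torch.Size([4, 64]): slimmed width
```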
CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis ; Program synthesis strives to generate a computer program as a solution to a given problem specification, expressed with input-output examples or natural language descriptions. The prevalence of large language models advances the state of the art in program synthesis, though limited training resources and data impede open access to such models. To democratize this, we train and release a family of large language models of up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open-source the training library JAXFORMER. We show the utility of the trained model by demonstrating that it is competitive with the previous state of the art on zero-shot Python code generation on HumanEval. We further investigate the multi-step paradigm for program synthesis, where a single program is factorized into multiple prompts specifying sub-problems. To this end, we construct an open benchmark, the Multi-Turn Programming Benchmark (MTPB), consisting of 115 diverse problem sets that are factorized into multi-turn prompts. Our analysis of MTPB shows that the same intent provided to CODEGEN in a multi-turn fashion significantly improves program synthesis over that provided as a single turn. We make the training library JAXFORMER and model checkpoints available as an open-source contribution: https://github.com/salesforce/CodeGen.
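The released checkpoints can be driven with the standard transformers API. The sketch below uses the small 350M mono checkpoint (a publicly released model id) and illustrates the multi-turn idea by appending a second prompt to the first turn's output; the prompts and generation settings are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

def turn(prompt: str) -> str:
    """One synthesis turn: generate a continuation of the full context so far."""
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=64, pad_token_id=tok.eos_token_id)
    return tok.decode(out[0], skip_special_tokens=True)

# Multi-turn: each sub-problem is a new prompt appended to the prior context.
ctx = turn("# Write a function that returns the n-th Fibonacci number\n")
ctx = turn(ctx + "\n# Now write a test for it\n")
print(ctx)
```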
Decoupled Multi-task Learning with Cyclical Self-Regulation for Face Parsing ; This paper probes the intrinsic factors behind typical failure cases (e.g., spatial inconsistency and boundary confusion) produced by the existing state-of-the-art methods in face parsing. To tackle these problems, we propose a novel Decoupled Multi-task Learning with Cyclical Self-Regulation (DML-CSR) for face parsing. Specifically, DML-CSR designs a multi-task model which comprises face parsing, binary edge detection, and category edge detection. These tasks only share low-level encoder weights without high-level interactions between each other, making it possible to decouple auxiliary modules from the whole network at the inference stage. To address spatial inconsistency, we develop a dynamic dual graph convolutional network to capture global contextual information without using any extra pooling operation. To handle boundary confusion in both single and multiple face scenarios, we exploit binary and category edge detection to jointly obtain the generic geometric structure and fine-grained semantic clues of human faces. Besides, to prevent noisy labels from degrading model generalization during training, cyclical self-regulation is proposed to self-ensemble several model instances to get a new model, and the resulting model is then used to self-distill subsequent models, through alternating iterations. Experiments show that our method achieves the new state-of-the-art performance on the Helen, CelebAMask-HQ, and LaPa datasets. The source code is available at https://github.com/deepinsight/insightface/tree/master/parsing/dml_csr.
Inverse Problems Are Solvable on Real Number Signal Processing Hardware ; Inverse problems are used to model numerous tasks in imaging sciences; in particular, they encompass any task that reconstructs data from measurements. Thus, the algorithmic solvability of inverse problems is of significant importance. The study of this question is inherently related to the underlying computing model and hardware, since the admissible operations of any implemented algorithm are defined by the computing model and the hardware. Turing machines provide the fundamental model of today's digital computers. However, it has been shown that Turing machines are incapable of solving finite-dimensional inverse problems for any given accuracy. This raises the question of how powerful the computing model must be to enable the general solution of finite-dimensional inverse problems. This paper investigates the general computation framework of Blum-Shub-Smale (BSS) machines, which allows the processing and storage of arbitrary real values. Although a corresponding real-world computing device does not exist at the moment, research and development towards real number computing hardware, usually referred to by the term neuromorphic computing, has increased in recent years. In this work, we show that real number computing in the framework of BSS machines does enable the algorithmic solvability of finite-dimensional inverse problems. Our results emphasize the influence of the considered computing model in questions of algorithmic solvability of inverse problems.
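Concretely, the finite-dimensional inverse problems in question take the following textbook form (given a forward operator and noisy measurements, recover the underlying signal); this formulation is generic and not specific to the paper:

```latex
y = A x + e, \qquad A \in \mathbb{R}^{m \times n}, \quad \|e\|_2 \le \varepsilon,
\qquad \text{task: compute } \hat{x}(y, A) \text{ with } \|\hat{x} - x\| \text{ small}.
```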
Towards Landau-Ginzburg models for cominuscule spaces via the exceptional cominuscule family ; We present projective Landau-Ginzburg models for the exceptional cominuscule homogeneous spaces $\mathbb{OP}^2 = E_6^{\mathrm{sc}}/P_6$ and $E_7^{\mathrm{sc}}/P_7$, known respectively as the Cayley plane and the Freudenthal variety. These models are defined on the complement $X^\vee_{\mathrm{can}}$ of an anti-canonical divisor of the Langlands dual homogeneous spaces $\mathbb{X}^\vee = P^\vee \backslash G^\vee$ in terms of generalized Plücker coordinates, analogous to the canonical models defined for Grassmannians, quadrics and Lagrangian Grassmannians in arXiv:1307.1085, arXiv:1404.4844, arXiv:1304.4958. We prove that these models for the exceptional family are isomorphic to the Lie-theoretic mirror models defined in arXiv:math/0511124 using a restriction to an algebraic torus, also known as the Lusztig torus, as proven in arXiv:1912.09122. We also give a cluster structure on $\mathbb{C}[\mathbb{X}^\vee]$, prove that the Plücker coordinates form a Khovanskii basis for a valuation defined using the Lusztig torus, and compute the Newton-Okounkov body associated to this valuation. Although we present our methods for the exceptional types, they generalize immediately to the members of other cominuscule families.