Componentwise Equivariant Estimation of Order-Restricted Location and Scale Parameters in Bivariate Models: A Unified Study ; The problem of estimating location/scale parameters $\theta_1$ and $\theta_2$ of two distributions, when the ordering between them is known a priori (say, $\theta_1 \leq \theta_2$), has been extensively studied in the literature. Many of these studies center on deriving estimators that dominate the best location/scale equivariant estimators for the unrestricted case by exploiting the prior information that $\theta_1 \leq \theta_2$. Several of these studies consider specific distributions such that the associated random variables are statistically independent. This paper considers a general bivariate model and a general loss function and unifies various results proved in the literature. We also consider applications of these results to a bivariate normal model and Cheriyan and Ramabhadran's bivariate gamma model. A simulation study is also carried out to compare the risk performances of various estimators under the bivariate normal and Cheriyan and Ramabhadran's bivariate gamma models.
On Jacobians of geometrically reduced curves and their Néron models ; We study the structure of Jacobians of geometrically reduced curves over arbitrary (i.e., not necessarily perfect) fields. We show that, while such a group scheme cannot in general be decomposed into an affine and an Abelian part as over perfect fields, several important structural results for these group schemes nevertheless have close analogues over non-perfect fields. We apply our results to prove two conjectures due to Bosch, Lütkebohmert, and Raynaud about the existence of Néron models and Néron lft-models over excellent Dedekind schemes in the special case of Jacobians of geometrically reduced curves. Finally, we prove some existence results for semi-factorial models and related objects for general geometrically integral curves in the local case.
Multimodal datasets: misogyny, pornography, and malignant stereotypes ; We have now entered the era of trillion-parameter machine learning models trained on billion-sized datasets scraped from the internet. The rise of these gargantuan datasets has given rise to formidable bodies of critical work calling for caution while generating them. These address concerns surrounding the dubious curation practices used to generate these datasets, the sordid quality of alt-text data available on the world wide web, the problematic content of the Common Crawl dataset often used as a source for training large language models, and the entrenched biases in large-scale visio-linguistic models such as OpenAI's CLIP model, trained on the opaque WebImageText dataset. Against the backdrop of these specific calls for caution, we examine the recently released LAION-400M dataset, a CLIP-filtered dataset of image-alt-text pairs parsed from the Common Crawl dataset. We found that the dataset contains troublesome and explicit image-text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content. We outline numerous implications, concerns, and downstream harms regarding the current state of large-scale datasets while raising open questions for various stakeholders, including the AI community, regulators, policy makers, and data subjects.
Exact solution of the quantum integrable model associated with the twisted $D^{(2)}_3$ algebra ; We generalize the nested off-diagonal Bethe ansatz method to study the quantum chain associated with the twisted $D^{(2)}_3$ algebra (the $D^{(2)}_3$ model) with either periodic or integrable open boundary conditions. We obtain the intrinsic operator product identities among the fused transfer matrices and find a way to close the recursive fusion relations, which makes it possible to determine eigenvalues of transfer matrices with an arbitrary anisotropy parameter $\eta$. Based on these identities, together with the asymptotic behaviors and the values at certain points, we construct eigenvalues of transfer matrices in terms of homogeneous $T$-$Q$ relations for the periodic case and inhomogeneous ones for the open case with some off-diagonal boundary reflections. The associated Bethe ansatz equations are also given. The method and results in this paper can be generalized to the $D^{(2)}_{n+1}$ model and other higher-rank integrable models.
A general model for wildfire propagation with wind and slope ; A geometric model for the computation of the fire front of a forest wildfire, which takes into account several effects (possibly time-dependent wind, anisotropies, and slope of the ground), is introduced. It relies on a general theoretical framework, which reduces the hyperbolic PDE system of any wave to an ODE in a Lorentz-Finsler framework. The wind induces a sort of double semi-elliptical fire growth, while the influence of the slope is modeled by means of a term which comes from the Matsumoto metric (i.e., the standard non-reversible Finsler metric that measures the time when going up and down a hill). These contributions make a significant difference from previous models because, now, the infinitesimal wavefronts are not restricted to be elliptical. Even though this is a technical complication, the wavefronts remain computable in real time. Some simulations of the evolution are shown, paying special attention to possible crossovers of the fire.
Fitting large mixture models using stochastic component selection ; Traditional methods for unsupervised learning of finite mixture models require evaluating the likelihood of all components of the mixture. This becomes computationally prohibitive when the number of components is large, as it is, for example, in sum-product (transform) networks. Therefore, we propose to apply a combination of expectation maximization and the Metropolis-Hastings algorithm to evaluate only a small number of stochastically sampled components, thus substantially reducing the computational cost. The Markov chain of component assignments is sequentially generated across the algorithm's iterations, having a non-stationary target distribution whose parameters vary via a gradient-descent scheme. We put emphasis on the generality of our method, equipping it with the ability to train both shallow and deep mixture models which involve complex, and possibly non-linear, transformations. The performance of our method is illustrated in a variety of synthetic and real-data contexts, considering deep models such as mixtures of normalizing flows and sum-product transform networks.
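The core idea of the abstract above, replacing a full E-step over all components with a Metropolis-Hastings move over component assignments, can be illustrated on a toy Gaussian mixture. This is a minimal sketch under simplifying assumptions (two components, unit variances, fixed mixing weights, no gradient-based target adaptation); all names and details are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated 1-D Gaussian clusters.
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
K = 2
mu = np.array([-1.0, 1.0])           # initial component means
pi = np.full(K, 1.0 / K)             # mixing weights (kept fixed here)
z = rng.integers(0, K, size=x.size)  # current component assignments

def unnorm_post(xi, k):
    # Unnormalized posterior p(z = k | x_i) ∝ pi_k * N(x_i | mu_k, 1)
    return pi[k] * np.exp(-0.5 * (xi - mu[k]) ** 2)

for it in range(50):
    # Stochastic E-step: one Metropolis-Hastings move per data point.
    # Only the current and the proposed component are evaluated,
    # instead of all K components as in standard EM.
    prop = rng.integers(0, K, size=x.size)  # symmetric uniform proposal
    accept_prob = np.minimum(
        1.0,
        np.array([unnorm_post(xi, p) / max(unnorm_post(xi, zi), 1e-300)
                  for xi, p, zi in zip(x, prop, z)]),
    )
    z = np.where(rng.random(x.size) < accept_prob, prop, z)
    # M-step: update each mean from its currently assigned points.
    for k in range(K):
        if np.any(z == k):
            mu[k] = x[z == k].mean()

print(sorted(mu.round(1)))  # means should approach the true cluster centers
```

With K = 2 the saving is of course negligible; the point of the method summarized above is that the per-point cost of this E-step stays constant as K grows.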
Provenance in Temporal Interaction Networks ; In temporal interaction networks, vertices correspond to entities, which exchange data quantities (e.g., money, bytes, messages) over time. Tracking the origin of data that have reached a given vertex at any time can help data analysts to understand the reasons behind the accumulated quantity at the vertex or behind the interactions between entities. In this paper, we study data provenance in a temporal interaction network. We investigate alternative propagation models that may apply to different application scenarios. For each such model, we propose annotation mechanisms that track the origin of propagated data in the network and the routes of data quantities. Besides analyzing the space and time complexity of these mechanisms, we propose techniques that reduce their cost in practice, by either (i) limiting provenance tracking to a subset of vertices or groups of vertices, or (ii) tracking provenance only for quantities that were generated in the near past or limiting the provenance data in each vertex by a budget constraint. Our experimental evaluation on five real datasets shows that quantity propagation models based on generation time or receipt order scale well on large graphs; on the other hand, a model that propagates quantities proportionally has high space and time requirements and can benefit from the aforementioned cost-reduction techniques.
Leveraging Transformers for StarCraft Macromanagement Prediction ; Inspired by the recent success of transformers in natural language processing and computer vision applications, we introduce a transformer-based neural architecture for two key StarCraft II (SC2) macromanagement tasks: global state and build order prediction. Unlike recurrent neural networks, which suffer from a recency bias, transformers are able to capture patterns across very long time horizons, making them well suited for full game analysis. Our model utilizes the MSC (Macromanagement in StarCraft II) dataset and improves on the top-performing gated recurrent unit (GRU) architecture in predicting global state and build order, as measured by mean accuracy over multiple time horizons. We present ablation studies on our proposed architecture that support our design decisions. One key advantage of transformers is their ability to generalize well, and we demonstrate that our model achieves an even better accuracy when used in a transfer learning setting in which models trained on games with one racial matchup (e.g., Terran vs. Protoss) are transferred to a different one. We believe that transformers' ability to model long games, potential for parallelization, and generalization performance make them an excellent choice for StarCraft agents.
Mixed-Mode Bursting Oscillations Induced by Birhythmicity and Noise ; Bursting oscillations are commonly seen as a mechanism for information coding in neuroscience and have also been observed in many physical, biochemical, and chemical systems. This study focuses on the computational investigation of mixed-mode bursting oscillations (MMBOs) generated by a simple two-dimensional integrate-and-fire-or-burst (IFB) model. We demonstrate a new paradigm for the generation of MMBOs, in which birhythmicity and noise are the key components. In the absence of noise, the proposed model exhibits birhythmicity between two independent bursting patterns, bursts of two spikes and bursts of three spikes, depending on the initial condition of the model. Noise induces random transitions between the two bursting states, which leads to MMBOs, and the transition rate increases with the noise intensity. Our results provide a systematic view of the roles of noise and initial condition: the bursting dynamics produced by the proposed model heavily rely on the initial conditions when noise is weak, while for intermediate and strong noise the burst dynamics are independent of the initial condition.
Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness ; The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness. However, progress is usually hampered by insufficient robustness evaluations. As the de facto standard for evaluating adversarial robustness, adversarial attacks typically solve an optimization problem of crafting adversarial examples with an iterative process. In this work, we propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically. Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network, which is trained over a class of data samples and defenses to produce effective update directions during adversarial example generation. Furthermore, we develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses. Our approach can be flexibly incorporated with various attacks and consistently improves performance with little extra computational cost. Extensive experiments demonstrate the effectiveness of the attacks learned by MAMA compared to state-of-the-art attacks on different defenses, leading to a more reliable evaluation of adversarial robustness.
Tree-based local explanations of machine learning model predictions: AraucanaXAI ; Increasingly complex learning methods such as boosting, bagging, and deep learning have made ML models more accurate, but harder to understand and interpret. A trade-off between performance and intelligibility often has to be faced, especially in high-stakes applications like medicine. In the present article we propose a novel methodological approach for generating explanations of the predictions of a generic ML model, given a specific instance for which the prediction has been made, that can tackle both classification and regression tasks. Advantages of the proposed XAI approach include improved fidelity to the original model, the ability to deal with non-linear decision boundaries, and native support for both classification and regression problems.
The Power of Prompt Tuning for Low-Resource Semantic Parsing ; Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing, the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt-tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation studies across different model scales and target representations, finding that, with increasing model scale, prompt-tuned T5 models improve at generating target representations that are far from the pre-training distribution.
Joint Gaussian Graphical Model Estimation: A Survey ; Graphs from complex systems often share a partial underlying structure across domains while retaining individual features. Thus, identifying common structures can shed light on the underlying signal, for instance, when applied to scientific discovery or clinical diagnosis. Furthermore, growing evidence shows that the shared structure across domains boosts the estimation power of graphs, particularly for high-dimensional data. However, building a joint estimator to extract the common structure may be more complicated than it seems, most often due to data heterogeneity across sources. This manuscript surveys recent work on statistical inference of joint Gaussian graphical models, identifying model structures that fit various data generation processes. Simulations under different data generation processes are implemented, with detailed discussions on the choice of models.
Unified Style Transfer ; Currently, it is hard to compare and evaluate different style transfer algorithms due to chaotic definitions of style and the absence of agreed objective validation methods in the study of style transfer. In this paper, a novel approach, the Unified Style Transfer (UST) model, is proposed. With the introduction of a generative model for internal style representation, UST can transfer images in two modes, i.e., domain-based and image-based, simultaneously. At the same time, a new philosophy for evaluating the transfer model, based on the human sense of art and style distributions, is presented and demonstrated, called Statistical Style Analysis. It provides a new path to validate the feasibility of style transfer models by checking the general consistency between internal style representations and art facts. Besides, the translation invariance of AdaIN features is also discussed.
Topic-Guided Abstractive Multi-Document Summarization ; A critical point of multi-document summarization (MDS) is to learn the relations among various documents. In this paper, we propose a novel abstractive MDS model, in which we represent multiple documents as a heterogeneous graph, taking semantic nodes of different granularities into account, and then apply a graph-to-sequence framework to generate summaries. Moreover, we employ a neural topic model to jointly discover latent topics that can act as cross-document semantic units to bridge different documents and provide global information to guide summary generation. Since topic extraction can be viewed as a special type of summarization that summarizes texts into a more abstract format, i.e., a topic distribution, we adopt a multi-task learning strategy to jointly train the topic and summarization modules, allowing them to promote each other. Experimental results on the Multi-News dataset demonstrate that our model outperforms previous state-of-the-art MDS models on both ROUGE metrics and human evaluation, while learning high-quality topics.
SYNERGY: Building Task Bots at Scale Using Symbolic Knowledge and Machine Teaching ; In this paper we explore the use of symbolic knowledge and machine teaching to reduce human data-labeling efforts in building neural task bots. We propose SYNERGY, a hybrid learning framework in which a task bot is developed in two steps. (i) Symbolic knowledge to neural networks: large amounts of simulated dialog sessions are generated based on task-specific symbolic knowledge, which is represented as a task schema consisting of dialog flows and task-oriented databases; then a pre-trained neural dialog model, SOLOIST, is fine-tuned on the simulated dialogs to build a bot for the task. (ii) Neural learning: the fine-tuned neural dialog model is continually refined with a handful of real task-specific dialogs via machine teaching, where training samples are generated by human teachers interacting with the task bot. We validate SYNERGY on four dialog tasks. Experimental results show that SYNERGY maps task-specific knowledge into neural dialog models, achieving greater diversity and coverage of dialog flows, and continually improves model performance with machine teaching, thus demonstrating strong synergistic effects of symbolic knowledge and machine teaching.
Robustness via Uncertainty-aware Cycle Consistency ; Unpaired image-to-image translation refers to learning inter-image-domain mappings without corresponding image pairs. Existing methods learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty, leading to performance degradation when encountering unseen perturbations at test time. To address this, we propose a novel probabilistic method based on Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC), which models the per-pixel residual by a generalized Gaussian distribution, capable of modelling heavy-tailed distributions. We compare our model with a wide variety of state-of-the-art methods on various challenging tasks, including unpaired translation of natural images, using standard datasets spanning autonomous driving, maps, and facades, and also in the medical imaging domain with MRI. Experimental results demonstrate that our method exhibits stronger robustness towards unseen perturbations in test data. Code is released at https://github.com/ExplainableML/UncertaintyAwareCycleConsistency.
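The generalized Gaussian residual model mentioned above has a simple closed-form negative log-likelihood, which makes the heavy-tailed robustness concrete: the shape parameter interpolates between Laplacian and Gaussian penalties. This sketch shows the standard generalized normal NLL for a single zero-mean residual; the function name and parameterization are illustrative, not taken from the UGAC code.

```python
import math

def gen_gaussian_nll(residual, alpha, beta):
    """Negative log-likelihood of a zero-mean generalized Gaussian.

    Density: p(r) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|r|/alpha)**beta).
    beta = 2 recovers the Gaussian and beta = 1 the Laplacian; beta < 2
    gives heavier tails, so large residuals (outliers) are penalized
    less harshly. alpha is a (per-pixel) scale parameter.
    """
    return ((abs(residual) / alpha) ** beta
            - math.log(beta) + math.log(2 * alpha) + math.lgamma(1.0 / beta))

# A large residual costs far less under the heavier-tailed beta = 1:
print(gen_gaussian_nll(5.0, 1.0, 2.0))  # Gaussian-like penalty, ~25.57
print(gen_gaussian_nll(5.0, 1.0, 1.0))  # Laplacian penalty, ~5.69
```

In a translation network one would predict alpha (and possibly beta) per pixel and minimize this NLL of the cycle-consistency residual instead of a plain L1/L2 loss.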
Critical velocity for vortex nucleation and roton emission in a generalized model for superfluids ; We study numerically the process of vortex nucleation in the wake of a moving object in superfluids using a generalized and non-local Gross-Pitaevskii model. The non-local potential is set to reproduce the roton minimum present in the excitation spectrum of superfluid helium. By applying a Newton-Raphson method numerically, we determine the bifurcation diagram for different types of non-linearities and object sizes, which allows us to determine the corresponding critical velocities. In the case of a non-local potential, we observe that for small object sizes the critical velocity is simply determined by the Landau criterion for superfluidity, whereas for large objects there is little difference between all the models studied. Finally, we study dynamically, in two and three dimensions, how rotons and vortices are excited in the non-local model of a superfluid.
New Tsallis Holographic Dark Energy ; Tsallis entropy is a generalization of the Boltzmann-Gibbs entropy in statistical theory which uses a parameter $\delta$ to quantitatively measure the deviation from the standard scenario. Using the concepts of Tsallis entropy and the future event horizon, we construct a new Tsallis holographic dark energy model. The parameters $c$ and $\delta$ are used to characterize various aspects of the model. Analytical expressions for various cosmological quantities, such as the differential equation describing the evolution of the effective dark energy density parameter, the equation of state parameter, and the deceleration parameter, are obtained. The equation of state parameter for the current model exhibits pure quintessence behaviour for $c > 1$ and quintom behaviour for $c < 1$, whereas the $\Lambda$CDM model is recovered for $c = 1$. To analyze the thermal history of the universe, we obtained the expression for the deceleration parameter and found that, for $z \approx 0.6$, the universe transitions from deceleration to acceleration.
Diversity and Generalization in Neural Network Ensembles ; Ensembles are widely used in machine learning and usually provide state-of-the-art performance in many prediction tasks. From the very beginning, the diversity of an ensemble has been identified as a key factor for the superior performance of these models. But the exact role that diversity plays in ensemble models is poorly understood, especially in the context of neural networks. In this work, we combine and expand previously published results in a theoretically sound framework that describes the relationship between diversity and ensemble performance for a wide range of ensemble methods. More precisely, we provide sound answers to the following questions: how to measure diversity, how diversity relates to the generalization error of an ensemble, and how diversity is promoted by neural network ensemble algorithms. This analysis covers three widely used loss functions, namely the squared loss, the cross-entropy loss, and the 0-1 loss, and two widely used model combination strategies, namely model averaging and weighted majority vote. We empirically validate this theoretical analysis with neural network ensembles.
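For the squared loss with model averaging, the simplest instance of the diversity-error relationship this abstract refers to is the classical Krogh-Vedelsby ambiguity decomposition: the ensemble's squared error equals the average member error minus a diversity (ambiguity) term. A minimal numerical check, with synthetic predictions (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Predictions of M ensemble members on N points, plus targets.
M, N = 5, 200
y = np.sin(np.linspace(0, 3, N))          # targets
preds = y + rng.normal(size=(M, N))       # noisy member predictions

f_bar = preds.mean(axis=0)                # model-averaged prediction
ens_err = ((f_bar - y) ** 2).mean()       # ensemble squared error
avg_err = ((preds - y) ** 2).mean()       # average member squared error
diversity = ((preds - f_bar) ** 2).mean() # ambiguity (diversity) term

# Ambiguity decomposition: ensemble error = average error - diversity.
print(np.isclose(ens_err, avg_err - diversity))  # True
```

The identity holds pointwise and exactly (it is the bias-variance decomposition around the ensemble mean), which is why increasing diversity without degrading individual members necessarily lowers the ensemble's squared error.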
Constraining non-minimally coupled exponential inflation with CMB data ; The $\beta$-exponential inflation is driven by a class of primordial potentials, derived in the framework of braneworld scenarios, that generalizes the well-known power-law inflation. In this paper we update previous constraints on the minimally coupled $\beta$-exponential model [1] and extend the results by also deriving the equations for the non-minimally coupled scenario. The predictions of both models are tested in light of the latest temperature and polarization maps of the Cosmic Microwave Background and clustering data. We also compare the predictions of these models with the standard $\Lambda$CDM cosmology using the Deviance Information Criterion (DIC), and find that the observational data show a moderate preference for the non-minimally coupled $\beta$-exponential inflationary model.
PnPOOD: Out-of-Distribution Detection for Text Classification via Plug-and-Play Data Augmentation ; While out-of-distribution (OOD) detection has been well explored in computer vision, there have been relatively few prior attempts at OOD detection for NLP classification. In this paper we argue that these prior attempts do not fully address the OOD problem and may suffer from data leakage and poor calibration of the resulting models. We present PnPOOD, a data augmentation technique that performs OOD detection via out-of-domain sample generation using the recently proposed Plug and Play Language Model (Dathathri et al., 2020). Our method generates high-quality discriminative samples close to the class boundaries, resulting in accurate OOD detection at test time. We demonstrate that our model outperforms prior models on OOD sample detection and exhibits lower calibration error on the 20 Newsgroups and Stanford Sentiment Treebank datasets (Lang, 1995; Socher et al., 2013). We further highlight an important data leakage issue with datasets used in prior attempts at OOD detection, and share results on a new dataset for OOD detection that does not suffer from the same problem.
Spectral dimension of simple random walk on a long-range percolation cluster ; Consider the long-range percolation model on the integer lattice $\mathbb{Z}^d$ in which all nearest-neighbour edges are present and, otherwise, $x$ and $y$ are connected with probability $q_{x,y} = 1 - \exp(-|x-y|^{-s})$, independently of the state of other edges. Throughout the regime where the model yields a locally finite graph, i.e. for $s > d$, we determine the spectral dimension of the associated simple random walk, apart from at the exceptional value $d = 1$, $s = 2$, where the spectral dimension is discontinuous. Towards this end, we present various on-diagonal heat kernel bounds, a number of which are new. In particular, the lower bounds are derived through the application of a general technique that utilises the translation invariance of the model. We highlight that, applying this general technique, we are able to partially extend our main result beyond the nearest-neighbour setting, and establish lower heat kernel bounds over the range of parameters $s \in (d, 2d)$. We further note that our approach is applicable to short-range models as well.
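The edge law in the abstract above is simple enough to simulate directly; this sketch samples one realization of the model on a finite path in $d = 1$ (function name and finite-volume truncation are illustrative; the actual model lives on all of $\mathbb{Z}^d$).

```python
import numpy as np

def sample_lrp_edges(n, s, rng):
    """Sample long-range percolation edges on the path {0, ..., n-1}.

    Nearest-neighbour edges are always present; an edge {x, y} with
    |x - y| >= 2 is open independently with probability
    q(x, y) = 1 - exp(-|x - y|**(-s)).
    """
    edges = [(x, x + 1) for x in range(n - 1)]  # deterministic NN edges
    for x in range(n):
        for y in range(x + 2, n):
            if rng.random() < 1 - np.exp(-float(y - x) ** (-s)):
                edges.append((x, y))
    return edges

rng = np.random.default_rng(0)
edges = sample_lrp_edges(200, s=1.5, rng=rng)
# For s > d = 1 vertex degrees have finite mean (locally finite graph),
# since q(x, y) ~ |x - y|**(-s) is summable; long edges are rare but present.
print(len(edges))
```

A simple random walk on such a sample is what the spectral dimension result concerns; for $s \in (d, 2d)$ the occasional long edges are exactly what changes the walk's diffusive behaviour relative to the nearest-neighbour lattice.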
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey ; Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field. We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches. We also present approaches that use pre-trained language models to generate data for training augmentation or other purposes. We conclude with discussions on limitations and suggested directions for future research.
A Semi-Supervised Approach for Automatic Crystal Structure Classification ; The structural solution problem can be a daunting and time-consuming task. Especially in the presence of impurity phases, current methods, such as indexing, become more unstable. In this work, we apply the novel approach of semi-supervised learning to the problem of identifying the Bravais lattice and the space group of inorganic crystals. Our semi-supervised generative deep learning model can train on both labeled data (diffraction patterns with the associated crystal structure) and unlabeled data (diffraction patterns that lack this information). This approach allows our models to take advantage of the troves of unlabeled data that current supervised learning approaches cannot, which should result in models that generalize more accurately to real data. In this work, we classify powder diffraction patterns into all 14 Bravais lattices and 144 space groups (we limit the number due to sparse coverage in crystal structure databases), which covers more crystal classes than other studies. Our models also drastically outperform current deep learning approaches for both space group and Bravais lattice classification while using less training data.
Floquet engineering of Kitaev quantum magnets ; In recent years, there has been an intense search for materials realizing the Kitaev quantum spin liquid model. A number of edge-shared compounds with strong spin-orbit coupling, such as RuCl$_3$ and iridates, have been proposed to realize this model. Nevertheless, an effective spin Hamiltonian derived from the microscopic model relevant to these compounds generally contains terms that are antagonistic toward the quantum spin liquid. This is consistent with the fact that the zero-magnetic-field ground state of these materials is generally magnetically ordered. It is a pressing issue to identify protocols to drive the system to the limit of the Kitaev quantum spin model. In this work, we propose Floquet engineering of these Kitaev quantum magnets by coupling the materials to a circularly polarized laser. We demonstrate that all the magnetic interactions can be tuned in situ by the amplitude and frequency of the laser, hence providing a route to stabilize the Kitaev quantum spin liquid phase.
Detection of Hate Speech using BERT and Hate Speech Word Embedding with Deep Model ; The enormous amount of data being generated on the web and social media has increased the demand for detecting online hate speech. Detecting hate speech will reduce its negative impact and influence on others. A lot of effort in the Natural Language Processing (NLP) domain has aimed to detect hate speech in general or to detect specific kinds of hate speech, such as that targeting religion, race, gender, or sexual orientation. Hate communities tend to use abbreviations, intentional spelling mistakes, and coded words in their communication to evade detection, adding more challenges to hate speech detection tasks. Thus, word representation will play an increasingly pivotal role in detecting hate speech. This paper investigates the feasibility of leveraging domain-specific word embeddings in a bidirectional LSTM-based deep model to automatically detect/classify hate speech. Furthermore, we investigate the use of the transfer-learning language model BERT on the hate speech problem as a binary classification task. The experiments showed that the domain-specific word embeddings with the bidirectional LSTM-based deep model achieved a 93% F1-score, while BERT achieved up to a 96% F1-score on a combined balanced dataset drawn from available hate speech datasets.
UQuAD1.0: Development of an Urdu Question Answering Training Dataset for Machine Reading Comprehension ; In recent years, low-resource Machine Reading Comprehension (MRC) has made significant progress, with models achieving remarkable performance on various language datasets. However, none of these models have been customized for the Urdu language. This work explores the semi-automated creation of the Urdu Question Answering Dataset (UQuAD1.0) by combining machine-translated SQuAD with human-generated samples derived from Wikipedia articles and Urdu RC worksheets from Cambridge O-level books. UQuAD1.0 is a large-scale Urdu dataset intended for extractive machine reading comprehension tasks, consisting of 49k question-answer pairs in (question, passage, answer) format. In UQuAD1.0, 45,000 QA pairs were generated by machine translation of the original SQuAD1.0 and approximately 4,000 pairs via crowdsourcing. In this study, we used two types of MRC models: a rule-based baseline and advanced Transformer-based models. However, we discovered that the latter outperforms the former; thus, we decided to concentrate solely on Transformer-based architectures. Using XLM-RoBERTa and multilingual BERT, we acquire F1 scores of 0.66 and 0.63, respectively.
Competitive Algorithms for Online Weighted Bipartite Matching and its Variants ; Online bipartite matching has been extensively studied. In the unweighted setting, Karp et al. gave an optimal $(1 - 1/e)$-competitive randomized algorithm. In the weighted setting, optimal algorithms have been achieved only under assumptions on the edge weights. For the general case, little was known beyond the trivial $1/2$-competitive greedy algorithm. Recently, Fahrbach et al. presented a 0.5086-competitive algorithm for the problem in a model, namely free disposal, overcoming the long-standing barrier of $1/2$. Besides, in designing competitive algorithms for the online matching problem and its variants, several techniques have been developed, in particular the primal-dual method. Specifically, Devanur et al. gave a primal-dual framework unifying previous approaches, and Devanur and Jain provided another scheme for a generalization of the online matching problem. In this paper, we present competitive algorithms for online weighted bipartite matching in different models; in particular, we achieve the optimal $(1 - 1/e)$ competitive ratio in the free-disposal model and in another model, namely stochastic reward. Our work also unifies the previous approaches by means of the primal-dual technique with configuration linear programs.
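The trivial $1/2$-competitive greedy baseline in the free-disposal model mentioned above is easy to make concrete: each offline vertex may discard its current match for a heavier edge, and greedy always takes the move with the largest marginal gain. This is a sketch of that baseline only (not the paper's improved algorithms); the function name and input format are illustrative.

```python
def greedy_online_matching(arrivals):
    """Greedy online weighted bipartite matching with free disposal.

    Each arriving online vertex brings a dict mapping offline vertices
    to edge weights. Under free disposal an offline vertex may drop its
    current match for a heavier edge, so greedy assigns each arrival to
    the offline vertex with the largest (non-negative) marginal gain.
    This is the classical 1/2-competitive baseline.
    """
    match = {}  # offline vertex -> weight of its current match
    for weights in arrivals:
        best = max(weights, key=lambda v: weights[v] - match.get(v, 0.0))
        if weights[best] > match.get(best, 0.0):
            match[best] = weights[best]  # free disposal: old edge discarded
    return sum(match.values())

# Three online arrivals competing for offline vertices 'a' and 'b'.
total = greedy_online_matching([
    {'a': 3.0, 'b': 1.0},
    {'a': 5.0, 'b': 2.0},   # displaces the weight-3 match on 'a'
    {'a': 1.0, 'b': 4.0},
])
print(total)  # 9.0
```

Beating $1/2$ (as Fahrbach et al. do, and as the optimal $(1-1/e)$ ratio requires) needs correlated randomization or primal-dual charging rather than this myopic rule.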
Generative Adversarial Network for Probabilistic Forecast of Random Dynamical System ; We present a deep learning model for data-driven simulations of random dynamical systems without a distributional assumption. The deep learning model consists of a recurrent neural network, which aims to learn the time-marching structure, and a generative adversarial network (GAN) to learn and sample from the probability distribution of the random dynamical system. Although GANs provide a powerful tool to model a complex probability distribution, their training often fails without a proper regularization. Here, we propose a regularization strategy for a GAN based on consistency conditions for the sequential inference problems. First, the maximum mean discrepancy (MMD) is used to enforce consistency between the conditional and marginal distributions of a stochastic process. Then, the marginal distributions of the multiple-step predictions are regularized by using MMD or by multiple discriminators. The behavior of the proposed model is studied using three stochastic processes with complex noise structures.
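The MMD penalty referred to above compares two sample sets without any distributional assumption. A minimal sketch of the standard biased RBF-kernel estimator (a generic estimator, not the paper's exact regularizer; the bandwidth choice is illustrative):

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy (MMD)
    between samples x and y, using an RBF kernel with bandwidth sigma."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd_rbf(rng.normal(size=(500, 1)), rng.normal(size=(500, 1)))
diff = mmd_rbf(rng.normal(size=(500, 1)), rng.normal(2.0, 1.0, size=(500, 1)))
print(same < diff)  # matching distributions give a smaller discrepancy
```

As a training regularizer one would compute such a term between, e.g., samples of a multiple-step prediction and samples of the corresponding marginal, and add it to the GAN loss.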
Data Selection for Efficient Model Update in Federated Learning ; The Federated Learning (FL) workflow of training a centralized model with distributed data is growing in popularity. However, until recently, this was the realm of contributing clients with similar computing capability. The fast-expanding IoT space, with data being generated and processed at the edge, is encouraging more effort into extending federated learning to heterogeneous systems. Previous approaches distribute lightweight models to clients and rely on knowledge transfer to distill the characteristics of local data into partitioned updates. However, the additional knowledge exchange transmitted through the network degrades the communication efficiency of FL. We propose to reduce the size of the knowledge exchanged in these FL setups by clustering and selecting only the most representative bits of information from the clients. The partitioned global update adopted in our work splits the global deep neural network into a lower part for generic feature extraction and an upper part that is more sensitive to this selected client knowledge. Our experiments show that only 1.6% of the initially exchanged data can effectively transfer the characteristics of the client data to the global model in our FL approach, using split networks. These preliminary results evolve our understanding of federated learning by demonstrating efficient training using strategically selected training samples.
Effective-one-body waveforms for precessing coalescing compact binaries with post-Newtonian twist ; Spin precession is a generic feature of compact binary coalescences, which leaves clear imprints in the gravitational waveforms. Building on previous work, we present an efficient time-domain inspiral-merger-ringdown effective-one-body (EOB) model for precessing binary black holes, which incorporates subdominant modes beyond ell = 2, and the first EOB frequency-domain approximant for precessing binary neutron stars. We validate our model against 99 "short" numerical relativity precessing waveforms, where we find median mismatches of 5 x 10^-3 and 7 x 10^-3 at inclinations of 0 and pi/3, and 21 "long" waveforms with median mismatches of 4 x 10^-3 and 5 x 10^-3 at the same inclinations. Further comparisons against the state-of-the-art NRSur7dq4 waveform model yield median mismatches of 4 x 10^-3 and 1.8 x 10^-2 at inclinations of 0 and pi/3 for 5000 precessing configurations with the precession parameter chi_p up to 0.8 and mass ratios up to 4. To demonstrate the computational efficiency of our model we apply it to parameter estimation and reanalyze the gravitational-wave events GW150914, GW190412, and GW170817.
Batch Reinforcement Learning from Crowds ; A shortcoming of batch reinforcement learning is its requirement for rewards in the data, making it inapplicable to tasks without reward functions. Existing settings that dispense with rewards, such as behavioral cloning, rely on optimal demonstrations collected from humans. Unfortunately, extensive expertise is required to ensure optimality, which hinders the acquisition of large-scale data for complex tasks. This paper addresses the lack of reward in a batch reinforcement learning setting by learning a reward function from preferences. Generating preferences only requires a basic understanding of a task. Being a mental process, generating preferences is faster than performing demonstrations, so preferences can be collected at scale from non-expert humans using crowdsourcing. This paper tackles a critical challenge that emerges when collecting data from non-expert humans: the noise in preferences. A novel probabilistic model is proposed for modelling the reliability of labels, which utilizes labels collaboratively. Moreover, the proposed model smooths the estimation with a learned reward function. Evaluation on Atari datasets demonstrates the effectiveness of the proposed model, followed by an ablation study to analyze the relative importance of the proposed ideas.
Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data ; Most machine learning models operate under the assumption that the training, testing and deployment data are independent and identically distributed (i.i.d.). This assumption does not generally hold true in a natural setting: usually, the deployment data are subject to various types of distributional shift, and the degradation in a model's performance is proportional to the magnitude of this shift. It therefore becomes necessary to evaluate a model's uncertainty and robustness to distributional shift to obtain a realistic estimate of its expected performance on real-world data. Present methods for evaluating uncertainty and model robustness are lacking and often fail to paint the full picture. Moreover, most analysis so far has primarily focused on classification tasks. In this paper, we propose more insightful metrics for general regression tasks using the Shifts Weather Prediction Dataset. We also present an evaluation of the baseline methods using these metrics.
FINO: Flow-based Joint Image and Noise Model ; One of the fundamental challenges in image restoration is denoising, where the objective is to estimate the clean image from its noisy measurements. To tackle such an ill-posed inverse problem, existing denoising approaches generally focus on exploiting effective natural image priors. The utilization and analysis of the noise model are often ignored, although the noise model can provide complementary information to denoising algorithms. In this paper, we propose a novel Flow-based joint Image and NOise model (FINO) that distinctly decouples the image and noise in the latent space and losslessly reconstructs them via a series of invertible transformations. We further present a variable-swapping strategy to align structural information in images, and a noise correlation matrix to constrain the noise based on spatially minimized correlation information. Experimental results demonstrate FINO's capacity to remove both synthetic additive white Gaussian noise (AWGN) and real noise. Furthermore, the generalization of FINO to the removal of spatially variant noise and noise with inaccurate estimation surpasses that of the popular and state-of-the-art methods by large margins.
Learning Neural Models for Continuous-Time Sequences ; The large volumes of data generated by human activities such as online purchases, health records, spatial mobility, etc. are stored as sequences of events over continuous time. Learning deep models over such sequences is a non-trivial task, as it involves modeling the ever-increasing event timestamps, inter-event time gaps, event types, and the influences between events within and across different sequences. This situation is further exacerbated by the constraints associated with data collection, e.g. limited data, incomplete sequences, and privacy restrictions. With the research direction described in this work, we aim to study the properties of continuous-time event sequences (CTES) and design robust yet scalable neural-network-based models to overcome the aforementioned problems. In this work, we model the underlying generative distribution of events using marked temporal point processes (MTPP) to address a wide range of real-world problems. Moreover, we highlight the efficacy of the proposed approaches over the state-of-the-art baselines and report on ongoing research problems.
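A classic building block behind the MTPP modeling mentioned above is the conditional intensity function. A minimal sketch for an exponential-kernel Hawkes process (the parameter values are illustrative assumptions; neural MTPPs replace this hand-specified form with a learned one):

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    """Conditional intensity of an exponential-kernel Hawkes process:
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    Each past event transiently raises the rate of future events."""
    past = np.asarray([ti for ti in history if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Intensity just after a burst of three events is well above the base rate mu.
lam = hawkes_intensity(2.0, [0.5, 1.0, 1.9])
```

The same quantity drives both likelihood evaluation and event simulation (e.g. by thinning), which is why intensity parameterization is the central design choice in neural MTPP models.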
Modeling and Analysis of the Landing Gear System with Generalized Contracts ; Nowadays, there are many complex systems in different sectors such as aviation and air traffic control. These systems do not have a precise perimeter; they are open and made of various specific components built with different languages and environments. The modeling, assembly and analysis of such open, complex, heterogeneous systems are challenges in software engineering. This paper describes how the Minarets method reduces the difficulty of modeling, composing and analyzing the well-known landing gear system case study. The method consists in equipping individual components with generalized contracts that integrate various facets related to different concerns, composing these components according to their facets, and verifying the resulting system with respect to the involved facets as well. The proposed method may be used or extended to cover more facets, and strengthened with proactive tool assistance for modeling, composing multi-facet contracts, and verifying the heterogeneous system.
Dynamic imaging using motion-compensated smoothness regularization on manifolds (MoCo-SToRM) ; We introduce an unsupervised deep manifold learning algorithm for motion-compensated dynamic MRI. We assume that the motion fields in a free-breathing lung MRI dataset live on a manifold. The motion field at each time instant is modeled as the output of a deep generative model, driven by low-dimensional time-varying latent vectors that capture the temporal variability. The image at each time instant is modeled as a deformed version of an image template using the above motion fields. The template, the parameters of the deep generator, and the latent vectors are learned from the k-t space data in an unsupervised fashion. The manifold motion model serves as a regularizer, making the joint estimation of the motion fields and images from few radial spokes per frame well-posed. The utility of the algorithm is demonstrated in the context of motion-compensated high-resolution lung MRI.
Composing Partial Differential Equations with Physics-Aware Neural Networks ; We introduce a compositional physics-aware FInite volume Neural Network (FINN) for learning spatiotemporal advection-diffusion processes. FINN implements a new way of combining the learning abilities of artificial neural networks with physical and structural knowledge from numerical simulation by modeling the constituents of partial differential equations (PDEs) in a compositional manner. Results on both one- and two-dimensional PDEs (Burgers', diffusion-sorption, diffusion-reaction, Allen-Cahn) demonstrate FINN's superior modeling accuracy and excellent out-of-distribution generalization ability beyond initial and boundary conditions. With only one tenth of the number of parameters on average, FINN outperforms pure machine learning and other state-of-the-art physics-aware models in all cases, often even by multiple orders of magnitude. Moreover, FINN outperforms a calibrated physical model when approximating sparse real-world data in a diffusion-sorption scenario, confirming its generalization abilities and showing explanatory potential by revealing the unknown retardation factor of the observed process.
The Macroeconomic Effects of Corporate Tax Reforms ; This paper extends a standard general equilibrium framework with a corporate tax code featuring two key elements: tax depreciation policy and the distinction between C-corporations and pass-through businesses. In the model, the stimulative effect of a tax rate cut on C-corporations is smaller when tax depreciation policy is accelerated, and is further diluted in the aggregate by the presence of pass-through entities. Because of a highly accelerated tax depreciation policy and a large share of pass-through activity in 2017, the model predicts small stimulus, large payouts to shareholders, and a dramatic loss of corporate tax revenues following the Tax Cuts and Jobs Act (TCJA-17). These predictions are consistent with novel micro- and macro-level evidence from professional forecasters and sectoral tax returns. At the same time, because of less-accelerated tax depreciation and a lower pass-through share in the early 1960s, the model predicts sizable stimulus in response to Kennedy's corporate tax cuts, also supported by the data. The model-implied corporate tax multipliers for Trump's TCJA-17 and Kennedy's tax cuts are 0.6 and 2.5, respectively.
Continuous-time Markov chain as a generic trait-based evolutionary model ; More than ever, today we are left with an abundance of molecular data that outpaces the advancement of phylogenomic methods. Especially when many genes over a set of species bear on the phylogeny question, methods more sophisticated than crude concatenation are needed. In this letter, by placing a continuous-time Markov chain (CTMC) on the species set, I present a novel model for inferring the phylogeny, obtaining the network graph, or drawing proximity conclusions. The rate of transition between states is calculated based on the binary character paths between each pair of species. This forms the basis for the pairwise distances between species. Beyond its generic use, the formulation of the model allows site-wise phylogenetic inference and a mathematically justified method of combining this information to scale up to whole-genome phylogenetic inference. Although based on characters or traits, this model is inherently a distance method, but its advantage over other methods of the same class is its ability to incorporate the information of all the other species when forming the pairwise distance between any two of them.
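The basic CTMC machinery referred to above is standard: a rate matrix Q with rows summing to zero, and transition probabilities P(t) = exp(Qt). A minimal sketch for a two-state trait (the state labels and rate values are illustrative assumptions, not taken from the letter):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-state trait CTMC (states: trait absent / trait present).
# Off-diagonal entries are transition rates; each row of Q sums to zero.
alpha, beta = 0.3, 0.1          # assumed gain and loss rates
Q = np.array([[-alpha, alpha],
              [beta, -beta]])

def transition_probs(t):
    """P(t) = exp(Q t): matrix of state-transition probabilities after time t."""
    return expm(Q * t)

P = transition_probs(5.0)
```

Each row of P(t) is a probability distribution, and as t grows the rows converge to the stationary distribution (beta, alpha) / (alpha + beta), which is one sanity check on any such rate matrix.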
Score Transformer: Generating Musical Scores from Note-level Representation ; In this paper, we explore the tokenized representation of musical scores using the Transformer model to automatically generate musical scores. Thus far, sequence models have yielded fruitful results with note-level (MIDI-equivalent) symbolic representations of music. Although note-level representations can comprise sufficient information to reproduce music aurally, they cannot contain adequate information to represent music visually in terms of notation. Musical scores contain various musical symbols (e.g., clef, key signature, and notes) and attributes (e.g., stem direction, beam, and tie) that enable us to visually comprehend musical content. However, automated estimation of these elements has yet to be comprehensively addressed. In this paper, we first design a score token representation corresponding to the various musical elements. We then train the Transformer model to transcribe note-level representation into appropriate music notation. Evaluations of popular piano scores show that the proposed method significantly outperforms existing methods on all 12 musical aspects that were investigated. We also explore an effective notation-level token representation to work with the model and determine that our proposed representation produces the steadiest results.
Improving Controllability of Educational Question Generation by Keyword Provision ; Question Generation (QG) receives increasing research attention in the NLP community. One motivation for QG is that it significantly facilitates the preparation of educational reading practice and assessments. While significant advances in QG techniques have been reported, current QG results are not ideal for educational reading practice and assessment in terms of controllability and question difficulty. This paper reports our results on these two issues. First, we report a state-of-the-art exam-like QG model, advancing the current best model from 11.96 to 20.19 in terms of BLEU-4 score. Second, we propose to investigate a variant of the QG setting that allows users to provide keywords for guiding the QG direction. We also present a simple but effective model for the QG controllability task. Experiments are performed, and the results demonstrate the feasibility and potential of improving QG diversity and controllability with the proposed keyword-provision QG model.
Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models ; Transferability of adversarial samples became a serious concern due to their impact on the reliability of machine learning system deployments, as they find their way into many critical applications. Knowing the factors that influence transferability of adversarial samples can assist experts in making informed decisions on how to build robust and reliable machine learning systems. The goal of this study is to provide insights into the mechanisms behind the transferability of adversarial samples through an attack-centric approach. This attack-centric perspective interprets how adversarial samples would transfer by assessing the impact of the machine learning attacks that generated them on a given input dataset. To achieve this goal, we generated adversarial samples using attacker models and transferred these samples to victim models. We analyzed the behavior of adversarial samples on victim models and outlined four factors that can influence the transferability of adversarial samples. Although these factors are not necessarily exhaustive, they provide useful insights to researchers and practitioners of machine learning systems.
Noise Distribution Adaptive Self-Supervised Image Denoising using Tweedie Distribution and Score Matching ; Tweedie distributions are a special case of exponential dispersion models, which are often used in classical statistics as distributions for generalized linear models. Here, we reveal that Tweedie distributions also play key roles in the modern deep learning era, leading to a distribution-independent self-supervised image denoising formula without clean reference images. Specifically, by combining the recent Noise2Score self-supervised image denoising approach with the saddle-point approximation of the Tweedie distribution, we can provide a general closed-form denoising formula that can be used for large classes of noise distributions without ever knowing the underlying noise distribution. Similar to the original Noise2Score, the new approach is composed of two successive steps: score matching using perturbed noisy images, followed by closed-form image denoising via the distribution-independent Tweedie's formula. This also suggests a systematic algorithm to estimate the noise model and noise parameters for a given noisy image data set. Through extensive experiments, we demonstrate that the proposed method can accurately estimate noise models and parameters, and provide state-of-the-art self-supervised image denoising performance on benchmark and real-world datasets.
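Tweedie's formula in its best-known (additive Gaussian) special case reads: posterior mean = y + sigma^2 * d/dy log p(y), where p is the marginal density of the noisy observation. A minimal numerical sketch under the assumption of a Gaussian prior, for which the marginal score is known in closed form (the values below are toy choices, not taken from the paper):

```python
import numpy as np

def tweedie_gaussian(y, sigma2, score):
    """Tweedie's formula for additive Gaussian noise with variance sigma2:
    the posterior mean of the clean signal is y + sigma2 * score(y),
    where score(y) is the gradient of the log marginal density at y."""
    return y + sigma2 * score(y)

# Toy check: with a Gaussian prior N(mu0, tau2), the noisy marginal is
# N(mu0, tau2 + sigma2), so its score is (mu0 - y) / (tau2 + sigma2),
# and Tweedie's formula recovers the exact posterior mean.
mu0, tau2, sigma2 = 0.0, 4.0, 1.0
score = lambda y: (mu0 - y) / (tau2 + sigma2)
y = np.array([2.5, -1.0])
denoised = tweedie_gaussian(y, sigma2, score)
```

In Noise2Score-style methods, the closed-form score above is replaced by a score network trained on noisy images alone; the saddle-point approximation in the paper is what extends this beyond the Gaussian case.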
A Multirate Discontinuous-Galerkin-in-Time Framework for Interface-Coupled Problems ; A framework is presented to design multirate time stepping algorithms for two dissipative models with coupling across a physical interface. The coupling takes the form of boundary conditions imposed on the interface, relating the solution variables of both models to each other. The multirate aspect arises when numerical time integration is performed with different time step sizes for the component models. In this paper, we seek to identify a unified approach to develop multirate algorithms for these coupled problems. This effort is pursued through the use of discontinuous-Galerkin time stepping methods, acting as a general unified framework, with different time step sizes. The subproblems are coupled across user-defined intervals of time, called coupling windows, using polynomials that are continuous on the window. The coupling method is shown to reproduce the correct interfacial energy dissipation, discrete conservation of fluxes, and asymptotic accuracy. In principle, methods of arbitrary order are possible. As a first step, herein we focus on the presentation and analysis of monolithic methods for advection-diffusion models coupled via generalized Robin-type conditions. The monolithic methods can be computed using a Schur-complement approach. We conclude with some discussion of future developments, such as different interface conditions and partitioned methods.
New Cosmological Solutions of a Nonlocal Gravity Model ; A nonlocal gravity model (2.1) was introduced and considered recently [49], and two exact cosmological solutions in flat space were presented. The first solution is related to some radiation effects generated by nonlocal dynamics on a dark energy background, while the second one is a nonsingular time-symmetric bounce. In the present paper we investigate other possible exact cosmological solutions and find some new ones in non-flat space. The nonlocal gravity dynamics used here can change the background topology. To solve the corresponding equations of motion, we first look for a solution of the eigenvalue problem Box(R - 4 Lambda) = q (R - 4 Lambda). We also discuss a possible extension of this model with a nonlocal operator symmetric under Box <-> Box^{-1} and its connection with another interesting nonlocal gravity model.
CGAN-EB: A Non-parametric Empirical Bayes Method for Crash Hotspot Identification Using Conditional Generative Adversarial Networks: A Real-world Crash Data Study ; The empirical Bayes (EB) method based on parametric statistical models such as the negative binomial (NB) has been widely used for ranking sites in the road network safety screening process. This paper is the continuation of the authors' previous research, where a novel non-parametric EB method for modelling crash frequency data based on Conditional Generative Adversarial Networks (CGAN) was proposed and evaluated over several simulated crash data sets. Unlike parametric approaches, there is no need for a pre-specified underlying relationship between dependent and independent variables in the proposed CGAN-EB, and it is able to model any type of distribution. The proposed methodology is now applied to a real-world data set collected for road segments from 2012 to 2017 in Washington State. The performance of CGAN-EB in terms of model fit, predictive performance and network screening outcomes is compared with the conventional approach (NB-EB) as a benchmark. The results indicate that the proposed CGAN-EB approach outperforms NB-EB in terms of prediction power and hotspot identification tests.
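The EB blending shared by NB-EB and CGAN-EB can be sketched in a few lines. The weight formula below is one common (Hauer-style) parameterization of the negative-binomial EB estimator; the function name and the dispersion symbol phi are illustrative assumptions, not taken from the paper:

```python
def eb_estimate(y, mu, phi):
    """Empirical Bayes crash-frequency estimate under an NB model:
    a weighted blend of the model prediction mu and the observed
    count y, with weight w = phi / (phi + mu), where phi is the
    NB (inverse) dispersion parameter. Large phi -> trust the model;
    small phi -> trust the observation."""
    w = phi / (phi + mu)
    return w * mu + (1.0 - w) * y
```

Ranking sites by this blended estimate (or by its excess over mu) is what "network screening" refers to above; CGAN-EB swaps the NB model for a CGAN while keeping the same blending logic.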
PixelStega: Generative Image Steganography Based on Autoregressive Models ; In this letter, we explore generative image steganography based on autoregressive models. We propose PixelStega, which implements pixel-level information hiding with autoregressive models and an arithmetic coding algorithm. Firstly, one of the autoregressive models, PixelCNN, is utilized to produce an explicit conditional probability distribution for each pixel. Secondly, secret messages are encoded into the selection of pixels through steganographic sampling (stegosampling) based on arithmetic coding. We carried out qualitative and quantitative assessments on grayscale and colour image datasets. Experimental results show that PixelStega is able to embed secret messages adaptively according to the entropy of the pixels, achieving both high embedding capacity (up to 4.3 bpp) and nearly perfect imperceptibility (about 50% detection accuracy).
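The core idea of stegosampling can be sketched in a few lines: interpret the secret bits as a binary fraction in [0, 1) and invert the model's per-pixel CDF at that point, so higher-entropy pixels absorb more bits. This is a simplified single-symbol sketch, not the paper's full arithmetic coder, and the example distribution is an assumed stand-in for a PixelCNN output:

```python
import numpy as np

def bits_to_fraction(bits):
    """Interpret a bit string as a binary fraction in [0, 1)."""
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))

def stego_sample(probs, bits):
    """Pick the symbol whose CDF interval contains the secret fraction.
    A receiver sharing the same model recovers the interval from the
    chosen symbol, and hence the leading bits of the message."""
    u = bits_to_fraction(bits)
    cdf = np.cumsum(probs)
    return int(np.searchsorted(cdf, u, side='right'))

probs = [0.1, 0.2, 0.4, 0.3]            # assumed model output for one pixel
symbol = stego_sample(probs, [1, 0, 1])  # u = 0.625 falls in the third interval
```

Because the symbol is drawn exactly according to the model's distribution (for uniformly random message bits), the stego image is statistically indistinguishable from an ordinary sample, which is what the ~50% detection accuracy reflects.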
Bayesian rotation inversion of KIC 11145123 ; A scheme of Bayesian rotation inversion, which allows us to compute the probability of a model of a stellar rotational profile, is developed. The validation of the scheme with simple rotational profiles and the corresponding sets of artificially generated rotational shifts has been successfully carried out: we can correctly distinguish the right rotational model, prepared beforehand to generate the artificial rotational shifts, from the wrong one. The Bayesian scheme is applied to a gamma Dor/delta Sct-type hybrid star, KIC 11145123, leading to the result that the convective core of the star might be rotating much faster (about 10 times faster) than the other regions of the star. The result is consistent with that previously suggested by Hatta et al. (2019) based on a 3-zone modeling, further strengthening their argument from a Bayesian point of view.
Are E2E ASR models ready for industrial usage? ; The Automated Speech Recognition (ASR) community is experiencing a major turning point with the rise of fully-neural End-to-End (E2E) approaches. At the same time, the conventional hybrid model remains the standard choice for practical ASR usage. According to previous studies, the adoption of E2E ASR in real-world applications has been hindered by two main limitations: the models' ability to generalize to unseen domains, and their high operational cost. In this paper, we investigate both above-mentioned drawbacks by performing a comprehensive multi-domain benchmark of several contemporary E2E models and a hybrid baseline. Our experiments demonstrate that E2E models are viable alternatives to the hybrid approach, and even outperform the baseline both in accuracy and in operational efficiency. As a result, our study shows that the generalization and complexity issues are no longer the major obstacle to industrial integration, and draws the community's attention to other potential limitations of the E2E approaches in some specific use-cases.
Quantum dynamics in 1D lattice models with synthetic horizons ; We investigate the wave packet dynamics and eigenstate localization in recently proposed generalized lattice models whose low-energy dynamics mimics a quantum field theory in 1+1D curved spacetime, with the aim of creating systems analogous to black holes. We identify a critical slowdown of zero-energy wave packets in a family of 1D tight-binding models with power-law variation of the hopping parameter, indicating the presence of a horizon. Remarkably, wave packets with nonzero energies bounce back and reverse direction before reaching the horizon. We additionally observe a power-law localization of all eigenstates, each bordering a region of exponential suppression. These forbidden regions dictate the closest possible approach to the horizon of states with any given energy. These numerical findings are supported by a semiclassical description of the wave packet trajectories, which are shown to coincide with the geodesics expected for the effective metric emerging from the considered lattice models in the continuum limit.
Changes in the distribution of observed annual maximum temperatures in Europe ; In this study we consider the problem of detecting and quantifying changes in the distribution of the annual maximum daily maximum temperature (TXx) in a large gridded data set of European daily temperature during the years 1950-2018. Several statistical models are considered, each of which models TXx using a generalized extreme value (GEV) distribution with the GEV parameters varying smoothly over space. In contrast to several previous studies, which fit independent GEV models at the grid-box level, our models pull information from neighbouring grid boxes for more efficient parameter estimation. The GEV location and scale parameters are allowed to vary in time, using the log of atmospheric CO2 as a covariate. Changes are detected most strongly in the GEV location parameter, with the TXx distributions generally shifting towards hotter temperatures. Averaged across our spatial domain, the 100-year return level of TXx based on the 2018 climate is approximately 2 degrees C hotter than that based on the 1950 climate. Moreover, also averaging across our spatial domain, the 100-year return level of TXx based on the 1950 climate corresponds approximately to a 6-year return level in the 2018 climate.
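As background, a stationary GEV fit and the corresponding 100-year return level (the quantile exceeded with probability 1/100 in any one year) for a single grid box can be sketched as follows; the synthetic data and parameter values are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Hypothetical annual-maximum temperatures (deg C) for one grid box, 1950-2018.
tx_max = genextreme.rvs(c=0.1, loc=30.0, scale=1.5, size=69, random_state=rng)

# Fit a stationary GEV by maximum likelihood, then read off the
# 100-year return level as the upper 1% quantile of the fitted distribution.
c, loc, scale = genextreme.fit(tx_max)
rl100 = genextreme.isf(1.0 / 100.0, c, loc, scale)
```

The models in the study go beyond this sketch in two ways: parameters vary smoothly across grid boxes rather than being fit independently, and the location and scale depend on log CO2, which is what makes the 1950-vs-2018 return-level comparison possible.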
Vertex-based Diagrammatic Treatment of Light-Matter-Coupled Systems ; We propose a diagrammatic Monte Carlo approach for general spin-boson models, which can be regarded as a generalization of the strong-coupling expansion for fermionic impurity models. The algorithm is based on a self-consistently computed three-point vertex and a stochastically sampled four-point vertex, and achieves convergence to the numerically exact result in a wide parameter regime. The performance of the algorithm is demonstrated with applications to a spin-boson model representing an emitter in a waveguide. As a function of the coupling strength, the spin exhibits a delocalization-localization crossover at low temperatures, signaling a qualitative change in the real-time relaxation. In certain parameter regimes, the response functions of the emitter coupled to the electromagnetic continuum can be described by an effective Rabi model with appropriately defined parameters. We also discuss the spatial distribution of the photon density around the emitter.
A General Statistical Mechanics Framework for the Collective Motion of Animals ; We propose a general statistical mechanics framework for the collective motion of animals. The framework considers the principle of maximum entropy; the interaction, boundary, and desire effects; as well as the time-delay effect. These factors provide the ability to describe and solve dynamic and non-equilibrium problems under this framework. We show that the Vicsek model, the social force model, and some of their variants can be considered special cases of this framework. Furthermore, this framework can be extended to the maximum caliber setting. We demonstrate the potential of this framework for model comparisons and parameter estimations by applying the model to observed data from a field study of the emergent behavior of termites. Finally, we demonstrate the flexibility of the framework by simulating some collective moving phenomena for birds and ants.
Transformer Embeddings of Irregularly Spaced Events and Their Participants ; The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events. To handle complex domains with many event types, Mei et al. (2020a) further consider a setting in which each event in the sequence updates a deductive database of facts via domain-specific pattern-matching rules; future events are then conditioned on the database contents. They show how to convert such a symbolic system into a neuro-symbolic continuous-time generative model, in which each database fact and possible event has a time-varying embedding that is derived from its symbolic provenance. In this paper, we modify both models, replacing their recurrent LSTM-based architectures with flatter attention-based architectures (Vaswani et al., 2017), which are simpler and more parallelizable. This does not appear to hurt our accuracy, which is comparable to or better than that of the original models as well as (where applicable) previous attention-based methods (Zuo et al., 2020; Zhang et al., 2020a).
Tackling critical slowing down using global correction steps with equivariant flows: the case of the Schwinger model ; We propose a new method for simulating lattice gauge theories in the presence of fermions. The method combines flow-based generative models for local gauge field updates with hierarchical updates of the factorized fermion determinant. The flow-based generative models are restricted to proposing updates to gauge fields within subdomains, thus keeping training times moderate while increasing the global volume. We apply our method to the two-dimensional (2D) Schwinger model with N_f = 2 Wilson Dirac fermions and show that no critical slowing down is observed in the sampling of topological sectors up to beta = 8.45. Furthermore, we show that fluctuations can be suppressed exponentially with the distance between active subdomains, allowing us to achieve acceptance rates of up to 99% for the outermost accept-reject step on lattice volumes of up to V = 128 x 128.
Interpolative fusions II: Preservation results ; We study interpolative fusion, a method of combining theories T1 and T2 in distinct languages in a generic way over a common reduct T_cap, to obtain a theory T_cup. When each Ti is model-complete, T_cup is the model companion of the union T1 cup T2. Our goal is to prove preservation results, i.e., to find sufficient conditions under which model-theoretic properties of T1 and T2 are inherited by T_cup. We first prove preservation results for quantifier elimination, model-completeness, and related properties. We then apply these tools to show that, under mild hypotheses, including stability of T_cap, the property NSOP1 is preserved. We also show that simplicity is preserved under stronger hypotheses on algebraic closure in T1 and T2. This generalizes many previous results; for example, simplicity of ACFA and of the random n-hypergraph are both non-obvious corollaries. We also address preservation of stability, NIP, and aleph_0-categoricity, and we describe examples which witness that these results are sharp.
Macroscopic loops in the Bose gas, Spin O(N) and related models ; We consider a general system of interacting random loops which includes several models of interest, such as the Spin O(N) model, random lattice permutations, a version of the interacting Bose gas in discrete space, and the loop O(N) model. We consider the system in Z^d, d >= 3, and prove the occurrence of macroscopic loops whose length is proportional to the volume of the system. More precisely, we approximate Z^d by finite boxes and, given any two vertices whose distance is proportional to the diameter of the box, we prove that the probability of observing a loop visiting both is uniformly positive. Our results hold under general assumptions on the interaction potential, which may have bounded or unbounded support or introduce hard-core constraints.
Robustness against Read Committed for Transaction Templates with Functional Constraints ; The popular isolation level Multiversion Read Committed RC trades some of the strong guarantees of serializability for increased transaction throughput. Sometimes, transaction workloads can be safely executed under RC, obtaining serializability at the lower cost of RC. Such workloads are said to be robust against RC. Previous work has yielded a tractable procedure for deciding robustness against RC for workloads generated by transaction programs modeled as transaction templates. An important insight of that work is that, by more accurately modeling transaction programs, we are able to recognize larger sets of workloads as robust. In this work, we increase the modeling power of transaction templates by extending them with functional constraints, which are useful for capturing data dependencies like foreign keys. We show that the incorporation of functional constraints allows more workloads to be identified as robust than would otherwise be possible. Even though we establish that the robustness problem becomes undecidable in its most general form, we show that various restrictions on functional constraints lead to decidable and even tractable fragments that can be used to model and test for robustness against RC in realistic scenarios.
Multiphonic modeling using Impulse Pattern Formulation IPF ; Multiphonics, the presence of multiple pitches within the sound, can be produced in several ways. In wind instruments, they can appear at low blowing pressure when complex fingerings are used. Such multiphonics can be modeled by the Impulse Pattern Formulation IPF. This top-down method regards musical instruments as systems working with impulses that originate from a generating entity, travel through the instrument, are reflected at various positions, and are exponentially damped. Eventually, impulses return to the generating entity and retrigger or interact with subsequent impulses. Due to this straightforward approach, the IPF can explain fundamental principles of complex dynamic systems. When modeling wind instruments played with blowing pressures at the threshold of tone onset, the IPF captures transitions between regular periodicity at nominal pitch, bifurcations, and noise. This corresponds to behavior found in wind instruments, where multiphonics appear at the transition between noise and regular musical note regimes. Using the IPF, complex fingerings correspond to multiple reflection points at open finger holes with different reflection strengths. Multiphonics can be modeled if reflection points farther away show higher reflection strength and thus disrupt periodic motion. The IPF can also synthesize multiphonic sounds by concatenating typical wind instrument waveforms at adjacent impulse time points.
Concise Logarithmic Loss Function for Robust Training of Anomaly Detection Model ; Recently, deep learning-based algorithms have been widely adopted because they can establish anomaly detection models with little or no domain knowledge of the task. In return, to train the artificial neural network more stably, it is important to define an appropriate neural network structure or loss function. For training anomaly detection models, the mean squared error MSE function is widely adopted. In this paper, a novel loss function, the logarithmic mean squared error LMSE, is proposed to train the neural network more stably. This study covers a variety of comparisons, from mathematical analysis, visualization in the differential domain for backpropagation, and loss convergence in the training process, to anomaly detection performance. Overall, LMSE is superior to the existing MSE function in terms of the robustness of loss convergence and anomaly detection performance. The LMSE function is expected to be applicable for training not only anomaly detection models but also general generative neural networks.
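The abstract does not state the exact form of LMSE; a common way to obtain a logarithmic squared-error loss is log(1 + MSE), which damps the gradient contribution of large residuals. The sketch below illustrates that damping effect under this assumed form:

```python
import math

def mse(y_true, y_pred):
    """Standard mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def lmse(y_true, y_pred):
    """Logarithmic MSE, here taken as log(1 + MSE).

    The +1 keeps the loss non-negative and finite at zero error;
    the exact form used in the paper may differ (assumption).
    """
    return math.log(1.0 + mse(y_true, y_pred))

# Gradient of the loss w.r.t. a single prediction p_i:
#   d(MSE)/dp_i  = -2 (t_i - p_i) / n
#   d(LMSE)/dp_i = d(MSE)/dp_i / (1 + MSE)    (chain rule)
# so large residuals are damped by the 1 / (1 + MSE) factor,
# which is why the loss converges more stably on outlier-heavy data.

y_true = [0.0, 0.0, 0.0]
y_outlier = [10.0, 0.0, 0.0]   # one large, anomaly-like error
print(mse(y_true, y_outlier))   # ~33.33
print(lmse(y_true, y_outlier))  # ~3.54
```

For small errors the two losses nearly coincide (log(1 + x) ~ x), so the logarithmic form mainly changes behavior in the large-error regime.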
Robust uncertainty estimates with out-of-distribution pseudo-inputs training ; Probabilistic models often use neural networks to control their predictive uncertainty. However, when making out-of-distribution OOD predictions, the often-uncontrollable extrapolation properties of neural networks yield poor uncertainty predictions. Such models then don't know what they don't know, which directly limits their robustness w.r.t. unexpected inputs. To counter this, we propose to explicitly train the uncertainty predictor where we are not given data, to make it reliable. As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space, and show how to leverage these in a practical Bayesian framework that casts a prior distribution over the model uncertainty. With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks such as regression and generative modelling.
COLD A Benchmark for Chinese Offensive Language Detection ; Offensive language detection is increasingly crucial for maintaining a civilized social media platform and deploying pretrained language models. However, this task in Chinese is still under exploration due to the scarcity of reliable datasets. To this end, we propose a benchmark, COLD, for Chinese offensive language analysis, including a Chinese Offensive Language Dataset COLDATASET and a baseline detector COLDETECTOR which is trained on the dataset. We show that the COLD benchmark contributes to Chinese offensive language detection, which is challenging for existing resources. We then deploy the COLDETECTOR and conduct detailed analyses on popular Chinese pretrained language models. We first analyze the offensiveness of existing generative models and show that these models inevitably expose varying degrees of offensive issues. Furthermore, we investigate the factors that influence offensive generations, and we find that anti-bias content and keywords referring to certain groups or revealing negative attitudes more easily trigger offensive outputs.
Reconstruction of Incomplete Wildfire Data using Deep Generative Models ; We present our submission to the Extreme Value Analysis 2021 Data Challenge in which teams were asked to accurately predict distributions of wildfire frequency and size within spatiotemporal regions of missing data. For the purpose of this competition we developed a variant of the powerful variational autoencoder models dubbed the Conditional Missing data ImportanceWeighted Autoencoder CMIWAE. Our deep latent variable generative model requires little to no feature engineering and does not necessarily rely on the specifics of scoring in the Data Challenge. It is fully trained on incomplete data, with the single objective to maximize loglikelihood of the observed wildfire information. We mitigate the effects of the relatively low number of training samples by stochastic sampling from a variational latent variable distribution, as well as by ensembling a set of CMIWAE models trained and validated on different splits of the provided data. The presented approach is not domainspecific and is amenable to application in other missing data recovery tasks with tabular or imagelike information conditioned on auxiliary information.
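The key training idea above, maximizing the log-likelihood of only the observed entries, can be sketched as follows. The Gaussian observation model and per-entry masking are illustrative assumptions; CMIWAE additionally importance-weights multiple samples of the latent code, which is omitted here.

```python
import math

def masked_gaussian_loglik(x, mu, sigma, mask):
    """Log-likelihood of the observed entries only.

    Entries with mask == 0 (missing data) contribute nothing, so a
    model can be trained directly on incomplete observations. This is
    a sketch of the objective, not the paper's full CMIWAE bound.
    """
    ll = 0.0
    for xi, mi, m in zip(x, mu, mask):
        if m:
            ll += -0.5 * math.log(2 * math.pi * sigma ** 2) \
                  - (xi - mi) ** 2 / (2 * sigma ** 2)
    return ll

# The second entry is missing (mask 0), so its wild value is ignored.
ll = masked_gaussian_loglik([1.0, 999.0], [1.0, 0.0], 1.0, [1, 0])
print(ll)  # only the first, observed entry contributes
```

Because missing entries never enter the objective, no imputation is needed before training; the model simply never pays a penalty on data it did not observe.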
Control of port-Hamiltonian differential-algebraic systems and applications ; The modeling framework of port-Hamiltonian descriptor systems and their use in numerical simulation and control are discussed. The structure is ideal for automated network-based modeling since it is invariant under power-conserving interconnection, congruence transformations, and Galerkin projection. Moreover, stability and passivity properties are easily shown. Condensed forms under orthogonal transformations present easy analysis tools for existence, uniqueness, regularity, and numerical methods to check these properties. After recalling the concepts for general linear and nonlinear descriptor systems, we demonstrate that many difficulties that arise in general descriptor systems can be easily overcome within the port-Hamiltonian framework. The properties of port-Hamiltonian descriptor systems are analyzed, and time-discretization and numerical linear algebra techniques are discussed. Structure-preserving regularization procedures for descriptor systems are presented to make them suitable for simulation and control. Model reduction techniques that preserve the structure, as well as stabilization and optimal control techniques, are discussed. The properties of port-Hamiltonian descriptor systems and their use in modeling, simulation, and control methods are illustrated with several examples from different physical domains. The survey concludes with open problems and research topics that deserve further attention.
Lost Horizon Modeling Black Holes in String Theory ; The modeling of black holes is an important desideratum for any quantum theory of gravity. Not only is a classical black hole metric sought, but also agreement with the laws of black hole thermodynamics. In this paper, we describe how these goals are obtained in string theory. We review black hole thermodynamics, and then explicate the general stringy derivation of classical spacetimes, the construction of a simple black hole solution, and the derivation of its entropy. With that in hand, we address some important philosophical and conceptual questions: the confirmatory value of the derivation, the bearing of the model on recent discussions of the so-called 'information paradox', and the implications of the model for the nature of space.
LOSTIN Logic Optimization via Spatio-Temporal Information with Hybrid Graph Models ; Despite the stride made by machine learning ML based performance modeling, two major concerns that may impede production-ready ML applications in EDA are stringent accuracy requirements and generalization capability. To this end, we propose hybrid graph neural network GNN based approaches towards highly accurate quality-of-result QoR estimations with great generalization capability, specifically targeting logic synthesis optimization. The key idea is to simultaneously leverage spatio-temporal information from hardware designs and logic synthesis flows to forecast the performance i.e., delay and area of various synthesis flows on different designs. The structural characteristics inside hardware designs are distilled and represented by GNNs; the temporal knowledge i.e., the relative ordering of logic transformations in synthesis flows can be imposed on hardware designs by combining a virtually added supernode or a sequence processing model with conventional GNN models. Evaluation on 3.3 million data points shows that the testing mean absolute percentage error MAPE on designs seen and unseen during training is no more than 1.2% and 3.1%, respectively, which is 7-15X lower than in existing studies.
Persistence and stability of a class of kinetic compartmental models ; In this paper we show that the dynamics of a class of kinetic compartmental models with bounded capacities, monotone reaction rates and a strongly connected interconnection structure is persistent. The result is based on the chemical reaction network CRN and the corresponding Petri net representation of the system. For the persistence analysis, it is shown that all siphons in the Petri net of the studied model class can be characterized efficiently. Additionally, the existence and stability of equilibria are also analyzed building on the persistence and the theory of general compartmental systems. The obtained results can be applied in the analysis of general kinetic models based on the simple exclusion principle.
Do You See What I See Capabilities and Limits of Automated Multimedia Content Analysis ; The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools. This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and computer prediction models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models, including computer vision and computer audition, are machine learning techniques that aim to identify characteristics of new or previously unknown content.
Almost-C1 splines Biquadratic splines on unstructured quadrilateral meshes and their application to fourth order problems ; Isogeometric Analysis generalizes classical finite element analysis and intends to integrate it with the field of Computer-Aided Design. A central problem in achieving this objective is the reconstruction of analysis-suitable models from Computer-Aided Design models, which is in general a non-trivial and time-consuming task. In this article, we present a novel spline construction that enables model reconstruction as well as simulation of high-order PDEs on the reconstructed models. The proposed almost-C1 splines are biquadratic splines on fully unstructured quadrilateral meshes, without restrictions on the placement or number of extraordinary vertices. They are C1 smooth almost everywhere, that is, at all vertices and across most edges, and in addition almost (i.e., approximately) C1 smooth across all other edges. Thus, the splines form H2-nonconforming analysis-suitable discretization spaces. This is the lowest-degree unstructured spline construction that can be used to solve fourth-order problems. The associated spline basis is non-singular and has several B-spline-like properties (e.g., partition of unity, non-negativity, local support). The almost-C1 splines are described in an explicit Bezier-extraction-based framework that can be easily implemented. Numerical tests suggest that the basis is well-conditioned and exhibits optimal approximation behavior.
A model of non-minimally coupled gravitation and electromagnetism in 1+2 dimensions ; Following earlier works of Dereli and collaborators, we study a three dimensional toy model where we extend the topologically massive gravity with electrodynamics by the most general RF^2-type non-minimal coupling terms. Here R denotes the possible curvature terms and F denotes the electromagnetic 2-form. We derive the variational field equations and look for exact solutions on constant negative curvature spacetimes with a constant, self-dual electromagnetic field. The notion of self-dual electromagnetic fields in three dimensions was introduced by Dereli and collaborators in the study of exact solutions of models with gravity-electromagnetism couplings. We note the conditions that the parameters of the model have to satisfy for these self-dual solutions to exist.
Denoising Diffusion Restoration Models ; Many interesting tasks in image restoration can be cast as linear inverse problems. A recent family of approaches for solving these problems uses stochastic algorithms that sample from the posterior distribution of natural images given the measurements. However, efficient solutions often require problemspecific supervised training to model the posterior, whereas unsupervised methods that are not problemspecific typically rely on inefficient iterative methods. This work addresses these issues by introducing Denoising Diffusion Restoration Models DDRM, an efficient, unsupervised posterior sampling method. Motivated by variational inference, DDRM takes advantage of a pretrained denoising diffusion generative model for solving any linear inverse problem. We demonstrate DDRM's versatility on several image datasets for superresolution, deblurring, inpainting, and colorization under various amounts of measurement noise. DDRM outperforms the current leading unsupervised methods on the diverse ImageNet dataset in reconstruction quality, perceptual quality, and runtime, being 5x faster than the nearest competitor. DDRM also generalizes well for natural images out of the distribution of the observed ImageNet training set.
On the Convergence of Heterogeneous Federated Learning with Arbitrary Adaptive Online Model Pruning ; One of the biggest challenges in Federated Learning FL is that client devices often have drastically different computation and communication resources for local updates. To this end, recent research efforts have focused on training heterogeneous local models obtained by pruning a shared global model. Despite empirical success, theoretical guarantees on convergence remain an open question. In this paper, we present a unifying framework for heterogeneous FL algorithms with arbitrary adaptive online model pruning and provide a general convergence analysis. In particular, we prove that under certain sufficient conditions, and on both IID and non-IID data, these algorithms converge to a stationary point of standard FL for general smooth cost functions, with a convergence rate of O(1/sqrt(Q)). Moreover, we illuminate two key factors impacting convergence: pruning-induced noise and a minimum coverage index, advocating a joint design of local pruning masks for efficient training.
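A minimal sketch of the setting: each client trains a pruned sub-model defined by a binary mask over the global parameter vector, and the server averages each parameter over the clients whose mask covers it. The aggregation rule and the coverage-index definition below are plausible readings of the abstract, not the paper's exact formulas.

```python
def aggregate(global_w, client_ws, masks):
    """Per-parameter averaging over the clients whose pruning mask
    covers that parameter (a sketch of heterogeneous FL aggregation)."""
    new_w = []
    for i in range(len(global_w)):
        covering = [w[i] for w, m in zip(client_ws, masks) if m[i] == 1]
        # Parameters covered by no client keep their global value.
        new_w.append(sum(covering) / len(covering) if covering else global_w[i])
    return new_w

def coverage_index(masks):
    """Minimum coverage index: the smallest number of clients
    covering any single parameter; low coverage slows convergence."""
    n = len(masks[0])
    return min(sum(m[i] for m in masks) for i in range(n))

global_w = [0.0, 0.0, 0.0]
masks = [[1, 1, 0], [1, 0, 1]]                 # two heterogeneous sub-models
client_ws = [[2.0, 4.0, 0.0], [4.0, 0.0, 6.0]]  # local updates (masked-out = 0)
print(aggregate(global_w, client_ws, masks))    # [3.0, 4.0, 6.0]
print(coverage_index(masks))                    # 1
```

Here parameter 0 is covered by both clients and gets a true average, while parameters 1 and 2 are each seen by a single client, so the coverage index is 1, the weakest-covered coordinate.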
Gertsenshtein-Zel'dovich effect explains the origin of Fast Radio Bursts ; We present a novel model that explains the origin of Fast Radio Bursts FRBs: short (~1 ms), bright (0.1-1000 Jy) bursts of GHz frequency radio waves. The model has three ingredients: a compact object, a progenitor with effective magnetic field strength around 10^10 Gauss, and GHz frequency gravitational waves GWs. The energy conversion from GWs to electromagnetic waves occurs when GWs pass through the magnetosphere of such compact objects, due to the Gertsenshtein-Zel'dovich effect. This conversion produces bursts of electromagnetic waves in the GHz range, leading to FRBs. Our model has three key features: i it can generate peak flux up to 1000 Jy, ii it can naturally explain the pulse width, and iii it predicts the random and repeating nature of FRBs with a wide flux range. We thus conclude that millisecond pulsars could be the progenitors of FRBs. Further, our model offers a novel perspective on the indirect detection of GWs at high frequencies beyond current detection capabilities. Thus, transient events like FRBs are a rich source for the current era of multimessenger astronomy.
Exploring the consequences of cyber attacks on Powertrain Cyber Physical Systems ; This paper proposes a novel approach for the study of cyber attacks against the powertrain of a generic vehicle. The proposed model is composed of a generic Internal Combustion engine and a speed controller, which communicate through a Controller Area Network CAN bus. We consider a threat model composed of three representative attack scenarios designed to modify the output of the model, thus affecting the rotational speed of the engine. Two attack scenarios target both vehicle sensor systems and CAN communication, while one attack scenario only requires the injection of CAN messages. To the best of our knowledge, this is the first attempt at modeling the consequences of realistic cyber attacks against a modern vehicle.
Ranking with Confidence for Large Scale Comparison Data ; In this work, we leverage a generative data model that accounts for comparison noise to develop a fast, precise, and informative ranking algorithm from pairwise comparisons that produces a measure of confidence for each comparison. The problem of ranking a large number of items from noisy and sparse pairwise comparison data arises in diverse applications, like ranking players in online games, document retrieval, or ranking human perceptions. Although different algorithms are available, we need fast, large-scale algorithms whose accuracy degrades gracefully when the number of comparisons is too small. Fitting our proposed model entails solving a non-convex optimization problem, which we tightly approximate by a sum of quasi-convex functions and a regularization term. Resorting to an iterative reweighted minimization and the Primal-Dual Hybrid Gradient method, we obtain PD-Rank, achieving a Kendall tau 0.1 higher than all competing methods, even with 10% wrong comparisons in simulated data matching our data model, and leading in accuracy if data is generated according to the Bradley-Terry model, in both cases one order of magnitude faster, running in seconds. In real data, PD-Rank requires less computational time to achieve the same Kendall tau than active learning methods.
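PD-Rank itself solves a non-convex problem by primal-dual iterations; as a point of reference, the Bradley-Terry model mentioned above can be fit with Hunter's classical MM iteration. A minimal sketch of that baseline (not the paper's algorithm):

```python
def bradley_terry(n_items, comparisons, iters=200):
    """Fit Bradley-Terry skill scores from (winner, loser) pairs using
    Hunter's MM updates; a higher score means a stronger item."""
    wins = [0] * n_items
    pair_counts = {}  # unordered pair -> number of comparisons
    for w, l in comparisons:
        wins[w] += 1
        key = (min(w, l), max(w, l))
        pair_counts[key] = pair_counts.get(key, 0) + 1
    g = [1.0] * n_items
    for _ in range(iters):
        new_g = []
        for i in range(n_items):
            # MM denominator: sum over pairs involving i of n_ij / (g_i + g_j)
            denom = sum(c / (g[a] + g[b])
                        for (a, b), c in pair_counts.items() if i in (a, b))
            new_g.append(wins[i] / denom if denom > 0 else g[i])
        total = sum(new_g)
        g = [x / total for x in new_g]  # normalize to remove scale ambiguity
    return g

# item 0 strongest, item 2 weakest (comparison graph strongly connected)
comps = [(0, 1)] * 4 + [(1, 2)] * 3 + [(2, 1)] + [(0, 2)] * 3 + [(2, 0)]
scores = bradley_terry(3, comps)
print(scores[0] > scores[1] > scores[2])  # True
```

The MM update is monotone in the likelihood, so with a strongly connected comparison graph the scores converge to the maximum-likelihood estimate; PD-Rank targets the harder regime where comparisons are sparse and noisy at much larger scale.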
Intent Contrastive Learning for Sequential Recommendation ; Users' interactions with items are driven by various intents e.g., preparing for holiday gifts, shopping for fishing equipment, etc.. However, users' underlying intents are often unobserved (latent), making it challenging to leverage such latent intents for Sequential Recommendation SR. To investigate the benefits of latent intents and leverage them effectively for recommendation, we propose Intent Contrastive Learning ICL, a general learning paradigm that incorporates a latent intent variable into SR. The core idea is to learn users' intent distribution functions from unlabeled user behavior sequences and optimize SR models with contrastive self-supervised learning SSL by considering the learned intents to improve recommendation. Specifically, we introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering. We propose to leverage the learned intents into SR models via contrastive SSL, which maximizes the agreement between a view of a sequence and its corresponding intent. The training alternates between intent representation learning and SR model optimization steps within the generalized expectation-maximization EM framework. Fusing user intent information into SR also improves model robustness. Experiments conducted on four real-world datasets demonstrate the superiority of the proposed learning paradigm, which improves performance and robustness against data sparsity and noisy interaction issues.
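The alternation can be illustrated with a toy example: an E-step that clusters sequence representations into intent prototypes, and a contrastive objective that pulls each representation toward its assigned prototype. The one-dimensional representations, k-means clustering, and negative-squared-distance similarity below are simplifying assumptions; the paper operates on learned sequence embeddings.

```python
import math
import random

def kmeans_1d(xs, k=2, iters=20, seed=0):
    """E-step stand-in: cluster scalar sequence representations into k
    intent prototypes; returns (prototypes, assignments)."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    assign = [0] * len(xs)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: (x - centers[c]) ** 2) for x in xs]
        for c in range(k):
            members = [x for x, a in zip(xs, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, assign

def intent_contrastive_loss(x, intent, centers, temp=1.0):
    """InfoNCE-style loss: similarity (negative squared distance) to the
    assigned intent prototype, contrasted against all prototypes."""
    sims = [-((x - c) ** 2) / temp for c in centers]
    log_z = math.log(sum(math.exp(s) for s in sims))
    return -(sims[intent] - log_z)

xs = [0.1, 0.2, 0.15, 2.0, 2.1, 1.9]   # sequences drawn from two latent intents
centers, assign = kmeans_1d(xs)
```

The loss for a sequence paired with its own intent prototype is lower than with the wrong prototype, which is exactly the agreement the contrastive M-step maximizes before the next round of clustering.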
A Graph Neural Network Framework for GridBased Simulation ; Reservoir simulations are computationally expensive in well control and well placement optimization. Generally, numerous simulation runs realizations are needed in order to achieve the optimal well locations. In this paper, we propose a graph neural network GNN framework to build a surrogate feed-forward model that replaces simulation runs to accelerate the optimization process. Our GNN framework includes an encoder, a processor, and a decoder, which takes input from the processed graph data designed and generated from the raw simulation data. We train the GNN model with 6000 samples equivalent to 40 well configurations, each containing the previous-step state variable and the next-step state variable. We test the GNN model with another 6000 samples; after model tuning, both one-step prediction and rollout prediction achieve a close match with the simulation results. Our GNN framework shows great potential in the application of well-related subsurface optimization, including oil and gas as well as carbon capture and sequestration CCS.
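The encode-process-decode pattern above can be sketched as a single message-passing step on the simulation grid graph; the scalar node states, sum aggregation, and fixed update weights below are illustrative stand-ins for the learned encoder, processor, and decoder components.

```python
def message_passing_step(states, edges):
    """One 'process' step: each node sums its neighbours' states and
    combines the aggregate with its own state. A minimal sketch of a
    GNN processor; learned weights and the encoder/decoder are omitted."""
    agg = [0.0] * len(states)
    for u, v in edges:        # undirected grid-graph edges
        agg[u] += states[v]
        agg[v] += states[u]
    # fixed-weight stand-in for the learned node-update function
    return [0.5 * s + 0.5 * a for s, a in zip(states, agg)]

# Three grid cells in a line: state could be, e.g., pressure per cell.
states = [1.0, 2.0, 3.0]
edges = [(0, 1), (1, 2)]
print(message_passing_step(states, edges))  # [1.5, 3.0, 2.5]
```

Rollout prediction then amounts to feeding each predicted state back in as the next step's input, which is why one-step accuracy alone does not guarantee stable long rollouts.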
Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks ; We give superpolynomial statistical query SQ lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard noise-free model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) or restricted models such as correlational SQ. Prior work hinted at the impossibility of our result: Vempala and Wilmes showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from label queries.
Thermodynamics of chromosome inversions and 100 million years of Lachancea evolution ; Gene sequences of a deme evolve over time as new chromosome inversions appear in a population via mutations, some of which will replace an existing sequence. The underlying biochemical processes that generate these and other mutations are governed by the laws of thermodynamics, although the connection between thermodynamics and the generation and propagation of mutations is often neglected. Here, chromosome inversions are modeled as a specific example of mutations in an evolving system. The thermodynamic concepts of chemical potential, energy, and temperature are linked to the input parameters, which include the inversion rate, recombination loss rate, and deme size. An energy barrier to the replacement of an existing gene sequence is a natural consequence of the model. Finally, the model calculations are compared to the observed chromosome inversion distribution of the Lachancea genus of yeast. The model introduced in this work should be applicable to other types of mutations in evolving systems.
Audio Visual SceneAware Dialog Generation with Transformerbased Video Representations ; There have been many attempts to build multimodal dialog systems that can respond to a question about given audiovisual information, and the representative task for such systems is the Audio Visual SceneAware Dialog AVSD. Most conventional AVSD models adopt the Convolutional Neural Network CNNbased video feature extractor to understand visual information. While a CNN tends to obtain both temporally and spatially local information, global information is also crucial for boosting video understanding because AVSD requires longterm temporal visual dependency and whole visual information. In this study, we apply the Transformerbased video feature that can capture both temporally and spatially global representations more efficiently than the CNNbased feature. Our AVSD model with its Transformerbased feature attains higher objective performance scores for answer generation. In addition, our model achieves a subjective score close to that of human answers in DSTC10. We observed that the Transformerbased visual feature is beneficial for the AVSD task because our model tends to correctly answer the questions that need a temporally and spatially broad range of visual information.
Diffusion Causal Models for Counterfactual Estimation ; We consider the task of counterfactual estimation from observational imaging data given a known causal structure. In particular, quantifying the causal effect of interventions for high-dimensional data with neural networks remains an open challenge. Herein we propose Diff-SCM, a deep structural causal model that builds on recent advances in generative energy-based models. In our setting, inference is performed by iteratively sampling gradients of the marginal and conditional distributions entailed by the causal model. Counterfactual estimation is achieved by first inferring latent variables with deterministic forward diffusion, then intervening on a reverse diffusion process using the gradients of an anti-causal predictor w.r.t. the input. Furthermore, we propose a metric for evaluating the generated counterfactuals. We find that Diff-SCM produces more realistic and minimal counterfactuals than baselines on MNIST data and can also be applied to ImageNet data. Code is available at https://github.com/vios-s/Diff-SCM.
A quintessence dynamical dark energy model from ratio gravity ; Based on the work on ratio gravity developed in 2018, which postulates that deformations of the cross ratio are associated with a physical model of gravity, we develop a mechanism to generate dynamical dark energy: a quintessence field coupled with gravity. Such a model causes the dark energy to behave differently in the early and late time universe. In the radiation-dominated era and matter-dominated era, the related analytical solutions of the quintessence field have an interesting property: starting as a constant field, then oscillating as the universe expands. Through a Markov Chain Monte Carlo search of the parameter space with local measurements Type Ia supernovae in the Bayesian framework, the probed range of H0 within 1 sigma overlaps the H0 value inferred from the Planck CMB dataset using the Lambda-CDM model.
ARIA Adversarially Robust Image Attribution for Content Provenance ; Image attribution matching an image back to a trusted source is an emerging tool in the fight against online misinformation. Deep visual fingerprinting models have recently been explored for this purpose. However, they are not robust to tiny input perturbations known as adversarial examples. First, we illustrate how to generate valid adversarial images that can easily cause incorrect image attribution. Then we describe an approach to prevent imperceptible adversarial attacks on deep visual fingerprinting models via robust contrastive learning. The proposed training procedure leverages training on l-infinity-bounded adversarial examples; it is conceptually simple and incurs only a small computational overhead. The resulting models are substantially more robust, are accurate even on unperturbed images, and perform well even over a database with millions of images. In particular, we achieve 91.6% standard and 85.1% adversarial recall under l-infinity-bounded perturbations on manipulated images, compared to 80.1% and 0.0% from prior work. We also show that robustness generalizes to other types of imperceptible perturbations unseen during training. Finally, we show how to train an adversarially robust image comparator model for detecting editorial changes in matched images.
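To make the threat model concrete, a minimal FGSM-style l-infinity-bounded attack can be sketched as below, with finite differences standing in for backpropagation. This illustrates the class of perturbations the robust training defends against, not the paper's specific attack on fingerprinting models.

```python
def linf_adversary(x, loss_fn, eps, h=1e-5):
    """FGSM-style l-infinity-bounded perturbation: shift each coordinate
    by eps in the direction that increases the loss (the gradient sign),
    with the gradient estimated here by central finite differences."""
    grad_sign = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g = (loss_fn(xp) - loss_fn(xm)) / (2 * h)
        grad_sign.append(1.0 if g > 0 else (-1.0 if g < 0 else 0.0))
    # Every coordinate moves by at most eps, so ||adv - x||_inf <= eps.
    return [xi + eps * s for xi, s in zip(x, grad_sign)]

loss = lambda v: sum(vi ** 2 for vi in v)  # stand-in for the matching loss
x = [1.0, -2.0]
adv = linf_adversary(x, loss, eps=0.1)
print(adv)                                      # [1.1, -2.1]
print(max(abs(a - b) for a, b in zip(adv, x)))  # ~0.1, within the budget
```

Robust contrastive training as described above would generate such bounded perturbations of training images on the fly and require the fingerprint of the perturbed image to stay close to that of the original.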
An Exact Consistent Tangent Stiffness Matrix for a Second Gradient Model for Porous Plastic Solids Derivation and Assessment ; It is well known that the use of a consistent tangent stiffness matrix is critical to obtain quadratic convergence of the global Newton iterations in finite element simulations of problems involving elastoplastic deformation of metals, especially for large scale metallic structure problems. In this paper we derive an exact consistent stiffness matrix for a porous material model, the GLPD model developed by Gologanu, Leblond, Perrin, and Devaux for ductile fracture of porous metals, based on generalized continuum mechanics assumptions. Full expressions for the derivatives of the Cauchy stress tensor and the generalized moment stress tensor involved in the model are provided. The effectiveness and robustness of the proposed tangent stiffness moduli are assessed by applying the formulation in finite element simulations of ductile fracture problems. Comparisons between the performance of our stiffness matrix and the standard ones are also provided.
Variational Interpretable Learning from Multiview Data ; The main idea of canonical correlation analysis CCA is to map different views onto a common latent space with maximum correlation. We propose a deep interpretable variational canonical correlation analysis DICCA for multiview learning. The developed model extends the existing latent variable model for linear CCA to nonlinear models through the use of deep generative networks. DICCA is designed to disentangle both the shared and viewspecific variations for multiview data. To further make the model more interpretable, we place a sparsityinducing prior on the latent weight with a structured variational autoencoder that is comprised of viewspecific generators. Empirical results on realworld datasets show that our methods are competitive across domains.
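The linear CCA that DICCA generalizes has a closed-form solution: whiten each view and take the SVD of the whitened cross-covariance, whose singular values are the canonical correlations. A minimal numpy sketch of that classical baseline (not the deep variational model itself):

```python
import numpy as np

def linear_cca(X, Y, reg=1e-6):
    """Classical linear CCA: canonical correlations between two views.

    X, Y : (n_samples, d1) and (n_samples, d2) arrays, assumed centered.
    reg  : small ridge term keeping the covariances positive definite.
    """
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # whiten each view via Cholesky factors, then SVD the cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    return np.linalg.svd(Wx @ Sxy @ Wy.T, compute_uv=False)
```

DICCA replaces the linear maps implied by this whitening with deep generative networks while keeping the same goal of maximally correlated shared latents.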
Learning Conditional Variational Autoencoders with Missing Covariates ; Conditional variational autoencoders CVAEs are versatile deep generative models that extend the standard VAE framework by conditioning the generative model with auxiliary covariates. The original CVAE model assumes that the data samples are independent, whereas more recent conditional VAE models, such as the Gaussian process GP prior VAEs, can account for complex correlation structures across all data samples. While several methods have been proposed to learn standard VAEs from partially observed datasets, these methods fall short for conditional VAEs. In this work, we propose a method to learn conditional VAEs from datasets in which auxiliary covariates can contain missing values as well. The proposed method augments the conditional VAEs with a prior distribution for the missing covariates and estimates their posterior using amortised variational inference. At training time, our method marginalises the uncertainty associated with the missing covariates while simultaneously maximising the evidence lower bound. We develop computationally efficient methods to learn CVAEs and GP prior VAEs that are compatible with minibatching. Our experiments on simulated datasets as well as on a clinical trial study show that the proposed method outperforms previous methods in learning conditional VAEs from nontemporal, temporal, and longitudinal datasets.
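The core idea of marginalising over missing covariates can be illustrated with plain Monte Carlo: place a prior on the unobserved entries, sample them, and average the decoder output. The paper uses amortised variational inference rather than this naive scheme, and the `decoder` below is a hypothetical stand-in, so treat this only as a sketch of the marginalisation step:

```python
import numpy as np

def marginalise_missing(decoder, z, c_obs, missing_mask, rng, n_samples=100):
    """Approximate E_c[decoder(z, c)] when some covariate entries are
    missing, via Monte Carlo over a standard-normal prior on them.

    c_obs        : covariate vector with placeholder values at missing slots
    missing_mask : boolean mask marking the missing entries
    """
    outs = []
    for _ in range(n_samples):
        c = c_obs.copy()
        c[missing_mask] = rng.standard_normal(missing_mask.sum())  # prior draw
        outs.append(decoder(z, c))
    return np.mean(outs, axis=0)
```

Replacing the prior draw with samples from an amortised posterior over the missing covariates, and averaging inside the evidence lower bound rather than at prediction time, gives the flavour of the proposed training objective.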
Classifications of magnetized T4 and T4Z2 orbifold models ; We study constructions and classifications of threegeneration models based on magnetized T4 and T4Z2 orbifold as candidates of the compact space. We focus on chiral fermion zeromode wave functions in the extra dimensions. Freedoms of constant gauge fields, called ScherkSchwarz phases are taken into account. Infinite number of threegeneration models are yielded, corresponding to the ways in which the magnetic flux can be turned on. We classify them in a systematic manner, clarifying the relationship between different models. The Higgs sector is also studied by analyzing possible assignments of the magnetic flux and ScherkSchwarz phases, etc. to left and righthanded fermions.
Stepwise Feature Fusion Local Guides Global ; Colonoscopy, currently the most efficient and recognized colon polyp detection technology, is necessary for early screening and prevention of colorectal cancer. However, due to the varying size and complex morphological features of colonic polyps as well as the indistinct boundary between polyps and mucosa, accurate segmentation of polyps is still challenging. Deep learning has become popular for accurate polyp segmentation tasks with excellent results. However, due to the structure of polyp images and the varying shapes of polyps, it is easy for existing deep learning models to overfit the current dataset. As a result, the model may fail to generalize to unseen colonoscopy data. To address this, we propose a new stateoftheart model for medical image segmentation, the SSFormer, which uses a pyramid Transformer encoder to improve the generalization ability of models. Specifically, our proposed Progressive Locality Decoder can be adapted to the pyramid Transformer backbone to emphasize local features and restrict attention dispersion. The SSFormer achieves stateoftheart performance in both learning and generalization assessment.
Detection of AI Synthesized Hindi Speech ; The recent advancements in generative artificial speech models have made possible the generation of highly realistic speech signals. At first, it seems exciting to obtain these artificially synthesized signals such as speech clones or deep fakes, but if left unchecked, they may lead us to a digital dystopia. A primary focus in audio forensics is validating the authenticity of speech. Though some solutions have been proposed for English speech, the detection of synthetic Hindi speech has not gained much attention. Here, we propose an approach for discriminating AI synthesized Hindi speech from actual human speech. We have exploited the Bicoherence Phase, Bicoherence Magnitude, Mel Frequency Cepstral Coefficient MFCC, Delta Cepstral, and Delta Square Cepstral as the discriminating features for machine learning models. We also extend the study to deep neural networks for extensive experiments, specifically VGG16 and a homemade CNN as the architecture models. We obtained an accuracy of 99.83% with VGG16 and 99.99% with the homemade CNN model.
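Bicoherence, the first feature family named above, is the normalized bispectrum: it measures quadratic phase coupling between frequency pairs and lies in [0, 1] under the usual Cauchy-Schwarz normalization. A minimal segment-averaged estimator (a generic sketch of the standard definition; the paper's exact estimation settings, window, and segment length are not specified in the abstract):

```python
import numpy as np

def bicoherence(x, seg_len=64):
    """Magnitude bicoherence of a 1-D signal, estimated by averaging the
    bispectrum over non-overlapping windowed segments. Values are in [0, 1]."""
    n_seg = len(x) // seg_len
    segs = x[: n_seg * seg_len].reshape(n_seg, seg_len)
    X = np.fft.fft(segs * np.hanning(seg_len), axis=1)
    k = seg_len // 4                       # keep f1 + f2 inside the band
    f1 = np.arange(k)[:, None]
    f2 = np.arange(k)[None, :]
    T = X[:, f1] * X[:, f2]                # X(f1) X(f2) per segment
    S = X[:, f1 + f2]                      # X(f1 + f2) per segment
    num = np.abs(np.mean(T * np.conj(S), axis=0)) ** 2
    den = np.mean(np.abs(T) ** 2, axis=0) * np.mean(np.abs(S) ** 2, axis=0)
    return np.sqrt(num / (den + 1e-12))
```

The magnitude and phase of the underlying bispectrum can then be flattened into feature vectors and fed to an ordinary classifier, alongside MFCC and delta-cepstral features.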
Understanding Iterative Revision from HumanWritten Text ; Writing is, by nature, a strategic, adaptive, and more importantly, an iterative process. A crucial part of writing is editing and revising the text. Previous works on text revision have focused on defining edit intention taxonomies within a single domain or developing computational models with a single level of edit granularity, such as sentencelevel edits, which differs from humans' revision cycles. This work describes IteraTeR, the first largescale, multidomain, editintention annotated corpus of iteratively revised text. In particular, IteraTeR is collected based on a new framework to comprehensively model the iterative text revisions that generalize to various domains of formal writing, edit intentions, revision depths, and granularities. When we incorporate our annotated edit intentions, both generative and editbased text revision models significantly improve automatic evaluations. Through our work, we better understand the text revision process, making vital connections between edit intentions and writing quality, enabling the creation of diverse corpora to support computational modeling of iterative text revisions.
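Before edit intentions can be annotated, the raw edits between two revisions must be aligned and extracted. A minimal token-level extractor using the standard library (a generic sketch, not the IteraTeR annotation pipeline, which works at configurable granularities):

```python
import difflib

def extract_edits(before, after):
    """Token-level edits between two revisions of a text, returned as
    (operation, old_span, new_span) tuples; 'equal' spans are dropped."""
    a, b = before.split(), after.split()
    sm = difflib.SequenceMatcher(a=a, b=b)
    return [(op, " ".join(a[i1:i2]), " ".join(b[j1:j2]))
            for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal"]
```

Each extracted (operation, old, new) triple is the unit to which an edit intention label such as clarity, fluency, or coherence would then be attached.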
Betweenness Approximation for Edge Computing with Hypergraph Neural Network ; Edge computing is in high demand to realize the full potential of the Internet of Things IoT, since various IoT systems have been generating big data, facilitating modern latencysensitive applications. As a basic problem, network dismantling tries to find an optimal set of nodes whose removal will maximize the connectivity degradation in a network. However, current approaches mainly focus on simple networks modeling only pairwise interactions between two nodes, while higherorder groupwise interactions among an arbitrary number of nodes are ubiquitous in the real world and can be better modeled as a hypernetwork. The structural difference between simple networks and hypernetworks restricts the direct application of simple network dismantling methods to hypernetworks. Even though some hypernetwork centrality measures, such as betweenness, can be used for hypernetwork dismantling, they face the problem of balancing effectiveness and efficiency. Therefore, we propose a betweenness approximationbased hypernetwork dismantling method with a hypergraph neural network, namely HND. HND trains a transferable hypergraph neural networkbased regression model on plenty of generated smallscale synthetic hypernetworks in a supervised way, and utilizes the welltrained model to approximate nodes' betweenness. Extensive experiments on five real hypernetworks demonstrate the effectiveness and efficiency of HND compared with various baselines.
Diffusion Models for Medical Anomaly Detection ; In medical applications, weakly supervised anomaly detection methods are of great interest, as only imagelevel annotations are required for training. Current anomaly detection methods mainly rely on generative adversarial networks or autoencoder models. Those models are often complicated to train or have difficulties to preserve fine details in the image. We present a novel weakly supervised anomaly detection method based on denoising diffusion implicit models. We combine the deterministic iterative noising and denoising scheme with classifier guidance for imagetoimage translation between diseased and healthy subjects. Our method generates very detailed anomaly maps without the need for a complex training procedure. We evaluate our method on the BRATS2020 dataset for brain tumor detection and the CheXpert dataset for detecting pleural effusions.
ImageNetPatch A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches ; Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machinelearning model to misclassify it. However, their optimization is computationally demanding, and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNetPatch, a dataset to benchmark machinelearning models against adversarial patches. It consists of a set of patches, optimized to generalize across different models, and readily applicable to ImageNet data after preprocessing them with affine transformations. This process enables an approximate yet faster robustness evaluation, leveraging the transferability of adversarial perturbations. We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models. We conclude by discussing how our dataset could be used as a benchmark for robustness, and how our methodology can be generalized to other domains. We open source our dataset and evaluation code at httpsgithub.compralabImageNetPatch.
When SelfGenerated Gradients interact with Expansion by Cell Division and Diffusion. Analysis of a Minimal Model ; We investigate a minimal model for cell propagation involving migration along selfgenerated signaling gradients and cell division, which has been proposed in an earlier study. The model consists in a system of two coupled parabolic diffusionadvectionreaction equations. Because of a discontinuous advection term, the Cauchy problem should be handled with care. We first establish existence and uniqueness locally in time through the reduction of the problem to the wellposedness of an ODE, under a monotonicity condition on the signaling gradient. Then, we carry out an asymptotic analysis of the system. All positive and bounded traveling waves of the system are computed and an explicit formula for the minimal wave speed is deduced. An analysis on the inside dynamics of the wave establishes a dichotomy between pushed and pulled waves depending on the strength of the advection. We identified the minimal wave speed as the biologically relevant speed, in a weak sense, that is, the solution propagates slower, respectively faster, than the minimal wave speed, up to time extraction. Finally, we extend the study to a hyperbolic twovelocity model with persistence.
Productivity within the ETAS seismicity model ; The productivity of a magnitude m event can be characterized in term of triggered events of magnitude above mDelta it is the number of direct descendants nuDelta and the number of all descendants VDelta. There is evidence in favour of the discrete exponential distribution for both nuDelta and VDelta with a dominant magnitude m the case of aftershock cluster. We consider the general Epidemic Type Aftershock Sequence ETAS model adapted to any distribution of nuDelta. It turns out that the branching structure of the model excludes the possibility of having exponential distributions for both productivity characteristics at once. We have analytically investigated the features of the VDelta distribution within a wide class of ETAS models. We show the fundamental difference in tail behavior of the VDeltadistributions for generaltype clusters and clusters with a dominant initial magnitude the tail is heavy in the former case and light in the latter. The real data demonstrate the possibilities of this kind. This result provides theoretical and practical constraints for distributional analysis of VDelta.
Novel counterexample to the NelsonSeiberg theorem ; We present a new type of counterexample to the NelsonSeiberg theorem. It is a generic Rsymmetric WessZumino model with nine chiral superfields, including one field of Rcharge 2 and no Rcharge 0 field. As in previous counterexamples, the model gives a set of degenerate supersymmetric vacua with a nonzero expectation value for a pair of oppositely Rcharged fields. However, one of these fields appears quadratically in the superpotential, and many other fields with nonzero Rcharges gain nonzero expectation values at the vacuum, and so this model escapes the sufficient condition for counterexamples established in previous literature. Thus there are still open problems in the relation of Rsymmetries to supersymmetry breaking in generic models.
Addressing the Gravitational Wave Collider Inverse Problem ; We provide a roadmap for analyzing the interplay between hypothetical future collider observations and the detection of a gravitational wave signal produced by a strong first order electroweak phase transition in beyond the Standard Model theories. We rely on a combination of a dimensionally reduced, threedimensional effective field theory and results of both perturbation theory and nonperturbative lattice simulations. We apply these stateoftheart methods to the real scalar triplet extension of the Standard Model, which admits a possible twostep electroweak symmetrybreaking thermal history. We find that 1 a first order transition during the second step could generate a signal accessible to LISA generation detectors and 2 the gravitational wave signal displays a strong sensitivity to the portal coupling between the new scalar and the Higgs boson. We illustrate how a combination of direct and indirect measurements of the new scalar properties, in combination with the presence or absence of a gravitational wave detection, could test the model and identify the values of the model parameters.
Deep Class Incremental Learning from Decentralized Data ; In this paper, we focus on a new and challenging decentralized machine learning paradigm in which there are continuous inflows of data to be addressed and the data are stored in multiple repositories. We initiate the study of data decentralized classincremental learning DCIL by making the following contributions. Firstly, we formulate the DCIL problem and develop the experimental protocol. Secondly, we introduce a paradigm to create a basic decentralized counterpart of typical centralized classincremental learning approaches, and as a result, establish a benchmark for the DCIL study. Thirdly, we further propose a Decentralized Composite knowledge Incremental Distillation framework DCID to transfer knowledge from historical models and multiple local sites to the general model continually. DCID consists of three main components namely local classincremental learning, collaborated knowledge distillation among local models, and aggregated knowledge distillation from local models to the general one. We comprehensively investigate our DCID framework by using different implementations of the three components. Extensive experimental results demonstrate the effectiveness of our DCID framework. The codes of the baseline methods and the proposed DCIL will be released at httpsgithub.comzxxxxhDCIL.