Reproducibility Report: Path Planning using Neural A* Search ; The following paper is a reproducibility report for Path Planning using Neural A* Search, published in ICML 2021, as part of the ML Reproducibility Challenge 2021. The original paper proposes the Neural A* planner and claims it achieves an optimal balance between the reduction of node expansions and path accuracy. We verify this claim by reimplementing the model in a different framework and reproducing the data published in the original paper. We have also provided a code-flow diagram to aid comprehension of the code structure. As extensions to the original paper, we explore the effects of (1) generalizing the model by training it on a shuffled dataset, (2) introducing dropout, (3) implementing empirically chosen hyperparameters as trainable parameters in the model, (4) altering the network model to Generative Adversarial Networks (GANs) to introduce stochasticity, (5) modifying the encoder from U-Net to U-Net++, and (6) incorporating cost maps obtained from the Neural A* module in other variations of A* search.
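The search routine the report builds on can be illustrated with a plain grid A* whose per-cell traversal costs stand in for the cost map that, in Neural A*, an encoder would predict. Everything below (the grid layout, the cost values, the function names) is an illustrative sketch, not the paper's implementation.

```python
import heapq
from itertools import count

def astar(cost_map, start, goal):
    """A* over a 4-connected grid. cost_map[r][c] is the cost of entering
    cell (r, c); in Neural A*, this map would come from a learned encoder."""
    rows, cols = len(cost_map), len(cost_map[0])

    def h(node):  # Manhattan heuristic, admissible when all costs are >= 1
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    tie = count()  # tie-breaker so the heap never compares node/parent tuples
    frontier = [(h(start), 0.0, next(tie), start, None)]
    parents = {}
    best_g = {start: 0.0}
    while frontier:
        _, g, _, node, parent = heapq.heappop(frontier)
        if node in parents:
            continue  # already expanded via a path at least as cheap
        parents[node] = parent
        if node == goal:  # walk the parent chain back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols:
                ng = g + cost_map[nbr[0]][nbr[1]]
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    heapq.heappush(frontier, (ng + h(nbr), ng, next(tie), nbr, node))
    return None  # goal unreachable
```

A learned cost map biases this search away from regions the model deems unpromising, which is how node expansions can be reduced without sacrificing path accuracy.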
Integral models of moduli spaces of shtukas with deep Bruhat-Tits level structures ; We construct integral models for moduli spaces of shtukas with deep Bruhat-Tits level structures. We embed the moduli space of global shtukas for a deep Bruhat-Tits group scheme into the limit of the moduli spaces of shtukas for all associated parahoric group schemes. Its schematic image defines an integral model of the moduli space of shtukas with deep Bruhat-Tits level with favourable properties: they admit proper, surjective and generically étale level maps as well as a natural Newton stratification. In the Drinfeld case, this general construction of integral models recovers the moduli space of Drinfeld shtukas with Drinfeld level structures.
Towards Parametric Speech Synthesis Using Gaussian-Markov Model of Spectral Envelope and Wavelet-Based Decomposition of F0 ; Neural network-based Text-to-Speech has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2, FastSpeech, FastPitch) usually generate a Mel-spectrogram from text and then synthesize speech using a vocoder (e.g., WaveNet, WaveGlow, HiFi-GAN). Compared with traditional parametric approaches (e.g., STRAIGHT and WORLD), neural-vocoder-based end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust and lacks controllability. In this work, we propose a novel updated vocoder, which is a simple signal model that is easy to train and easy to use for generating waveforms. We use the Gaussian-Markov model for robust learning of the spectral envelope and wavelet-based statistical signal processing to characterize and decompose F0 features. It can retain the fine spectral envelope and achieve high controllability of natural speech. The experimental results demonstrate that our proposed vocoder achieves better naturalness of reconstructed speech than the conventional STRAIGHT vocoder, slightly better than WaveNet, and somewhat worse than WaveRNN.
CommitBART: A Large Pre-trained Model for GitHub Commits ; GitHub commits, which record code changes with natural language messages for description, play a critical role in helping software developers comprehend software evolution. To promote the development of the open-source software community, we collect a commit benchmark including over 7.99 million commits across 7 programming languages. Based on this benchmark, we present CommitBART, a large pre-trained encoder-decoder Transformer model for GitHub commits. The model is pre-trained on three categories (i.e., denoising objectives, cross-modal generation and contrastive learning) comprising six pre-training tasks to learn commit fragment representations. Furthermore, we unify a "commit intelligence" framework with one understanding task and three generation tasks for commits. Comprehensive experiments on these tasks demonstrate that CommitBART significantly outperforms previous pre-trained works for code. Further analysis also reveals that each pre-training task enhances the model's performance.
An Adaptively Resized Parametric Bootstrap for Inference in High-dimensional Generalized Linear Models ; Accurate statistical inference in logistic regression models remains a critical challenge when the ratio between the number of parameters and the sample size is not negligible. This is because approximations based on either classical asymptotic theory or bootstrap calculations are grossly off the mark. This paper introduces a resized bootstrap method to infer model parameters in arbitrary dimensions. As in the parametric bootstrap, we resample observations from a distribution, which depends on an estimated regression coefficient sequence. The novelty is that this estimate is actually far from the maximum likelihood estimate (MLE). This estimate is informed by recent theory studying properties of the MLE in high dimensions, and is obtained by appropriately shrinking the MLE towards the origin. We demonstrate that the resized bootstrap method yields valid confidence intervals in both simulated and real data examples. Our methods extend to other high-dimensional generalized linear models.
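The resampling recipe (fit, shrink toward the origin, simulate from the shrunken coefficients, refit on each draw) can be sketched in a few lines. The Newton MLE solver, the fixed shrinkage factor used in the test, and all names below are illustrative assumptions; the paper derives the resizing from high-dimensional theory rather than fixing it by hand.

```python
import numpy as np

def logistic_mle(X, y, iters=25):
    """Plain Newton-Raphson MLE for logistic regression (no intercept)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = np.clip(X @ beta, -30, 30)  # guard against overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))
        grad = X.T @ (y - p)
        H = X.T @ (X * (p * (1 - p))[:, None])
        beta = beta + np.linalg.solve(H + 1e-8 * np.eye(len(beta)), grad)
    return beta

def resized_bootstrap_ci(X, beta_resized, n_boot=200, level=0.95, seed=0):
    """Parametric bootstrap run at a *shrunken* coefficient vector:
    simulate y from the logistic model at beta_resized, refit the MLE on
    each draw, and report percentile confidence intervals."""
    rng = np.random.default_rng(seed)
    p = 1.0 / (1.0 + np.exp(-(X @ beta_resized)))
    fits = np.array([
        logistic_mle(X, (rng.random(len(p)) < p).astype(float))
        for _ in range(n_boot)
    ])
    a = (1.0 - level) / 2.0
    return np.quantile(fits, a, axis=0), np.quantile(fits, 1 - a, axis=0)
```

The shrinkage applied to the MLE before resampling is what corrects the bootstrap when the parameter-to-sample-size ratio is non-negligible.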
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks ; A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce a general-purpose multimodal foundation model BEiT-3, which achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked "language" modeling on images (Imglish), texts (English), and image-text pairs ("parallel sentences") in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
Non-Hermitian quantum system generated from two coupled Sachdev-Ye-Kitaev models ; We show that a non-Hermitian two-coupled Sachdev-Ye-Kitaev (SYK) model can provide a thermodynamic structure equivalent to that of the Hermitian two-coupled SYK model. The energy spectrum, the entanglement degree of the ground states and the low-energy effective action of this model are not influenced by the non-Hermiticity. The novel biorthogonal ground states demonstrate that, of the two SYK sites, one can be in the ground state and the other in the Schwarzian excited state by tuning the non-Hermiticity. We find evidence that the free energy is independent of the non-Hermiticity.
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning ; Thompson Sampling is one of the most effective methods for contextual bandits and has been generalized to posterior sampling for certain MDP settings. However, existing posterior sampling methods for reinforcement learning are limited by being model-based or lacking worst-case theoretical guarantees beyond linear MDPs. This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees. We introduce novel proof techniques to show that, under suitable conditions, the worst-case regret of our posterior sampling method matches the best known results of optimization-based methods. In the linear MDP setting, the regret of our algorithm scales linearly with the dimension, compared to a quadratic dependence of the existing posterior-sampling-based exploration algorithms.
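The bandit special case the abstract starts from is easy to make concrete. Below is a minimal Beta-Bernoulli Thompson Sampling loop — an illustration of posterior sampling in its simplest setting, not the paper's model-free RL algorithm.

```python
import random

def thompson_sampling(true_means, horizon=5000, seed=0):
    """Beta-Bernoulli Thompson Sampling: draw one sample from each arm's
    Beta posterior, pull the arm with the largest draw, update its posterior."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta(1, 1) uniform priors
    beta = [1] * k
    pulls = [0] * k
    for _ in range(horizon):
        draws = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=draws.__getitem__)
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Posterior sampling for RL replaces the Beta posterior over arm means with a posterior over value functions or models, which is where the model-based versus model-free distinction in the abstract enters.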
Improving on the Markov-Switching Regression Model by the Use of an Adaptive Moving Average ; Regime detection is vital for the effective operation of trading and investment strategies. However, the most popular means of doing this, the two-state Markov-switching regression model (MSR), is not an optimal solution, as two volatility states do not fully capture the complexity of the market. Past attempts to extend this model to a multi-state MSR have proved unstable, potentially expensive in terms of trading costs, and can only divide the market into states with varying levels of volatility, which is not the only aspect of market dynamics relevant to trading. We demonstrate it is possible and valuable to instead segment the market into more than two states, not on the basis of volatility alone but on a combined basis of volatility and trend, by combining the two-state MSR with an adaptive moving average. A realistic trading framework is used to demonstrate that using two selected states from the four thus generated leads to better trading performance than traditional benchmarks, including the two-state MSR. In addition, the proposed model could serve as a label generator for machine learning tasks used in predicting financial regimes ex ante.
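A stripped-down version of the state construction can be sketched by crossing a trend signal (price versus an exponential moving average) with a volatility signal (rolling standard deviation versus its median), yielding four states. The thresholding below stands in for the fitted two-state MSR, and the span/window choices are arbitrary illustrations, not the paper's calibration.

```python
import statistics

def label_regimes(prices, span=10):
    """Assign each step one of four states: (trend in {up, down}) x
    (volatility in {high-vol, low-vol}). Trend compares price to an EMA;
    volatility compares a rolling std of returns to its overall median."""
    alpha = 2.0 / (span + 1)
    ema = [prices[0]]
    for p in prices[1:]:
        ema.append(alpha * p + (1 - alpha) * ema[-1])
    rets = [b - a for a, b in zip(prices, prices[1:])]
    vol = [statistics.pstdev(rets[max(0, i - span + 1):i + 1])
           for i in range(len(rets))]
    med = statistics.median(vol)
    return [
        ("up" if prices[i + 1] > ema[i + 1] else "down",
         "high-vol" if vol[i] > med else "low-vol")
        for i in range(len(rets))
    ]
```

Selecting a subset of the resulting states (for example, trading only the low-volatility up-trend state) is the kind of rule the paper evaluates inside its realistic trading framework.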
Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries ; Adversarial examples are inputs intentionally generated to fool a deep neural network. Recent studies have proposed unrestricted adversarial attacks that are not norm-constrained. However, the previous unrestricted attack methods still have limitations in fooling real-world applications in a black-box setting. In this paper, we present a novel method for generating unrestricted adversarial examples using GAN where an attacker can only access the top-1 final decision of a classification model. Our method, Latent-HSJA, efficiently leverages the advantages of a decision-based attack in the latent space and successfully manipulates the latent vectors to fool the classification model. With extensive experiments, we demonstrate that our proposed method is efficient in evaluating the robustness of classification models with limited queries in a black-box setting. First, we demonstrate that our targeted attack method is query-efficient in producing unrestricted adversarial examples for a facial identity recognition model that contains 307 identities. Then, we demonstrate that the proposed method can also successfully attack a real-world celebrity recognition service.
Generalized relativistic small-core pseudopotentials accounting for quantum electrodynamic effects: construction and pilot applications ; A simple procedure to incorporate one-loop quantum electrodynamic (QED) corrections into the generalized (Gatchina) nonlocal shape-consistent relativistic pseudopotential model is described. The pseudopotentials for Lu, Tl, and Ra, replacing only inner core shells (with principal quantum numbers n ≤ 3 for the two former elements and n ≤ 4 for the latter one), are derived from the solutions of reference atomic SCF problems with the Dirac-Coulomb-Breit Hamiltonian, to which the model Lamb shift operator is added. QED contributions to atomic valence excitation energies evaluated at the SCF level are demonstrated to exceed the errors introduced by the pseudopotential approximation itself by an order of magnitude. Pilot applications of the new model to calculations of excitation energies of two-valence-electron atomic systems using the intermediate-Hamiltonian relativistic Fock space coupled cluster method, reformulated here for incomplete main model spaces, are reported. Implications for high-accuracy molecular excited state calculations are discussed.
A General Model-Based Extended State Observer with Built-In Zero Dynamics ; A general model-based extended state observer (GMBESO) is proposed for single-input single-output linear time-invariant systems with a given state-space model, where the total disturbance, a lump sum of model uncertainties and external disturbances, is defined as an extended state in the same manner as in the original formulation of the ESO. The conditions for the existence of such an observer, however, are shown for the first time to be: (1) the original plant is observable; and (2) there is no invariant zero between the plant output and the total disturbance. Then, the finite-step convergence and error characteristics of the GMBESO are shown by exploiting its inherent connection to the well-known unknown input observer (UIO). Furthermore, it is shown that, with the relative degree of the plant greater than one and the observer eigenvalues all placed at the origin, the GMBESO produces the identical disturbance estimation to that of the UIO. Finally, an improved GMBESO with built-in zero dynamics is proposed for those plants with zero dynamics, which is a problem that has not been addressed in any existing ESO designs.
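The extended-state idea itself (prior to the paper's model-based generalization) is easy to demonstrate: treat the unknown disturbance as one extra observer state and estimate it from the output error. The sketch below simulates a classic bandwidth-parameterized linear ESO on a double integrator with a constant disturbance; the plant, gains, and step size are illustrative choices, not the GMBESO design.

```python
def simulate_eso(steps=20000, dt=0.001, wo=50.0, f=1.0):
    """Linear ESO for the double integrator y'' = u + f, where the unknown
    total disturbance f is carried as an extended state. Observer gains
    place all error poles at -wo (the usual bandwidth parameterization).
    Returns the final disturbance estimate z3, which should approach f."""
    l1, l2, l3 = 3 * wo, 3 * wo ** 2, wo ** 3
    x1 = x2 = 0.0          # true plant states (output and its derivative)
    z1 = z2 = z3 = 0.0     # observer states; z3 estimates f
    u = 0.0                # zero control input, for simplicity
    for _ in range(steps):
        e = x1 - z1        # output estimation error drives all corrections
        z1, z2, z3 = (z1 + dt * (z2 + l1 * e),
                      z2 + dt * (z3 + u + l2 * e),
                      z3 + dt * (l3 * e))
        x1, x2 = x1 + dt * x2, x2 + dt * (u + f)
    return z3
```

The GMBESO in the abstract generalizes this construction to arbitrary observable LTI models and characterizes exactly when such a disturbance estimate exists (no invariant zero between the output and the total disturbance).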
Graph Zeta Functions and Wilson Loops in the Kazakov-Migdal Model ; In this paper, we consider an extended Kazakov-Migdal model defined on an arbitrary graph. The partition function of the model, which is expressed as the summation of all Wilson loops on the graph, turns out to be represented by the Bartholdi zeta function weighted by unitary matrices on the edges of the graph. The partition function on the cycle graph at finite N is expressed by the generating function of the generalized Catalan numbers. The partition function on an arbitrary graph can be exactly evaluated at large N, where it is expressed as an infinite product of a kind of deformed Ihara zeta function. The non-zero-area Wilson loops do not contribute to the leading part of the 1/N expansion of the free energy but to the next-to-leading part. The semicircle distribution of the eigenvalues of the scalar fields is still an exact solution of the model at large N on an arbitrary regular graph, but it reflects only zero-area Wilson loops.
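The ordinary Catalan numbers underlying the generalized ones mentioned for the cycle graph satisfy the convolution recurrence encoded by their generating function c(x) = (1 - sqrt(1 - 4x)) / (2x). A quick sketch, of the ordinary numbers only — the paper's deformation is not reproduced here:

```python
from math import comb

def catalan(n):
    """n-th Catalan number, C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_by_recurrence(n):
    """Same numbers via the convolution C_{m+1} = sum_i C_i * C_{m-i},
    which is exactly the functional relation c(x) = 1 + x * c(x)^2 for the
    generating function c(x) = (1 - sqrt(1 - 4x)) / (2x)."""
    c = [1]
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]
```

The generating-function identity is the elementary analogue of the way the paper packages the cycle-graph partition function at finite N.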
Spin-polarized hot electron transport versus spin pumping mediated by local heating ; A 'toy model' aimed at capturing the essential physics is presented that jointly describes spin-polarized hot electron transport and spin pumping driven by local heating. These two processes both contribute to spin-current generation in laser-excited magnetic heterostructures. The model is used to compare the two contributions directly. The spin-polarized hot electron current is modeled as one generation of hot electrons with a spin-dependent excitation and relaxation scheme. Upon decay, the excess energy of the hot electrons is transferred to a thermalized electron bath. The elevated electron temperature leads to an increased rate of electron-magnon scattering processes and yields a local accumulation of spin. This process is dubbed spin pumping by local heating. The built-up spin accumulation is effectively driven out of the ferromagnetic system by interfacial electron transport. Within our model, the injected spin current is dominated by the contribution resulting from spin pumping, while the hot electron spin current remains relatively small. We derive that this observation is related to the ratio between the Fermi temperature and the Curie temperature, and we show what other fundamental parameters play a role.
Optimizing the Performative Risk under Weak Convexity Assumptions ; In performative prediction, a predictive model impacts the distribution that generates future data, a phenomenon that is ignored in classical supervised learning. In this closed-loop setting, the natural measure of performance, named performative risk (PR), captures the expected loss incurred by a predictive model after deployment. The core difficulty of using the performative risk as an optimization objective is that the data distribution itself depends on the model parameters. This dependence is governed by the environment and not under the control of the learner. As a consequence, even the choice of a convex loss function can result in a highly non-convex PR minimization problem. Prior work has identified a pair of general conditions on the loss and the mapping from model parameters to distributions that implies the convexity of the performative risk. In this paper, we relax these assumptions and focus on obtaining weaker notions of convexity, without sacrificing the amenability of the PR minimization problem for iterative optimization methods.
Model Selection in High-Dimensional Block-Sparse Linear Regression ; Model selection is an indispensable part of data analysis, arising very frequently in fitting and prediction tasks. In this paper, we tackle the problem of model selection in a general linear regression where the parameter matrix possesses a block-sparse structure, i.e., the non-zero entries occur in clusters or blocks, and the number of such non-zero blocks is very small compared to the parameter dimension. Furthermore, a high-dimensional setting is considered, where the parameter dimension is quite large compared to the number of available measurements. To perform model selection in this setting, we present an information criterion that is a generalization of the Extended Bayesian Information Criterion-Robust (EBIC-R) and that takes into account both the block structure and the high-dimensionality scenario. The analytical steps for deriving the EBIC-R for this setting are provided. Simulation results show that the proposed method performs considerably better than the existing state-of-the-art methods and achieves empirical consistency at large sample sizes and/or at high SNR.
Data needs for integrated economic-epidemiological models of pandemic mitigation policies ; The COVID-19 pandemic and the mitigation policies implemented in response to it have resulted in economic losses worldwide. Attempts to understand the relationship between economics and epidemiology have led to a new generation of integrated mathematical models. The data needs for these models transcend those of the individual fields, especially where human interaction patterns are closely linked with economic activity. In this article, we reflect upon modelling efforts to date, discussing the data needs that they have identified, both for understanding the consequences of the pandemic and policy responses to it through analysis of historic data, and for the further development of this new and exciting interdisciplinary field.
Multi-Document Scientific Summarization from a Knowledge Graph-Centric View ; Multi-Document Scientific Summarization (MDSS) aims to produce coherent and concise summaries for clusters of topic-relevant scientific papers. This task requires precise understanding of paper content and accurate modeling of cross-paper relationships. Knowledge graphs convey compact and interpretable structured information for documents, which makes them ideal for content modeling and relationship modeling. In this paper, we present KGSum, an MDSS model centred on knowledge graphs during both the encoding and decoding process. Specifically, in the encoding process, two graph-based modules are proposed to incorporate knowledge graph information into paper encoding, while in the decoding process, we propose a two-stage decoder, first generating knowledge graph information of the summary in the form of descriptive sentences, and then generating the final summary. Empirical results show that the proposed architecture brings substantial improvements over baselines on the Multi-XScience dataset.
DoubleMix: Simple Interpolation-Based Data Augmentation for Text Classification ; This paper proposes a simple yet effective interpolation-based data augmentation approach, termed DoubleMix, to improve the robustness of models in text classification. DoubleMix first leverages a couple of simple augmentation operations to generate several perturbed samples for each training datum, and then uses the perturbed data and original data to carry out a two-step interpolation in the hidden space of neural models. Concretely, it first mixes up the perturbed data into a synthetic sample and then mixes up the original data and the synthetic perturbed data. DoubleMix enhances models' robustness by learning the shifted features in hidden space. On six text classification benchmark datasets, our approach outperforms several popular text augmentation methods, including token-level, sentence-level, and hidden-level data augmentation techniques. Also, experiments in low-resource settings show our approach consistently improves models' performance when the training data is scarce. Extensive ablation studies and case studies confirm that each component of our approach contributes to the final performance, and show that our approach exhibits superior performance on challenging counterexamples. Additionally, visual analysis shows that the text features generated by our approach are highly interpretable. Our code for this paper can be found at https://github.com/declare-lab/DoubleMix.git.
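The two-step interpolation is simple enough to sketch with arrays standing in for hidden states. The weight distributions and the constraint keeping the original's weight at least 0.5 are illustrative assumptions about how such a mixup-style scheme is typically constrained, not the paper's exact hyperparameters.

```python
import numpy as np

def double_mix(hidden_orig, hidden_perturbed, rng, alpha=1.0):
    """Two-step interpolation in hidden space, sketching DoubleMix's recipe.
    hidden_orig: (d,) hidden state of the original example.
    hidden_perturbed: (n_views, d) hidden states of its perturbed versions."""
    # Step 1: mix the perturbed views into one synthetic sample.
    w = rng.dirichlet(alpha * np.ones(len(hidden_perturbed)))
    synthetic = w @ hidden_perturbed
    # Step 2: mix original and synthetic, keeping the original's weight
    # >= 0.5 so the original label remains the better description.
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    return lam * hidden_orig + (1.0 - lam) * synthetic
```

Because both steps are convex combinations, the mixed vector stays inside the convex hull of the original and perturbed hidden states, which is what "learning the shifted features in hidden space" exploits.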
Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand Rare Biomedical Words ; Prompt-based fine-tuning for pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain. However, tuning with prompts in the biomedical domain has not been investigated thoroughly. Biomedical words are often rare in the general domain, but quite ubiquitous in biomedical contexts, which dramatically deteriorates the performance of pre-trained models on downstream biomedical applications even after fine-tuning, especially in low-resource scenarios. We propose a simple yet effective approach to helping models learn rare biomedical words during tuning with prompts. Experimental results show that our method can achieve up to a 6% improvement on the biomedical natural language inference task without any extra parameters or training steps, using few-shot vanilla prompt settings.
Machine Reading, Fast and Slow: When Do Models Understand Language? ; Two of the most fundamental challenges in Natural Language Understanding (NLU) at present are: (a) how to establish whether deep learning-based models score highly on NLU benchmarks for the 'right' reasons; and (b) to understand what those reasons would even be. We investigate the behavior of reading comprehension models with respect to two linguistic 'skills': coreference resolution and comparison. We propose a definition for the reasoning steps expected from a system that would be 'reading slowly', and compare that with the behavior of five models of the BERT family of various sizes, observed through saliency scores and counterfactual explanations. We find that for comparison (but not coreference) the systems based on larger encoders are more likely to rely on the 'right' information, but even they struggle with generalization, suggesting that they still learn specific lexical patterns rather than the general principles of comparison.
Detecting Generated Scientific Papers using an Ensemble of Transformer Models ; This paper describes the neural models developed for the DAGPap22 shared task hosted at the Third Workshop on Scholarly Document Processing. This shared task targets the automatic detection of generated scientific papers. Our work focuses on comparing different transformer-based models as well as using additional datasets and techniques to deal with imbalanced classes. As a final submission, we utilized an ensemble of SciBERT, RoBERTa, and DeBERTa fine-tuned using a random oversampling technique. Our model achieved 99.24% in terms of F1-score. The official evaluation results have put our system in third place.
Dimension matters when modeling network communities in hyperbolic spaces ; Over the last decade, random hyperbolic graphs have proved successful in providing geometric explanations for many key properties of real-world networks, including strong clustering, high navigability, and heterogeneous degree distributions. These properties are ubiquitous in systems as varied as the internet, transportation, brain or epidemic networks, which are thus unified under the hyperbolic network interpretation on a surface of constant negative curvature. Although a few studies have shown that hyperbolic models can generate community structures, another salient feature observed in real networks, we argue that the current models are overlooking the choice of the latent space dimensionality that is required to adequately represent clustered networked data. We show that there is an important qualitative difference between the lowest-dimensional model and its higher-dimensional counterparts with respect to how similarity between nodes restricts connection probabilities. Since more dimensions also increase the number of nearest neighbors for angular clusters representing communities, considering only one more dimension allows us to generate more realistic and diverse community structures.
Homophone Reveals the Truth: A Reality Check for Speech2Vec ; Generating spoken word embeddings that possess semantic information is a fascinating topic. Compared with text-based embeddings, they cover both phonetic and semantic characteristics, which can provide richer information and are potentially helpful for improving ASR and speech translation systems. In this paper, we review and examine the authenticity of a seminal work in this field: Speech2Vec. First, a homophone-based inspection method is proposed to check the speech embeddings released by the author of Speech2Vec. There is no indication that these embeddings are generated by the Speech2Vec model. Moreover, through further analysis of the vocabulary composition, we suspect that a text-based model fabricated these embeddings. Finally, we reproduce the Speech2Vec model, referring to the official code and optimal settings in the original paper. Experiments show that this model failed to learn effective semantic embeddings. In word similarity benchmarks, it obtains a correlation score of 0.08 on the MEN test and 0.15 on the WS353-SIM test, which is over 0.5 lower than those described in the original paper. Our data and code are available.
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions ; We provide theoretical convergence guarantees for score-based generative models (SGMs), such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of large-scale real-world generative models such as DALL·E 2. Our main result is that, assuming accurate score estimates, such SGMs can efficiently sample from essentially any realistic data distribution. In contrast to prior works, our results (1) hold for an L2-accurate score estimate (rather than L∞-accurate); (2) do not require restrictive functional inequality conditions that preclude substantial non-log-concavity; (3) scale polynomially in all relevant problem parameters; and (4) match state-of-the-art complexity guarantees for discretization of the Langevin diffusion, provided that the score error is sufficiently small. We view this as strong theoretical justification for the empirical success of SGMs. We also examine SGMs based on the critically damped Langevin diffusion (CLD). Contrary to conventional wisdom, we provide evidence that the use of the CLD does not reduce the complexity of SGMs.
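The object these guarantees concern — a sampler driven by a score function — can be illustrated with the unadjusted Langevin algorithm and an exactly known score (here, of a one-dimensional Gaussian). This idealizes away the L2-accurate score estimation that the paper's analysis actually accommodates; the step size and chain length below are arbitrary choices.

```python
import numpy as np

def langevin_sample(score, x0, step=0.01, n_steps=2000, rng=None):
    """Unadjusted Langevin algorithm:
    x_{k+1} = x_k + step * score(x_k) + sqrt(2 * step) * N(0, I).
    Diffusion-model samplers discretize (time-reversed, noise-scheduled)
    relatives of this update, with a learned score in place of the exact one."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Target N(mu, 1): the exact score is grad log p(x) = -(x - mu).
mu = 3.0
rng = np.random.default_rng(1)
samples = np.array([langevin_sample(lambda x: -(x - mu), [0.0], rng=rng)[0]
                    for _ in range(200)])
```

With the exact score, the chains settle near the target mean and unit variance; the paper's contribution is bounding what happens when the score is only estimated in L2 and the target is far from log-concave.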
JPEG Artifact Correction using Denoising Diffusion Restoration Models ; Diffusion models can be used as learned priors for solving various inverse problems. However, most existing approaches are restricted to linear inverse problems, limiting their applicability to more general cases. In this paper, we build upon Denoising Diffusion Restoration Models (DDRM) and propose a method for solving some non-linear inverse problems. We leverage the pseudo-inverse operator used in DDRM and generalize this concept for other measurement operators, which allows us to use pre-trained unconditional diffusion models for applications such as JPEG artifact correction. We empirically demonstrate the effectiveness of our approach across various quality factors, attaining performance levels that are on par with state-of-the-art methods trained specifically for the JPEG restoration task.
Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity ; Large Language Models (LLMs) have demonstrated impressive capabilities in generating fluent text, as well as tendencies to reproduce undesirable social biases. This study investigates whether LLMs reproduce the moral biases associated with political groups in the United States, an instance of a broader capability herein termed moral mimicry. This hypothesis is explored in the GPT-3/3.5 and OPT families of Transformer-based LLMs. Using tools from Moral Foundations Theory, it is shown that these LLMs are indeed moral mimics. When prompted with a liberal or conservative political identity, the models generate text reflecting corresponding moral biases. This study also explores the relationship between moral mimicry and model size, and similarity between human and LLM moral word use.
News Summarization and Evaluation in the Era of GPT-3 ; The recent success of prompting large language models like GPT-3 has led to a paradigm shift in NLP research. In this paper, we study its impact on text summarization, focusing on the classic benchmark domain of news summarization. First, we investigate how GPT-3 compares against fine-tuned models trained on large summarization datasets. We show that not only do humans overwhelmingly prefer GPT-3 summaries, prompted using only a task description, but these also do not suffer from common dataset-specific issues such as poor factuality. Next, we study what this means for evaluation, particularly the role of gold-standard test sets. Our experiments show that both reference-based and reference-free automatic metrics cannot reliably evaluate GPT-3 summaries. Finally, we evaluate models on a setting beyond generic summarization, specifically keyword-based summarization, and show how dominant fine-tuning approaches compare to prompting. To support further research, we release (a) a corpus of 10K generated summaries from fine-tuned and prompt-based models across 4 standard summarization benchmarks, and (b) 1K human preference judgments comparing different systems for generic and keyword-based summarization.
Spectral Diffusion Processes ; Score-based generative modelling (SGM) has proven to be a very effective method for modelling densities on finite-dimensional spaces. In this work we propose to extend this methodology to learn generative models over functional spaces. To do so, we represent functional data in spectral space to dissociate the stochastic part of the processes from their space-time part. Using dimensionality reduction techniques, we then sample from their stochastic component using finite-dimensional SGM. We demonstrate our method's effectiveness for modelling various multimodal datasets.
Generalized Kernel Regularized Least Squares ; Kernel Regularized Least Squares (KRLS) is a popular method for flexibly estimating models that may have complex relationships between variables. However, its usefulness to many researchers is limited for two reasons. First, existing approaches are inflexible and do not allow KRLS to be combined with theoretically motivated extensions such as random effects, unregularized fixed effects, or non-Gaussian outcomes. Second, estimation is extremely computationally intensive for even modestly sized datasets. Our paper addresses both concerns by introducing generalized KRLS (gKRLS). We note that KRLS can be re-formulated as a hierarchical model, thereby allowing easy inference and modular model construction, where KRLS can be used alongside random effects, splines, and unregularized fixed effects. Computationally, we also implement random sketching to dramatically accelerate estimation while incurring a limited penalty in estimation quality. We demonstrate that gKRLS can be fit on datasets with tens of thousands of observations in under one minute. Further, state-of-the-art techniques that require fitting the model over a dozen times (e.g., meta-learners) can be estimated quickly.
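Plain KRLS, the baseline being generalized, reduces to one regularized linear solve in the kernel matrix. The RBF kernel, its bandwidth, and the lam * n scaling of the ridge term below are common conventions assumed for illustration; neither the hierarchical reformulation nor the random sketching of gKRLS is shown.

```python
import numpy as np

def krls_fit(X, y, lam=1e-4, gamma=1.0):
    """Kernel Regularized Least Squares with an RBF kernel:
    solve (K + lam * n * I) c = y, then predict f(x) = sum_i c_i k(x, x_i)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    coef = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

    def predict(X_new):
        sq_new = np.sum((X_new[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-gamma * sq_new) @ coef

    return predict
```

The n x n solve above is what dominates at scale; random sketching replaces K with a low-rank surrogate, which is the computational half of the abstract's contribution.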
Mixed-effects location-scale model based on generalized hyperbolic distribution ; Motivated by better modeling of intra-individual variability in longitudinal data, we propose a class of location-scale mixed-effects models, in which the data of each individual are modeled by a parameter-varying generalized hyperbolic distribution. We first study the local maximum-likelihood asymptotics and reveal the instability in the numerical optimization of the log-likelihood. We then construct an asymptotically efficient estimator via a single Newton-Raphson step applied to the original log-likelihood function, with a naive least-squares-type initial estimator. Numerical experiments are conducted to show that the proposed one-step estimator is not only theoretically efficient but also numerically much more stable and much less time-consuming than the maximum-likelihood estimator.
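The one-step device is generic and easy to demonstrate away from the paper's model: below, one Newton-Raphson step on an exponential log-likelihood upgrades a deliberately naive (median-based) initial estimate, which is the same move the paper applies to its generalized hyperbolic mixed model. The exponential example, sample size, and seed are all illustrative.

```python
import numpy as np

def one_step_estimator(x, theta0):
    """One Newton-Raphson step on the exponential log-likelihood
    l(theta) = n * log(theta) - theta * sum(x). Starting from a
    root-n-consistent theta0, a single step is asymptotically as
    efficient as the full MLE, without iterating to convergence."""
    n = len(x)
    score = n / theta0 - x.sum()
    hessian = -n / theta0 ** 2
    return theta0 - score / hessian

rng = np.random.default_rng(1)
x = rng.exponential(scale=0.5, size=5000)  # true rate theta = 2
theta0 = np.log(2.0) / np.median(x)        # naive but consistent start
theta1 = one_step_estimator(x, theta0)     # lands close to the MLE 1/mean(x)
```

The appeal, as in the abstract, is that the single step inherits the efficiency of the MLE while avoiding the unstable full optimization of the log-likelihood.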
EiHi Net: Out-of-Distribution Generalization Paradigm ; This paper develops a new EiHi net to solve the out-of-distribution (OoD) generalization problem in deep learning. EiHi net is a model-learning paradigm that can be applied on top of any visual backbone. This paradigm changes the previous learning method of deep models, namely finding correlations between inductive sample features and corresponding categories, which suffers from pseudo-correlations between indecisive features and labels. We fuse SimCLR and VICReg by explicitly and dynamically establishing the original positive/negative sample pair as a minimal learning element; the deep model iteratively establishes a relationship close to a causal one between features and labels, while suppressing pseudo-correlations. To further validate the proposed model and strengthen the established causal relationships, we develop a human-in-the-loop strategy that uses a few guidance samples to prune the representation space directly. Finally, it is shown that the developed EiHi net makes significant improvements on the most difficult and typical OoD dataset, NICO, compared with the current SOTA results, without any domain information (e.g., background or other irrelevant features).
Traveling wave solutions for non-Newtonian foam flow in porous media ; The injection and in-situ generation of foam in porous media successfully control gas mobility and improve the fluids' sweep efficiency inside porous media. Mathematical models describing this problem use two phases, foamed gas and fluid, and usually have a term for foam generation and destruction. Moreover, the non-Newtonian foam behavior is frequently modeled using Hirasaki and Lawson's formula for foamed gas viscosity. In this paper, we detail how traveling wave analysis can be used to estimate the propagation profiles and velocity for a range of non-Newtonian foam models in porous media at constant total superficial flow velocity. We reformulate Hirasaki and Lawson's formula in an explicit form, allowing us to find traveling wave solutions for the non-Newtonian Linear Kinetic model. Comparing this solution with the one for the Newtonian version allows us to analyze qualitatively and quantitatively the rheology of foam flow in porous media.
An empirical study of weakly supervised audio tagging embeddings for general audio representations ; We study the usability of pretrained weakly supervised audio tagging (AT) models as feature extractors for general audio representations. We mainly analyze the feasibility of transferring those embeddings to other tasks within the speech and sound domains. Specifically, we benchmark weakly supervised pretrained models (MobileNetV2 and EfficientNetB0) against modern self-supervised learning methods (BYOL-A) as feature extractors. Fourteen downstream tasks are used for evaluation, ranging from music instrument classification to language classification. Our results indicate that AT pretrained models are an excellent transfer learning choice for music, event, and emotion recognition tasks. Further, fine-tuning AT models can also benefit speech-related tasks such as keyword spotting and intent classification.
Using Knowledge Distillation to improve interpretable models in a retail banking context ; This article sets forth a review of knowledge distillation techniques with a focus on their applicability to retail banking contexts. Predictive machine learning algorithms used in banking environments, especially in risk and control functions, are generally subject to regulatory and technical constraints limiting their complexity. Knowledge distillation gives the opportunity to improve the performance of simple models without burdening their application, using the results of other, generally more complex and better-performing, models. Parsing recent advances in this field, we highlight three main approaches: Soft Targets, Sample Selection and Data Augmentation. We assess the relevance of a subset of such techniques by applying them to open-source datasets, before putting them to the test on the use cases of BPCE, a major French institution in the retail banking sector. As such, we demonstrate the potential of knowledge distillation to improve the performance of these models without altering their form and simplicity.
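The Soft Targets approach can be sketched with the classic temperature-scaled distillation loss, in which a simple student model is trained against the softened output distribution of a stronger teacher. This is a generic Hinton-style sketch, not the article's banking models; the temperature and mixing weight are hypothetical hyperparameters.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of soft-target KL divergence at temperature T and
    hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits / T)
    p_student = softmax(student_logits / T)
    # KL(teacher || student), scaled by T^2 as is conventional.
    soft = (p_teacher * (np.log(p_teacher + 1e-12)
                         - np.log(p_student + 1e-12))).sum(-1).mean() * T * T
    # Standard cross-entropy on the true labels at temperature 1.
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]
                   + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

The soft term carries the teacher's "dark knowledge" (relative probabilities of wrong classes), which is what lets a simple, regulator-friendly model absorb information from a more complex one.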
Medical Image Understanding with Pretrained Vision Language Models: A Comprehensive Study ; Large-scale pretrained vision language models (VLM) have shown remarkable domain transfer capability on natural images. However, it remains unknown whether this capability can also apply to the medical image domain. This paper thoroughly studies the knowledge transferability of pretrained VLMs to the medical domain, where we show that well-designed medical prompts are the key to eliciting knowledge from pretrained VLMs. We demonstrate that by prompting with expressive attributes that are shared between domains, the VLM can carry the knowledge across domains and improve its generalization. This mechanism empowers VLMs to recognize novel objects with fewer or even without image samples. Furthermore, to avoid the laborious manual designing process, we develop three approaches for the automatic generation of medical prompts, which can inject expert-level medical knowledge and image-specific information into the prompts for fine-grained grounding. We conduct extensive experiments on thirteen medical datasets across various modalities, showing that our well-designed prompts greatly improve zero-shot performance compared to the default prompts, and our fine-tuned models surpass supervised models by a significant margin.
Dynamical consistency conditions for rapid turn inflation ; We derive consistency conditions for sustained slow roll and rapid turn inflation in two-field cosmological models with oriented scalar field space, which imply that inflationary models with field-space trajectories of this type are non-generic. In particular, we show that third order adiabatic slow roll, together with a large and slowly varying turn rate, requires the scalar potential of the model to satisfy a certain nonlinear second order PDE, whose coefficients depend on the scalar field metric. We also derive consistency conditions for slow roll inflationary solutions in the so-called "rapid turn attractor" approximation, as well as study the consistency conditions for circular rapid turn trajectories with slow roll in two-field models with rotationally invariant field space metric. Finally, we argue that the rapid turn regime tends to have a natural exit after a limited number of e-folds.
Neural Causal Models for Counterfactual Identification and Estimation ; Evaluating hypothetical statements about how the world would be had a different course of action been taken is arguably one key capability expected from modern AI systems. Counterfactual reasoning underpins discussions in fairness, the determination of blame and responsibility, credit assignment, and regret. In this paper, we study the evaluation of counterfactual statements through neural models. Specifically, we tackle two causal problems required to make such evaluations, i.e., counterfactual identification and estimation from an arbitrary combination of observational and experimental data. First, we show that neural causal models (NCMs) are expressive enough and encode the structural constraints necessary for performing counterfactual reasoning. Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions. We show that this algorithm is sound and complete for deciding counterfactual identification in general settings. Third, considering the practical implications of these results, we introduce a new strategy for modeling NCMs using generative adversarial networks. Simulations corroborate the proposed methodology.
Application of Deep Q-Learning with Simulation Results for Elevator Optimization ; This paper presents a methodology for combining programming and mathematics to optimize elevator wait times. Based on simulated user data generated according to the canonical three-peak model of elevator traffic, we first develop a naive model from an intuitive understanding of the logic behind elevators. We take into consideration a general array of features, including capacity, acceleration, and maximum wait time thresholds, to adequately model realistic circumstances. Using the same evaluation framework, we proceed to develop a Deep Q-Learning model in an attempt to match the hard-coded naive approach for elevator control. Throughout the majority of the paper, we work under a Markov Decision Process (MDP) schema, but later explore how this assumption fails to characterize the highly stochastic overall Elevator Group Control System (EGCS).
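The Q-learning machinery underlying a Deep Q-Learning controller can be illustrated with its tabular ancestor, which applies the same Bellman update the neural network later approximates. This is a toy stand-in, not the paper's elevator model; the transition format and hyperparameters are hypothetical.

```python
import numpy as np

def q_learning(transitions, n_states, n_actions,
               alpha=0.5, gamma=0.9, epochs=200):
    """Tabular Q-learning over a replay of (s, a, r, s', done) tuples.
    Q[s, a] is nudged toward r + gamma * max_a' Q[s', a']."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s2, done in transitions:
            target = r if done else r + gamma * Q[s2].max()
            Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

A Deep Q Network replaces the table Q with a neural network and the in-place update with a gradient step on the squared difference between Q(s, a) and the same target.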
Combined Dynamic Virtual Spatiotemporal Graph Mapping for Traffic Prediction ; The continuous expansion of urban construction has recently increased the demand for adaptive management of traffic intersection dynamics, making adaptive modelling a hot topic. Existing deep learning methods are powerful at fitting complex heterogeneous graphs. However, they still have drawbacks, which can be roughly classified into two categories: (1) spatiotemporal async-modelling approaches consider temporal and spatial dependencies separately, resulting in weak generalization and large instability while aggregating; (2) spatiotemporal sync-modelling struggles to capture long-term temporal dependencies because of the local receptive field. To overcome the above challenges, a Combined Dynamic Virtual spatiotemporal Graph Mapping (CDVGM) is proposed in this work. The contributions are the following: (1) a dynamic virtual graph Laplacian (DVGL) is designed, which considers both the spatial signal passing and the temporal features simultaneously; (2) the Long-term Temporal Strengthen model (LT2S) improves the stability of time series forecasting. Extensive experiments demonstrate that CDVGM exhibits fast convergence speed and low resource consumption and achieves the current SOTA effect in terms of both accuracy and generalization. The code is available at https://github.com/Dandelionym/CDVGM.
ContraCLM: Contrastive Learning For Causal Language Model ; Despite exciting progress in causal language models, the expressiveness of their representations is largely limited due to poor discrimination ability. To remedy this issue, we present ContraCLM, a novel contrastive learning framework at both the token level and the sequence level. We assess ContraCLM on a variety of downstream tasks. We show that ContraCLM enhances the discrimination of representations and bridges the gap with encoder-only models, which makes causal language models better suited for tasks beyond language generation. Specifically, we attain a 44% relative improvement on Semantic Textual Similarity tasks and a 34% relative improvement on Code-to-Code Search tasks. Furthermore, by improving the expressiveness of the representations, ContraCLM also boosts source code generation capability, with a 9% relative improvement in execution accuracy on the HumanEval benchmark.
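A contrastive objective of the kind described above can be sketched with the standard InfoNCE loss: each anchor representation should score high against its own positive and low against the other samples in the batch. This is a generic sketch, not the exact ContraCLM objective; the temperature value is an illustrative assumption.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss over a batch of (anchor, positive) pairs.
    Rows of `anchors` and `positives` are paired by index; all other
    rows act as in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # diagonal = true pairs
```

Minimizing this loss pulls matching representations together and pushes non-matching ones apart, which is the discrimination property the abstract says causal language models otherwise lack.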
No, they did not: Dialogue response dynamics in pretrained language models ; A critical component of competence in language is being able to identify relevant components of an utterance and reply appropriately. In this paper we examine the extent of such dialogue response sensitivity in pretrained language models, conducting a series of experiments with a particular focus on sensitivity to dynamics involving phenomena of at-issueness and ellipsis. We find that models show clear sensitivity to a distinctive role of embedded clauses, and a general preference for responses that target main clause content of prior utterances. However, the results indicate mixed and generally weak trends with respect to capturing the full range of dynamics involved in targeting at-issue versus not-at-issue content. Additionally, models show fundamental limitations in their grasp of the dynamics governing ellipsis, and response selections show clear interference from superficial factors that outweigh the influence of principled discourse constraints.
The Local to Unity Dynamic Tobit Model ; This paper considers highly persistent time series that are subject to nonlinearities in the form of censoring or an occasionally binding constraint, such as are regularly encountered in macroeconomics. A tractable candidate model for such series is the dynamic Tobit with a root local to unity. We show that this model generates a process that converges weakly to a non-standard limiting process that is constrained (regulated) to be positive, and derive the limiting distributions of the OLS estimators of the model parameters. This allows inferences to be drawn on the overall persistence of the process, as measured by the sum of the autoregressive coefficients, and for the null of a unit root to be tested in the presence of censoring. Our simulations illustrate that the conventional ADF test substantially over-rejects when the data are generated by a dynamic Tobit with a unit root. We provide an application of our methods to testing for a unit root in the Swiss franc/euro exchange rate, during a period when it was subject to an occasionally binding lower bound.
Towards the Multiple Constant Multiplication at Minimal Hardware Cost ; Multiple Constant Multiplication (MCM) over integers is a frequent operation arising in embedded systems that require highly optimized hardware. An efficient approach is to replace costly generic multiplication by bit-shifts and additions, i.e., a multiplierless circuit. In this work, we improve the state-of-the-art optimal approach for MCM, based on Integer Linear Programming (ILP). We introduce a new lower-level hardware cost, based on counting the number of one-bit adders, and demonstrate that it is strongly correlated with the LUT count. This new model for multiplierless MCM circuits permitted us to consider intermediate truncations that significantly save resources when full output precision is not required. We incorporate the error propagation rules into our ILP model to guarantee a user-given error bound on the MCM results. The proposed ILP models for multiple flavors of MCM are implemented as an open-source tool and, combined with the FloPoCo code generator, provide a complete coefficient-to-VHDL flow. We evaluate our models in extensive experiments and propose an in-depth analysis of the impact that design metrics have on actually synthesized hardware.
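The shift-and-add substitution can be illustrated in software: a constant multiplication becomes a small network of shifts and additions, and MCM further shares intermediate terms across constants. The two-constant example below is a toy sketch (real MCM blocks are synthesized in hardware and optimized by the ILP model described above); the chosen constants are hypothetical.

```python
def multiplierless_7x(x):
    # 7*x as (x << 3) - x : one shift and one subtraction, no multiplier.
    return (x << 3) - x

def multiplierless_mcm(x):
    """Multiple constant multiplication sharing an intermediate term:
    computes 7*x and 23*x, reusing 7*x for the second constant."""
    t7 = (x << 3) - x                # 8x - x  = 7x
    t23 = (t7 << 1) + (x << 3) + x   # 14x + 8x + x = 23x
    return t7, t23
```

Counting the additions/subtractions here (one for 7x, two more for 23x) is exactly the kind of adder-level cost the paper's one-bit-adder metric refines.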
Learning Disentangled Representations for Natural Language Definitions ; Disentangling the encodings of neural models is a fundamental aspect for improving interpretability, semantic control and downstream task performance in Natural Language Processing. Currently, most disentanglement methods are unsupervised or rely on synthetic datasets with known generative factors. We argue that recurrent syntactic and semantic regularities in textual data can be used to provide the models with both structural biases and generative factors. We leverage the semantic structures present in a representative and semantically dense category of sentence types, definitional sentences, for training a Variational Autoencoder to learn disentangled representations. Our experimental results show that the proposed model outperforms unsupervised baselines on several qualitative and quantitative benchmarks for disentanglement, and it also improves the results in the downstream task of definition modeling.
Augmentor or Filter: Reconsider the Role of Pretrained Language Model in Text Classification Augmentation ; Text augmentation is one of the most effective techniques for solving the critical problem of insufficient data in text classification. Existing text augmentation methods achieve promising performance in few-shot text data augmentation. However, these methods usually lead to performance degeneration on public datasets due to poor-quality augmentation instances. Our study shows that, even when employing pretrained language models, existing text augmentation methods generate numerous low-quality instances and lead to a feature space shift problem in the augmentation instances. However, we note that a pretrained language model is good at finding low-quality instances provided that it has been fine-tuned on the target dataset. To alleviate the feature space shift and performance degeneration of existing text augmentation methods, we propose BOOSTAUG, which reconsiders the role of the language model in text augmentation and emphasizes augmentation instance filtering rather than generation. We evaluate BOOSTAUG on both sentence-level text classification and aspect-based sentiment classification. The experimental results on seven commonly used text classification datasets show that our augmentation method obtains state-of-the-art performance. Moreover, BOOSTAUG is a flexible framework; we release the code, which can help improve existing augmentation methods.
Prediction of Drug-Induced TdP Risks Using Machine Learning and Rabbit Ventricular Wedge Assay ; Torsades de pointes (TdP) is an irregular heart rhythm that can arise as a side effect of drugs and may cause sudden cardiac death. A machine learning model that can accurately identify drug TdP risk is necessary. This study uses multinomial logistic regression models to predict three-class drug TdP risks based on datasets generated from rabbit ventricular wedge assay experiments. The training/test split and five-fold cross-validation provide unbiased measurements of prediction accuracy. We utilize the bootstrap to construct a 95% confidence interval for prediction accuracy. The model interpretation is further demonstrated by permutation predictor importance. Our study offers an interpretable modeling method suitable for drug TdP risk prediction. Our method can be easily generalized to broader applications of drug side effect assessment.
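The bootstrap confidence interval for accuracy can be sketched generically: resample the evaluation set with replacement, recompute accuracy on each resample, and take percentiles. This is a standard percentile-bootstrap sketch, not the paper's code; the resample count is an illustrative assumption.

```python
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for classification accuracy."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        accs.append((y_true[idx] == y_pred[idx]).mean())
    lo, hi = np.percentile(accs, [(1 - level) / 2 * 100,
                                  (1 + level) / 2 * 100])
    return float(lo), float(hi)
```

The interval quantifies how much the reported accuracy would vary under repeated sampling of evaluation data of the same size.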
Equivariant 3D-Conditional Diffusion Models for Molecular Linker Design ; Fragment-based drug discovery has been an effective paradigm in early-stage drug development. An open challenge in this area is designing linkers between disconnected molecular fragments of interest to obtain chemically relevant candidate drug molecules. In this work, we propose DiffLinker, an E(3)-equivariant 3D-conditional diffusion model for molecular linker design. Given a set of disconnected fragments, our model places missing atoms in between and designs a molecule incorporating all the initial fragments. Unlike previous approaches that are only able to connect pairs of molecular fragments, our method can link an arbitrary number of fragments. Additionally, the model automatically determines the number of atoms in the linker and its attachment points to the input fragments. We demonstrate that DiffLinker outperforms other methods on the standard datasets, generating more diverse and synthetically accessible molecules. Besides, we experimentally test our method in real-world applications, showing that it can successfully generate valid linkers conditioned on target protein pockets.
Data-driven construction of stochastic reduced dynamics encoded with non-Markovian features ; One important problem in constructing the reduced dynamics of molecular systems is the accurate modeling of the non-Markovian behavior arising from the dynamics of unresolved variables. The main complication emerges from the lack of scale separation, where the reduced dynamics generally exhibits pronounced memory and non-white noise terms. We propose a data-driven approach to learn the reduced model of multi-dimensional resolved variables that faithfully retains the non-Markovian dynamics. Different from common approaches based on the direct construction of the memory function, the present approach seeks a set of non-Markovian features that encode the history of the resolved variables, and establishes a joint learning of the extended Markovian dynamics in terms of both the resolved variables and these features. The training is based on matching the evolution of the correlation functions of the extended variables, which can be directly obtained from those of the resolved variables. The constructed model essentially approximates the multi-dimensional generalized Langevin equation and ensures numerical stability without empirical treatment. We demonstrate the effectiveness of the method by constructing reduced models of molecular systems in terms of both one-dimensional and four-dimensional resolved variables.
CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm ; Recently, the community has achieved substantial progress on many commonsense reasoning benchmarks. However, it is still unclear what is learned from the training process: the knowledge, the inference capability, or both? We argue that due to the large scale of commonsense knowledge, it is infeasible to annotate a large enough training set for each task to cover all commonsense needed for learning. Thus we should separate commonsense knowledge acquisition and inference over commonsense knowledge as two separate tasks. In this work, we focus on investigating models' commonsense inference capabilities from two perspectives: (1) whether models can know if the knowledge they have is enough to solve the task; (2) whether models can develop commonsense inference capabilities that generalize across commonsense tasks. We first align commonsense tasks with relevant knowledge from commonsense knowledge bases and ask humans to annotate whether the knowledge is enough or not. Then, we convert different commonsense tasks into a unified question answering format to evaluate models' generalization capabilities. We name the benchmark Commonsense Inference with Knowledge-in-the-loop Question Answering (CIKQA).
An efficient combination strategy for hybrid quantum ensemble classifier ; Quantum machine learning has shown advantages in many ways compared to classical machine learning. In machine learning, a difficult problem is how to learn a model with high robustness and strong generalization ability from a limited feature space. By combining multiple models as base learners, ensemble learning (EL) can effectively improve the accuracy, generalization ability, and robustness of the final model. The key to EL lies in two aspects: the performance of the base learners and the choice of the combination strategy. Recently, quantum EL (QEL) has been studied. However, existing combination strategies in QEL are inadequate in considering the accuracy of and variance among base learners. This paper presents a hybrid EL framework that combines quantum and classical advantages. More importantly, we propose an efficient combination strategy for improving classification accuracy in this framework. We verify the feasibility and efficiency of our framework and strategy using the MNIST dataset. Simulation results show that the hybrid EL framework with our combination strategy not only has higher accuracy and lower variance than a single model without the ensemble, but also has better accuracy than the majority voting and weighted voting strategies in most cases.
Evaluating recent methods to overcome spatial confounding ; The concept of spatial confounding is closely connected to spatial regression, although no general definition has been established. A generally accepted idea of spatial confounding in spatial regression models is the change in fixed effects estimates that may occur when spatially correlated random effects, collinear with the covariate, are included in the model. Different methods have been proposed to alleviate spatial confounding in spatial linear regression models, but it is not clear whether they provide correct fixed effects estimates. In this article, we consider some of those proposals to alleviate spatial confounding, such as restricted regression, the spatial+ model, and transformed Gaussian Markov random fields. The objective is to determine which one provides the best estimates of the fixed effects. Dowry death data in Uttar Pradesh in 2001, stomach cancer incidence data in Slovenia in the period 1995-2001, and lip cancer incidence data in Scotland between the years 1975-1980 are analyzed. Several simulation studies are conducted to evaluate the performance of the methods in different scenarios of spatial confounding. Results reflect that the spatial+ method seems to provide fixed effects estimates closest to the true value.
On free fall of fermions and antifermions ; We propose a model describing spin-half quantum particles in curved spacetime in the framework of quantum field theory. Our model is based on embodying Einstein's equivalence principle and general covariance in the definition of quantum-particle states. With this model at hand, we compute several observables which characterise spin-half quantum particles in a gravitational field. In particular, we find that spin precesses in a normal Fermi frame, even in the absence of torsion. The effect appears to be complementary to the free-fall non-universality we have recently reported for spinless quantum particles. Furthermore, we find that quantum-particle gravitational-potential energy is insensitive to the wave-packet spreading in the Earth's gravitational field that is responsible for the non-universality of free fall in quantum theory. This theoretical result provides another channel for the experimental study of our quantum-particle model by using gravitational spectrometers. Finally, we also find that elementary fermions and antifermions are indistinguishable in gravity.
See Blue Sky: Deep Image Dehaze Using Paired and Unpaired Training Images ; The issue of image haze removal has attracted wide attention in recent years. However, most existing haze removal methods cannot restore scenes with a clear blue sky, since the color and texture information of objects in the original hazy image is insufficient. To remedy this, we propose a cycle generative adversarial network to construct a novel end-to-end image dehazing model. We adopt outdoor image datasets to train our model, which include a set of real-world unpaired images and a set of paired images, to ensure that the generated images are close to real scenes. Based on the cycle structure, our model adds four different kinds of loss functions to constrain the effect: adversarial loss, cycle consistency loss, photorealism loss, and paired L1 loss. These four constraints can improve the overall quality of such degraded images for better visual appeal and ensure that reconstructed images are free of distortion. The proposed model can remove the haze from images and also restore the sky to a clean blue, as if captured in sunny weather.
Pretrained Transformers Do not Always Improve Robustness ; Pretrained Transformers (PT) have been shown to provide better Out-of-Distribution (OOD) robustness than traditional models such as Bag of Words (BOW), LSTMs, and Convolutional Neural Networks (CNN) powered by Word2Vec and GloVe embeddings. How does the robustness comparison hold in a real-world setting where some part of the dataset can be noisy? Do PT also provide more robust representations than traditional models on exposure to noisy data? We perform a comparative study on 10 models and find empirical evidence that PT provide less robust representations than traditional models on exposure to noisy data. We investigate further and augment PT with an adversarial filtering (AF) mechanism that has been shown to improve OOD generalization. However, an increase in generalization does not necessarily increase robustness, as we find that noisy data fools the AF method powered by PT.
Learning Generalizable Models for Vehicle Routing Problems via Knowledge Distillation ; Recent neural methods for vehicle routing problems always train and test the deep models on the same instance distribution (i.e., uniform). To tackle the consequent cross-distribution generalization concerns, we bring knowledge distillation to this field and propose an Adaptive Multi-Distribution Knowledge Distillation (AMDKD) scheme for learning more generalizable deep models. Particularly, our AMDKD leverages various knowledge from multiple teachers trained on exemplar distributions to yield a lightweight yet generalist student model. Meanwhile, we equip AMDKD with an adaptive strategy that allows the student to concentrate on difficult distributions, so as to absorb hard-to-master knowledge more effectively. Extensive experimental results show that, compared with baseline neural methods, our AMDKD is able to achieve competitive results on both unseen in-distribution and out-of-distribution instances, which are either randomly synthesized or adopted from benchmark datasets (i.e., TSPLIB and CVRPLIB). Notably, our AMDKD is generic and consumes less computational resources for inference.
Chiral spectrum of the universal tuned (SU(3) x SU(2) x U(1))/Z_6 4D F-theory model ; We use the recently developed methods of arXiv:2108.07810 to analyze vertical flux backgrounds and associated chiral matter spectra in the 4D universal (SU(3) x SU(2) x U(1))/Z_6 model introduced in arXiv:1912.10991, which is believed to describe the most general generic family of F-theory vacua with tuned (SU(3) x SU(2) x U(1))/Z_6 gauge symmetry. Our analysis focuses on a resolution of a particular presentation of the (SU(3) x SU(2) x U(1))/Z_6 model in which the elliptic fiber is realized as a cubic in P^2 fibered over an arbitrary smooth threefold base. We show that vertical fluxes can produce nonzero multiplicities for all chiral matter families that satisfy 4D anomaly cancellation, which include as a special case the chiral matter families of the Minimal Supersymmetric Standard Model.
Generalized Many-Body Dispersion Correction through Random-Phase Approximation for Chemically Accurate Density Functional Theory ; We extend our recently proposed Deep Learning-aided many-body dispersion (DNN-MBD) model to quadrupole polarizability (Q) terms using a generalized Random Phase Approximation (RPA) formalism, thus enabling the inclusion of van der Waals contributions beyond dipole. The resulting DNN-MBD-Q model only relies on ab initio-derived quantities, as the introduced quadrupole polarizabilities are recursively retrieved from dipole ones, in turn modelled via the Tkatchenko-Scheffler method. A transferable and efficient deep neural network (DNN) provides atom-in-molecule volumes, while a single range-separation parameter is used to couple the model to Density Functional Theory (DFT). Since it can be computed at a negligible cost, the DNN-MBD-Q approach can be coupled with DFT functionals such as PBE, PBE0 and B86bPBE (dispersionless). The DNN-MBD-Q-corrected functionals reach chemical accuracy while exhibiting lower errors compared to their dipole-only counterparts.
Representation Learning with Diffusion Models ; Diffusion models (DMs) have achieved state-of-the-art results for image synthesis tasks as well as density estimation. Applied in the latent space of a powerful pretrained autoencoder (LDM), their immense computational requirements can be significantly reduced without sacrificing sampling quality. However, DMs and LDMs lack a semantically meaningful representation space, as the diffusion process gradually destroys information in the latent variables. We introduce a framework for learning such representations with diffusion models (LRDM). To that end, an LDM is conditioned on the representation extracted from the clean image by a separate encoder. In particular, the DM and the representation encoder are trained jointly in order to learn rich representations specific to the generative denoising process. By introducing a tractable representation prior, we can efficiently sample from the representation distribution for unconditional image synthesis without training any additional model. We demonstrate that (i) competitive image generation results can be achieved with image-parameterized LDMs, and (ii) LRDMs are capable of learning semantically meaningful representations, allowing for faithful image reconstructions and semantic interpolations. Our implementation is available at https://github.com/jeremiastraub/diffusion.
DataDriven Quickest Change Detection in Markov Models ; The problem of quickest change detection in Markov models is studied. A sequence of samples is generated from a Markov model, and at some unknown time, the transition kernel of the Markov model changes. The goal is to detect the change as soon as possible, subject to false alarm constraints. The datadriven setting is investigated, where neither the pre nor the postchange Markov transition kernel is known. A kernelbased datadriven algorithm is developed, which applies to general state spaces and is recursive and computationally efficient. Performance bounds on the average running length and worstcase average detection delay are derived. Numerical results are provided to validate the performance of the proposed algorithm.
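The recursive detection scheme described in this abstract can be illustrated with a minimal CUSUM-style sketch; the score function and threshold below are hypothetical stand-ins for the paper's kernel-based statistic:

```python
def cusum_detect(samples, score, threshold):
    """Return the first index at which the running CUSUM statistic crosses
    `threshold`, or None if no change is declared."""
    stat = 0.0
    for t, x in enumerate(samples):
        stat = max(0.0, stat + score(x))  # clamp at zero, accumulate evidence
        if stat >= threshold:
            return t
    return None

# Toy usage: pre-change samples near 0, post-change samples near 1; a score
# of x - 0.5 is negative before the change and positive after it.
pre = [0.1, -0.2, 0.0, 0.1]
post = [0.9, 1.1, 1.0]
alarm = cusum_detect(pre + post, score=lambda x: x - 0.5, threshold=1.0)
```

Raising the threshold lowers the false alarm rate at the cost of a longer detection delay, which is exactly the trade-off the paper's performance bounds quantify.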
Learning Feasibility of Factored Nonlinear Programs in Robotic Manipulation Planning ; A factored Nonlinear Program FactoredNLP explicitly models the dependencies between a set of continuous variables and nonlinear constraints, providing an expressive formulation for relevant robotics problems such as manipulation planning or simultaneous localization and mapping. When the problem is overconstrained or infeasible, a fundamental issue is to detect a minimal subset of variables and constraints that are infeasible. Previous approaches require solving several nonlinear programs, incrementally adding and removing constraints, and are thus computationally expensive. In this paper, we propose a graph neural architecture that predicts which variables and constraints are jointly infeasible. The model is trained with a dataset of labeled subgraphs of FactoredNLPs, and importantly, can make useful predictions on larger factored nonlinear programs than the ones seen during training. We evaluate our approach in robotic manipulation planning, where our model is able to generalize to longer manipulation sequences involving more objects and robots, and different geometric environments. The experiments show that the learned model accelerates general algorithms for conflict extraction by a factor of 50 and heuristic algorithms that exploit expert knowledge by a factor of 4.
Adiabaticimpulse approximation in nonHermitian LandauZener Model ; We investigate the transition from PTsymmetry to PTsymmetry breaking and vice versa in the nonHermitian LandauZener LZ models. The energy is generally complex, so the relaxation rate of the system is set by the absolute value of the gap. To illustrate the dynamics of phase transitions, the relative population is introduced to calculate the defect density in nonequilibrium phase transitions instead of the excitations in the Hermitian systems. The result shows that the adiabaticimpulse AI approximation, which is the key concept of the KibbleZurek KZ mechanism in the Hermitian systems, can be generalized to the PTsymmetric nonHermitian LZ models to study the dynamics in the vicinity of a critical point. Therefore, the KZ mechanism in the simplest nonHermitian twolevel models is presented. Finally, an exact solution to the nonHermitian LZlike problem is also shown.
RuCoLA Russian Corpus of Linguistic Acceptability ; Linguistic acceptability LA attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers. However, the application scope of LA in languages other than English is limited due to the lack of highquality resources. To this end, we introduce the Russian Corpus of Linguistic Acceptability RuCoLA, built from the ground up under the wellestablished binary LA approach. RuCoLA consists of 9.8k indomain sentences from linguistic publications and 3.6k outofdomain sentences produced by generative models. The outofdomain set is created to facilitate the practical use of acceptability for improving language generation. Our paper describes the data collection protocol and presents a finegrained analysis of acceptability classification experiments with a range of baseline approaches. In particular, we demonstrate that the most widely used language models still fall behind humans by a large margin, especially when detecting morphological and semantic errors. We release RuCoLA, the code of experiments, and a public leaderboard rucolabenchmark.com to assess the linguistic competence of language models for Russian.
EventCentric Question Answering via Contrastive Learning and Invertible Event Transformation ; Human reading comprehension often requires reasoning about event semantic relations in narratives, represented by Eventcentric QuestionAnswering QA. To address eventcentric QA, we propose a novel QA model with contrastive learning and invertible event transformation, called TranCLR. Our proposed model utilizes an invertible transformation matrix to project semantic vectors of events into a common event embedding space, trained with contrastive learning, and thus naturally injects event semantic knowledge into mainstream QA pipelines. The transformation matrix is finetuned with the annotated event relation types between events that occurred in questions and those in answers, using eventaware question vectors. Experimental results on the Event Semantic Relation Reasoning ESTER dataset show significant improvements in both generative and extractive settings compared to the existing strong baselines, achieving over 8.4% gain in the tokenlevel F1 score and 3.0% gain in the Exact Match EM score under the multianswer setting. Qualitative analysis reveals the high quality of the answers generated by TranCLR, demonstrating the feasibility of injecting event knowledge into QA model learning. Our code and models can be found at httpsgithub.comLuJunruTranCLR.
Special Functions for Hyperoctahedral Groups Using Bosonic, Trigonometric SixVertex Models ; Recent works have sought to realize certain families of orthogonal, symmetric polynomials as partition functions of wellchosen classes of solvable lattice models. Many of these use Boltzmann weights arising from the trigonometric sixvertex model Rmatrix or generalizations or specializations of these weights. In this paper, we seek new variants of bosonic models on lattices designed for type BC root systems, whose partition functions match the zonal spherical function in type C. Under general assumptions, we find that this is possible for all highest weights in rank 2 and 3, but not for higher rank.
MEWUNet Multiaxis representation learning in frequency domain for medical image segmentation ; Recently, Visual Transformer ViT has been widely used in various fields of computer vision due to applying selfattention mechanism in the spatial domain to modeling global knowledge. Especially in medical image segmentation MIS, many works are devoted to combining ViT and CNN, and even some works directly utilize pure ViTbased models. However, recent works improved models in the aspect of spatial domain while ignoring the importance of frequency domain information. Therefore, we propose Multiaxis External Weights UNet MEWUNet for MIS based on the Ushape architecture by replacing selfattention in ViT with our Multiaxis External Weights block. Specifically, our block performs a Fourier transform on the three axes of the input feature and assigns the external weight in the frequency domain, which is generated by our Weights Generator. Then, an inverse Fourier transform is performed to change the features back to the spatial domain. We evaluate our model on four datasets and achieve stateoftheart performances. In particular, on the Synapse dataset, our method outperforms MTUNet by 10.15mm in terms of HD95. Code is available at httpsgithub.comJCruan519MEWUNet.
WaveBound Dynamic Error Bounds for Stable Time Series Forecasting ; Time series forecasting has become a critical task due to its high practicality in realworld applications such as traffic, energy consumption, economics and finance, and disease analysis. Recent deeplearningbased approaches have shown remarkable success in time series forecasting. Nonetheless, due to the dynamics of time series data, deep networks still suffer from unstable training and overfitting. Inconsistent patterns appearing in realworld data lead the model to be biased to a particular pattern, thus limiting the generalization. In this work, we introduce dynamic error bounds on training loss to address the overfitting issue in time series forecasting. Consequently, we propose a regularization method called WaveBound which estimates the adequate error bounds of training loss for each time step and feature at each iteration. By allowing the model to focus less on unpredictable data, WaveBound stabilizes the training process, thus significantly improving generalization. With extensive experiments, we show that WaveBound consistently improves upon existing models by large margins, including the stateoftheart model.
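The error-bound idea can be sketched with a flooding-style regularized loss. This is a simplification: the bound here is a fixed hypothetical constant, whereas WaveBound estimates it dynamically per time step and feature at each iteration:

```python
def bounded_loss(loss, bound, eps=0.01):
    """Flooding-style regularized loss: once the raw loss falls below the
    (slightly relaxed) bound, the gradient flips sign, discouraging the
    model from chasing irreducible, unpredictable error."""
    b = bound - eps
    return abs(loss - b) + b

# With a bound of 0.2, a loss of 0.05 is pushed back up rather than down.
high = bounded_loss(0.50, 0.2)  # above the bound: behaves like the raw loss
low = bounded_loss(0.05, 0.2)   # below the bound: penalized upward
```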
SlowTwisting inflationary attractors ; We explore the dynamics of multifield inflationary models. We first revisit the twofield case and rederive the coordinate independent expression for the attractor solution with either small or large turn rate, emphasizing the role of isometries for the existence of rapidturn solutions. Then, for three fields in the slowroll, slowtwist and extreme turning regime we provide elegant expressions for the attractor solution for generic fieldspace geometries and potentials and study the behaviour of first order perturbations. For generic mathcalNfield models, our method quickly grows in algebraic complexity. We observe that fieldspace isometries are common in the literature and we are able to obtain the attractor solutions and deduce stability for some isometry classes of mathcalNfield models. Finally, we apply our discussion to concrete supergravity models. These analyses conclusively demonstrate the existence of mathcalN2 dynamical attractors distinct from the twofield case, and provide tools useful for future studies of their phenomenology in the cosmic microwave background and stochastic gravitational wave spectrum.
Solving Audio Inverse Problems with a Diffusion Model ; This paper presents CQTDiff, a datadriven generative audio model that can, once trained, be used for solving various different audio inverse problems in a problemagnostic setting. CQTDiff is a neural diffusion model with an architecture that is carefully constructed to exploit pitchequivariant symmetries in music. This is achieved by preconditioning the model with an invertible ConstantQ Transform CQT, whose logarithmicallyspaced frequency axis represents pitch equivariance as translation equivariance. The proposed method is evaluated with objective and subjective metrics in three different and varied tasks audio bandwidth extension, inpainting, and declipping. The results show that CQTDiff outperforms the compared baselines and ablations in audio bandwidth extension and, without retraining, delivers competitive performance against modern baselines in audio inpainting and declipping. This work represents the first diffusionbased general framework for solving inverse problems in audio processing.
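The pitch-equivariance argument rests on the CQT's logarithmically spaced frequency axis, on which a pitch shift by k bins becomes a pure translation. A minimal sketch (the f_min and bins_per_octave values in the usage are illustrative, not the paper's settings):

```python
def cqt_center_frequencies(f_min, bins_per_octave, n_bins):
    """Logarithmically spaced CQT center frequencies: multiplying a
    frequency by 2 (one octave up) moves it exactly bins_per_octave
    bins along the axis, i.e. pitch shift = translation."""
    return [f_min * 2 ** (k / bins_per_octave) for k in range(n_bins)]

# One octave above bin 0 lands exactly at bin 12 for 12 bins per octave.
freqs = cqt_center_frequencies(32.7, 12, 25)
```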
JustDREAMaboutit Figurative Language Understanding with DREAMFLUTE ; Figurative language e.g., he flew like the wind is challenging to understand, as it is hard to tell what implicit information is being conveyed from the surface form alone. We hypothesize that to perform this task well, the reader needs to mentally elaborate the scene being described to identify a sensible meaning of the language. We present DREAMFLUTE, a figurative language understanding system that does this, first forming a mental model of situations described in a premise and hypothesis before making an entailmentcontradiction decision and generating an explanation. DREAMFLUTE uses an existing scene elaboration model, DREAM, for constructing its mental model. In the FigLang2022 Shared Task evaluation, DREAMFLUTE achieved joint first place Acc@60 = 63.3%, and can perform even better with ensemble techniques, demonstrating the effectiveness of this approach. More generally, this work suggests that adding a reflective component to pretrained language models can improve their performance beyond standard finetuning a 3.3% improvement in Acc@60.
PassageMask A Learnable Regularization Strategy for RetrieverReader Models ; Retrieverreader models achieve competitive performance across many different NLP tasks such as open question answering and dialogue conversations. In this work, we notice that these models easily overfit the toprank retrieval passages and that standard training fails to reason over the entire set of retrieval passages. We introduce a learnable passage mask mechanism which desensitizes the impact of the toprank retrieval passages and prevents the model from overfitting. Controlling the gradient variance with fewer mask candidates and selecting the mask candidates with oneshot bilevel optimization, our learnable regularization strategy enforces the answer generation to focus on the entire retrieval passages. Experiments on different tasks across open question answering, dialogue conversation, and fact verification show that our method consistently outperforms its baselines. Extensive experiments and ablation studies demonstrate that our method is general, effective, and beneficial for many NLP tasks.
HumanintheLoop Mixup ; Aligning model representations to humans has been found to improve robustness and generalization. However, such methods often focus on standard observational data. Synthetic data is proliferating and powering many advances in machine learning; yet, it is not always clear whether synthetic labels are perceptually aligned to humans, making it likely that model representations are not human aligned. We focus on the synthetic data used in mixup a powerful regularizer shown to improve model robustness, generalization, and calibration. We design a comprehensive series of elicitation interfaces, which we release as HILL MixE Suite, and recruit 159 participants to provide perceptual judgments along with their uncertainties, over mixup examples. We find that human perceptions do not consistently align with the labels traditionally used for synthetic points, and begin to demonstrate the applicability of these findings to potentially increase the reliability of downstream models, particularly when incorporating human uncertainty. We release all elicited judgments in a new data hub we call HMix.
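For reference, the mixup operation whose synthetic labels the study interrogates can be sketched in a few lines (a minimal version; practical implementations mix whole batches of tensors):

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=random):
    """Vanilla mixup: convexly combine two examples and their one-hot
    labels with a mixing weight lam drawn from Beta(alpha, alpha)."""
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Mixing two one-hot labels yields the soft label [lam, 1 - lam] -- the
# synthetic label whose perceptual alignment the paper questions.
x, y, lam = mixup_pair([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1],
                       rng=random.Random(0))
```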
Stochastic resetting in a networked multiparticle system with correlated transitions ; The state of many physical, biological and sociotechnical systems evolves by combining smooth local transitions and abrupt resetting events to a set of reference values. The inclusion of the resetting mechanism not only provides the possibility of modeling a wide variety of realistic systems but also leads to interesting novel phenomenology not present in resetfree cases. However, most models where stochastic resetting is studied address the case of a finite number of uncorrelated variables, commonly a single one, such as the position of noninteracting random walkers. Here we overcome this limitation by framing the process of network growth with node deletion as a stochastic resetting problem where an arbitrarily large number of degrees of freedom are coupled and influence each other, both in the resetting and nonresetting growth events. We find the exact, fulltime solution of the model, and several outofequilibrium properties are characterized as a function of the growth and resetting rates, such as the emergence of a timedependent percolationlike phase transition, and firstpassage statistics. Coupled multiparticle systems subjected to resetting are a necessary generalization in the theory of stochastic resetting, and the model presented herein serves as an illustrative, natural and solvable example of such a generalization.
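The single-particle baseline that the paper generalizes can be simulated directly; a toy sketch of a 1-D random walk with Poissonian resetting to the origin (the rate and step counts below are arbitrary illustration values):

```python
import random

def reset_walk(steps, reset_rate, rng):
    """1-D random walk with stochastic resetting: at each step, with
    probability `reset_rate` the walker jumps back to the origin,
    otherwise it takes a unit diffusive step."""
    x, traj = 0, []
    for _ in range(steps):
        if rng.random() < reset_rate:
            x = 0                       # abrupt resetting event
        else:
            x += rng.choice((-1, 1))    # smooth local transition
        traj.append(x)
    return traj

traj = reset_walk(1000, 0.1, random.Random(42))
```

Unlike a free walk, the resetting keeps the position distribution localized around the origin at long times, the kind of reset-induced stationarity that the networked, correlated model in the paper extends to many coupled degrees of freedom.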
Physics Informed Machine Learning for Chemistry Tabulation ; Modeling of turbulent combustion systems requires modeling the underlying chemistry and the turbulent flow. Solving both systems simultaneously is computationally prohibitive. Instead, given the difference in scales at which the two subsystems evolve, the two subsystems are typically resolved separately. Popular approaches such as the Flamelet Generated Manifolds FGM use a twostep strategy where the governing reaction kinetics are precomputed and mapped to a lowdimensional manifold, characterized by a few reaction progress variables model reduction, and the manifold is then looked up during runtime by the flow system to estimate the highdimensional system state. While existing works have focused on these two steps independently, in this work we show that joint learning of the progress variables and the lookup model can yield more accurate results. We build on the base formulation and implementation ChemTab to include the dynamically generated Thermochemical State Variables Lower Dimensional Dynamic Source Terms. We discuss the challenges in the implementation of this deep neural network architecture and experimentally demonstrate its superior performance.
Suffix RetrievalAugmented Language Modeling ; Causal language modeling LM uses word history to predict the next word. BERT, on the other hand, makes use of bidirectional word information in a sentence to predict words at masked positions. While BERT is effective in sequence encoding, it is noncausal by nature and is not designed for sequence generation. In this paper, we propose a novel language model, SUffix REtrievalAugmented LM SUREALM, that simulates a bidirectional contextual effect in an autoregressive manner. SUREALM employs an embedding retriever to search for training sentences in a data store that share similar word history during sequence generation. In particular, the suffix portions of the retrieved sentences mimic the future context. We evaluated our proposed model on the DSTC9 spoken dialogue corpus and showed promising word perplexity reduction on the validation and test set compared to competitive baselines.
Computational anatomy atlas using multilayer perceptron with Lipschitz regularization ; A computational anatomy atlas is a set of internal organ geometries. It is based on data of real patients and complemented with virtual cases generated by some numerical approach. Atlases are in demand in computational physiology, especially in cardiological and neurophysiological applications. Usually, atlas generation uses explicit object representation, such as voxel models or surface meshes. In this paper, we propose a method of atlas generation using an implicit representation of 3D objects. Our approach has two key stages. The first stage converts voxel models of segmented organs to implicit form using the usual multilayer perceptron. This stage smooths the model and reduces memory consumption. The second stage uses a multilayer perceptron with Lipschitz regularization. This neural network provides a smooth transition between implicitly defined 3D geometries. Our work shows examples of models of the left and right human ventricles. All code and data for this work are open.
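The role of the Lipschitz constant here is to cap how fast the implicit surface can change as the network interpolates between geometries. A crude upper bound for a ReLU MLP is the product of per-layer operator norms; the sketch below uses the infinity norm (max absolute row sum), which is a loose stand-in for the learnable per-layer bound that Lipschitz-regularized training typically penalizes:

```python
def lipschitz_bound(weight_matrices):
    """Crude upper bound on a ReLU MLP's Lipschitz constant: the product
    of per-layer infinity norms (ReLU itself is 1-Lipschitz, so only the
    linear layers contribute)."""
    bound = 1.0
    for W in weight_matrices:
        bound *= max(sum(abs(v) for v in row) for row in W)
    return bound
```

Penalizing such a bound during training encourages the smooth transitions between implicitly defined shapes that the atlas construction relies on.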
Plant model generation from event log using ProM for formal verification of CPS ; This paper introduces the concept of plant model generation from the recorded traces of events using the process mining technique. The event logs are obtained by visually simulating a simple distributed manufacturing system using the OPC UA communication protocol. The process discovery alpha algorithm is used to extract the process model in Petri net format. The system behavior represented in terms of Petri net is then fed to construct the ECC of the basic function block in compliance with IEC 61499 standard using the proposed notation. Finally, the formal verification of the closedloop system is done with the help of a tool chain that consists of fb2smv converter, symbolic model checker NuSMV, and other tools for representing counterexamples.
Robosourcing Educational Resources Leveraging Large Language Models for Learnersourcing ; In this article, we introduce and evaluate the concept of robosourcing for creating educational content. Robosourcing lies in the intersection of crowdsourcing and large language models, where instead of a crowd of humans, requests to large language models replace some of the work traditionally performed by the crowd. Robosourcing includes a humanintheloop to provide priming input as well as to evaluate and potentially adjust the generated artefacts; these evaluations could also be used to improve the large language models. We propose a system to outline the robosourcing process. We further study the feasibility of robosourcing in the context of education by conducting an evaluation of robosourced programming exercises, generated using OpenAI Codex. Our results suggest that robosourcing could significantly reduce human effort in creating diverse educational content while maintaining quality similar to humancreated content.
Causal Modeling of Soil Processes for Improved Generalization ; Measuring and monitoring soil organic carbon is critical for agricultural productivity and for addressing critical environmental problems. Soil organic carbon not only enriches nutrition in soil, but also has a gamut of cobenefits such as improving water storage and limiting physical erosion. Despite a litany of work in soil organic carbon estimation, current approaches do not generalize well across soil conditions and management practices. We empirically show that explicit modeling of causeandeffect relationships among the soil processes improves the outofdistribution generalizability of prediction models. We provide a comparative analysis of soil organic carbon estimation models where the skeleton is estimated using causal discovery methods. Our framework provides an average improvement of 81% in test mean squared error and 52% in test mean absolute error.
The CRINGE Loss Learning what language not to model ; Standard language model training employs gold human documents or humanhuman interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the CRINGE loss ContRastive Iterative Negative GEneration. We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and opendomain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train and implement.
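The contrastive idea behind the CRINGE loss can be sketched for a single negative token. This simplified pairwise term pushes a negative token's logit below that of a positive token; in the actual method the positive is sampled from the model's own top-ranked predictions, which is omitted here:

```python
import math

def cringe_token_loss(neg_logit, pos_logit):
    """Pairwise contrastive term for one negative-example token: softmax
    over the two competing logits, minimizing the probability assigned to
    the negative token relative to the positive one."""
    return -math.log(math.exp(pos_logit) /
                     (math.exp(pos_logit) + math.exp(neg_logit)))
```

The loss shrinks as the positive token pulls ahead of the negative one, so gradient descent drives probability mass away from tokens the model should not generate.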
Understanding ME Multimodal Evaluation for Finegrained Visual Commonsense ; Visual commonsense understanding requires Vision Language VL models to not only understand image and text but also crossreference inbetween to fully integrate and achieve comprehension of the visual scene described. Recently, various approaches have been developed and have achieved high performance on visual commonsense benchmarks. However, it is unclear whether the models really understand the visual scene and underlying commonsense knowledge due to limited evaluation data resources. To provide an indepth analysis, we present a Multimodal Evaluation ME pipeline to automatically generate questionanswer pairs to test models' understanding of the visual scene, text, and related knowledge. We then take a step further to show that training with the ME data boosts the model's performance in standard VCR evaluation. Lastly, our indepth analysis and comparison reveal interesting findings 1 semantically lowlevel information can assist the learning of highlevel information but not the opposite; 2 visual information is generally underutilized compared with text.
EnergyBased Residual Latent Transport for Unsupervised Point Cloud Completion ; Unsupervised point cloud completion aims to infer the whole geometry of a partial object observation without requiring partialcomplete correspondence. Differing from existing deterministic approaches, we advocate generative modeling based unsupervised point cloud completion to explore the missing correspondence. Specifically, we propose a novel framework that performs completion by transforming a partial shape encoding into a complete one using a latent transport module, and it is designed as a latentspace energybased model EBM in an encoderdecoder architecture, aiming to learn a probability distribution conditioned on the partial shape encoding. To train the latent code transport module and the encoderdecoder network jointly, we introduce a residual sampling strategy, where the residual captures the domain gap between partial and complete shape latent spaces. As a generative modelbased framework, our method can produce uncertainty maps consistent with human perception, leading to explainable unsupervised point cloud completion. We experimentally show that the proposed method produces highfidelity completion results, outperforming stateoftheart models by a significant margin.
Bulk viscous fluid in extended symmetric teleparallel gravity ; In this paper, we investigate the existence of bulk viscous FLRW cosmological models in a recently proposed extended symmetric teleparallel gravity, or f(Q,T) gravity, in which Q is the nonmetricity and T is the trace of the energymomentum tensor. We consider a simple coupling between matter and nonmetricity, specifically, f(Q,T) = alpha Q^(m+1) + lambda T and f(Q,T) = alpha Q + lambda T, where alpha, lambda and m are free model parameters. The exact cosmological solutions are found by assuming the scale factor in the form of the hybrid expansion law. This type of relation generates a timevarying deceleration parameter with the transition of the Universe from the early decelerating phase to the present accelerating phase. In the presence of viscous fluid, we analyze some cosmological parameters of our cosmological model such as the energy density, bulk viscous pressure, bulk viscous coefficient, equation of state parameter, and energy conditions. Finally, we conclude that our f(Q,T) cosmological models agree with the recent astronomical observations.
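The hybrid expansion law can be made concrete; a minimal sketch assuming the common form a(t) = a0 * t**alpha * exp(beta*t), with illustrative (not fitted) parameter values. For this law the Hubble rate is H = alpha/t + beta, and the deceleration parameter works out to q = alpha/(alpha + beta*t)**2 - 1, which is positive at early times and negative at late times, reproducing the decelerating-to-accelerating transition described above:

```python
import math

def hybrid_scale_factor(t, a0=1.0, alpha=0.5, beta=0.1):
    """Hybrid expansion law a(t) = a0 * t**alpha * exp(beta*t): power-law
    (decelerating) at early times, exponential (accelerating) later."""
    return a0 * t ** alpha * math.exp(beta * t)

def deceleration(t, alpha=0.5, beta=0.1):
    """Deceleration parameter q = -a*a''/a'**2 for the hybrid law,
    which simplifies to alpha/(alpha + beta*t)**2 - 1."""
    return alpha / (alpha + beta * t) ** 2 - 1.0
```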
Universal Distributional Decisionbased Blackbox Adversarial Attack with Reinforcement Learning ; The vulnerability of the highperformance machine learning models implies a security risk in applications with realworld consequences. Research on adversarial attacks is beneficial in guiding the development of machine learning models on the one hand and finding targeted defenses on the other. However, most of the adversarial attacks today leverage the gradient or logit information from the models to generate adversarial perturbation. Works in the more realistic domain decisionbased attacks, which generate adversarial perturbation solely based on observing the output label of the targeted model, are still relatively rare and mostly use gradientestimation strategies. In this work, we propose a pixelwise decisionbased attack algorithm that finds a distribution of adversarial perturbation through a reinforcement learning algorithm. We call this method Decisionbased Blackbox Attack with Reinforcement learning DBAR. Experiments show that the proposed approach outperforms stateoftheart decisionbased attacks with a higher attack success rate and greater transferability.
A Gibbsian random tree with nearest neighbour interaction ; We revisit the random tree model with nearestneighbour interaction enhancing growth, as described in previous work. When the underlying free Bienaym'eGaltonWatson BGW model is subcritical, we show that the nonMarkov model with interaction exhibits a phase transition between sub and supercritical regimes. In the critical regime, using tools from dynamical systems, we show that the partition function of the model approaches a limit at rate n1 in the generation number n. In the critical regime with almost sure extinction, we also prove that the mean number of external nodes in the tree at generation n decays like n2. Finally, we give a spin representation of the random tree, opening the way to tools from the theory of Gibbs states, including FKG inequalities. We extend the construction in previous work when the law of the branching mechanism of the free BGW process has unbounded support.
NorMatch Matching Normalizing Flows with Discriminative Classifiers for SemiSupervised Learning ; SemiSupervised Learning SSL aims to learn a model using a tiny labeled set and massive amounts of unlabeled data. To better exploit the unlabeled data, the latest SSL methods use pseudolabels predicted from a single discriminative classifier. However, the generated pseudolabels are inevitably linked to inherent confirmation bias and noise which greatly affect the model performance. In this work we introduce a new framework for SSL named NorMatch. Firstly, we introduce a new uncertainty estimation scheme based on normalizing flows, as an auxiliary classifier, to enforce highly certain pseudolabels, yielding a boost of the discriminative classifier. Secondly, we introduce a thresholdfree sample weighting strategy to better exploit both high and low confidence pseudolabels. Furthermore, we utilize normalizing flows to model, in an unsupervised fashion, the distribution of unlabeled data. This modelling assumption can further improve the performance of generative classifiers via unlabeled data, and thus, implicitly contributes to training a better discriminative classifier. We demonstrate, through numerical and visual results, that NorMatch achieves stateoftheart performance on several datasets.
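The threshold-free weighting idea can be sketched in a few lines. This is a toy version: instead of discarding predictions below a fixed confidence cutoff, every unlabeled sample is kept and weighted by its confidence (the actual method derives certainty from the normalizing-flow auxiliary classifier, which is not modeled here):

```python
def weight_pseudo_labels(probs):
    """Threshold-free pseudo-labeling: for each predicted class
    distribution, emit (argmax label, confidence weight) instead of
    dropping low-confidence samples outright."""
    out = []
    for p in probs:
        conf = max(p)
        out.append((p.index(conf), conf))
    return out

# Both the confident and the uncertain sample contribute, just with
# different weights in the training loss.
labels = weight_pseudo_labels([[0.9, 0.1], [0.55, 0.45]])
```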
Latent SHAP Toward Practical HumanInterpretable Explanations ; Model agnostic feature attribution algorithms such as SHAP and LIME are ubiquitous techniques for explaining the decisions of complex classification models, such as deep neural networks. However, since complex classification models produce superior performance when trained on lowlevel or encoded features, in many cases, the explanations generated by these algorithms are neither interpretable nor usable by humans. Methods proposed in recent studies that support the generation of humaninterpretable explanations are impractical, because they require a fully invertible transformation function that maps the model's input features to the humaninterpretable features. In this work, we introduce Latent SHAP, a blackbox feature attribution framework that provides humaninterpretable explanations, without the requirement for a fully invertible transformation function. We demonstrate Latent SHAP's effectiveness using 1 a controlled experiment where invertible transformation functions are available, which enables robust quantitative evaluation of our method, and 2 celebrity attractiveness classification using the CelebA dataset where invertible transformation functions are not available, which enables thorough qualitative evaluation of our method.
Causal Inference with Confounders MNAR under Treatmentindependent Missingness Assumption ; Causal inference in observational studies can be challenging when confounders are subject to missingness. Generally, the identification of causal effects is not guaranteed even under restrictive parametric model assumptions when confounders are missing not at random. To address this, we propose a general framework to establish the identification of causal effects when confounders are subject to treatmentindependent missingness, which means that the missing data mechanism is independent of the treatment, given the outcome and the possibly missing confounders. We give special consideration to commonlyused models for continuous and binary outcomes and provide counterexamples when identification fails. For estimation, we provide a weighted estimating equation method for model parameters and propose three estimators for the average causal effect based on the estimated models. We evaluate the finitesample performance of the estimators via simulations. We further illustrate the proposed method with real data sets from the National Health and Nutrition Examination Survey.
Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality ; Recent datasets expose the lack of systematic generalization ability in standard sequencetosequence models. In this work, we analyze this behavior of seq2seq models and identify two contributing factors a lack of mutual exclusivity bias i.e., a source sequence already mapped to a target sequence is less likely to be mapped to other target sequences, and the tendency to memorize whole examples rather than separating structures from contents. We propose two techniques to address these two issues respectively Mutual Exclusivity Training that prevents the model from producing seen generations when facing novel, unseen examples via an unlikelihoodbased loss; and prim2primX data augmentation that automatically diversifies the arguments of every syntactic function to prevent memorization and provide a compositional inductive bias without exposing testset data. Combining these two techniques, we show substantial empirical improvements using standard sequencetosequence models LSTMs and Transformers on two widelyused compositionality datasets SCAN and COGS. Finally, we provide analysis characterizing the improvements as well as the remaining challenges, and provide detailed ablations of our method. Our code is available at https://github.com/owenzx/met-primaug
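A minimal sketch of an unlikelihoodbased loss in the spirit of Mutual Exclusivity Training, operating on a toy distribution over whole target sequences rather than the token-level seq2seq distributions used in the paper; the function name and weighting are illustrative assumptions.

```python
import math

def mutual_exclusivity_loss(probs, target, seen_targets, lam=0.5):
    """Cross-entropy on the gold target plus an unlikelihood term that
    pushes down the probability of already-seen target sequences.
    `probs` maps candidate targets to probabilities."""
    likelihood = -math.log(probs[target])
    unlikelihood = -sum(math.log(1.0 - probs[s]) for s in seen_targets)
    return likelihood + lam * unlikelihood

# The model wrongly concentrates mass on a previously seen target ("run");
# the unlikelihood term penalizes that in addition to the usual CE loss.
probs = {"jump": 0.2, "run": 0.7, "walk": 0.1}
loss = mutual_exclusivity_loss(probs, "jump", ["run"], lam=0.5)
```

With an empty `seen_targets` list the loss reduces to ordinary cross-entropy, so the term only activates when a seen generation competes with the gold target.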
A validation study of normoglycemia and dysglycemia indices as a diabetes risk model ; In this work, we test the performance of peak glucose concentration A and the average of glucose removal rates alpha as normoglycemia and dysglycemia indices on a population monitored at the Mexico General Hospital between the years 2017 and 2019. A total of 1911 volunteer patients at the Mexico General Hospital are considered: 1282 female patients with ages ranging from 17 to 80 years, and 629 male patients with ages ranging from 18 to 79 years. For each volunteer, OGTT data is gathered and the indices are estimated in Ackerman's model. A binary separation of normoglycemic and dysglycemic patients using a Support Vector Machine with a linear kernel is carried out. The classification indices are successful for 83% of the population. Population clusters corresponding to diabetic conditions and the progression from normoglycemia to T2DM can be identified. The classification indices A and alpha may be regarded as patient indices and used to detect diabetes risk. Also, criteria for the applicability of glucoseinsulin regulation models are introduced. The performance of Ackerman's model is shown.
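The binary separation step can be illustrated with any linear classifier. Below, a plain perceptron stands in for the paper's linear-kernel SVM, trained on invented, standardized (A, alpha) values rather than the clinical data.

```python
def train_perceptron(points, labels, epochs=50, lr=0.1):
    # Learn a linear decision boundary w1*A + w2*alpha + b = 0.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (a, alpha), y in zip(points, labels):   # y in {-1, +1}
            if y * (w[0] * a + w[1] * alpha + b) <= 0:   # misclassified
                w[0] += lr * y * a
                w[1] += lr * y * alpha
                b += lr * y
    return w, b

# Toy standardized indices: dysglycemic (+1) patients have higher peak A
# and lower removal rate alpha than normoglycemic (-1) patients.
pts = [(1.0, -0.5), (1.5, -0.4), (2.0, -0.6),
       (-1.0, 0.5), (-1.5, 0.4), (-2.0, 0.6)]
lab = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(pts, lab)
preds = [1 if w[0] * a + w[1] * al + b > 0 else -1 for a, al in pts]
```

On linearly separable data like this the perceptron converges to a separating boundary, which is the same geometric object a hard-margin linear SVM would produce up to the margin criterion.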
Synthetic data enable experiments in atomistic machine learning ; Machinelearning models are increasingly used to predict properties of atoms in chemical systems. There have been major advances in developing descriptors and regression frameworks for this task, typically starting from relatively small sets of quantummechanical reference data. Larger datasets of this kind are becoming available, but remain expensive to generate. Here we demonstrate the use of a large dataset that we have synthetically labelled with peratom energies from an existing ML potential model. The cheapness of this process, compared to the quantummechanical ground truth, allows us to generate millions of datapoints, in turn enabling rapid experimentation with atomistic ML models from the small to the largedata regime. This approach allows us here to compare regression frameworks in depth, and to explore visualisation based on learned representations. We also show that learning synthetic data labels can be a useful pretraining task for subsequent finetuning on small datasets. In the future, we expect that our opensourced dataset, and similar ones, will be useful in rapidly exploring deeplearning models in the limit of abundant chemical data.
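The synthetic-labelling pipeline can be caricatured in one dimension: a cheap "teacher" labels a large pool, a student is pretrained on those labels and then nudged toward a small, noisy ground-truth set. The linear models and all constants are stand-ins, not the paper's ML potentials.

```python
import random

def fit_linear(xs, ys):
    # Closed-form least squares for y ~ w * x (no intercept).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

random.seed(1)
teacher_w = 3.0       # existing ML potential: cheap, slightly biased labels
true_w = 3.2          # quantum-mechanical ground truth: expensive labels

# Millions of cheap datapoints in the paper; ten thousand here.
big_x = [random.uniform(-1, 1) for _ in range(10000)]
big_y = [teacher_w * x for x in big_x]               # synthetic labels
small_x = [random.uniform(-1, 1) for _ in range(20)]
small_y = [true_w * x + random.gauss(0, 0.05) for x in small_x]

w_pre = fit_linear(big_x, big_y)       # pretraining on synthetic labels
w_small = fit_linear(small_x, small_y)
# Crude stand-in for warm-started finetuning: damped step from the
# pretrained weight toward the small ground-truth fit.
w_fine = 0.5 * w_pre + 0.5 * w_small
```

The finetuned weight lands closer to the ground truth than the purely pretrained one, which is the qualitative claim behind using synthetic labels as a pretraining task.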
Probing New Physics in the Vectorlike Lepton Model by Lepton Electric Dipole Moments ; We examine the lepton dipole moments in an extension of the Standard Model SM, which contains vectorlike leptons that couple only to the secondgeneration SM leptons. The model naturally leads to sizable contributions to the muon g2 and the muon electric dipole moment EDM. One feature of this model is that a sizable electron EDM is also induced at the twoloop level due to the existence of new vectorlike leptons in the loops. We find parameter regions that can explain the muon g2 anomaly and are also consistent with the experimental constraints coming from the electron EDM and the Higgs decay $h \rightarrow \mu\mu$. The generated EDMs can be as large as $O(10^{-22})\, e\cdot\mathrm{cm}$ for the muon and $O(10^{-30})\, e\cdot\mathrm{cm}$ for the electron, respectively, which can be probed in future EDM measurement experiments.
Enabling Fast Unit Commitment Constraint Screening via Learning Cost Model ; Unit commitment UC is an essential tool for transmission system operators to find the most economical and feasible generation schedules and dispatch signals. Constraint screening has been receiving attention as it holds the promise of reducing the number of inactive or redundant constraints in the UC problem, so that the solution process of largescale UC problems can be accelerated by considering the reduced optimization problem. The standard constraint screening approach relies on optimizing over loads and generations to find binding line flow constraints, yet the screening is conservative, with a large percentage of constraints still reserved for the UC problem. In this paper, we propose a novel machine learning ML model to predict the most economical costs given load inputs. Such an ML model bridges the cost perspective of UC decisions to the optimizationbased constraint screening model, and can screen out a higher proportion of operational constraints. We verify the proposed method's performance in both sampleaware and sampleagnostic settings, and illustrate that the proposed scheme can further reduce the computation time for a variety of UC problem setups.
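The screening logic can be sketched as follows: if no dispatch cheaper than the ML-predicted cost budget can drive a line's flow to its limit, that line-flow constraint can be dropped from the UC problem. A coarse grid search stands in for the optimization, and all numbers (PTDFs, costs, limits, budget) are invented for the sketch.

```python
def can_bind(ptdf, pmax, cost, load, limit, cost_budget, steps=50):
    """Return True if the line-flow limit is reachable by some dispatch
    whose total cost stays within the predicted budget (constraint must
    then be kept); False means the constraint can be screened out."""
    g1 = [pmax[0] * i / steps for i in range(steps + 1)]
    g2 = [pmax[1] * i / steps for i in range(steps + 1)]
    worst = 0.0
    for p1 in g1:
        for p2 in g2:
            if abs(p1 + p2 - load) > 1e-6:              # power balance
                continue
            if cost[0] * p1 + cost[1] * p2 > cost_budget:  # cost screen
                continue
            worst = max(worst, abs(ptdf[0] * p1 + ptdf[1] * p2))
    return worst >= limit

# Two generators, one monitored line; the budget would come from the
# (hypothetical) learned cost model.
keep_tight = can_bind(ptdf=[0.6, -0.4], pmax=[100, 100], cost=[10, 20],
                      load=100, limit=55, cost_budget=1500)
screened = can_bind(ptdf=[0.6, -0.4], pmax=[100, 100], cost=[10, 20],
                    load=100, limit=65, cost_budget=1500)
```

Without the cost filter, the cheap high-flow dispatches would still be counted and fewer constraints could be screened, which is the conservatism the paper targets.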
ZeroShot Image Restoration Using Denoising Diffusion NullSpace Model ; Most existing Image Restoration IR models are taskspecific and cannot be generalized to different degradation operators. In this work, we propose the Denoising Diffusion NullSpace Model DDNM, a novel zeroshot framework for arbitrary linear IR problems, including but not limited to image superresolution, colorization, inpainting, compressed sensing, and deblurring. DDNM only needs a pretrained offtheshelf diffusion model as the generative prior, without any extra training or network modifications. By refining only the nullspace contents during the reverse diffusion process, we can yield diverse results satisfying both data consistency and realness. We further propose an enhanced and robust version, dubbed DDNM, to support noisy restoration and improve restoration quality for hard tasks. Our experiments on several IR tasks reveal that DDNM outperforms other stateoftheart zeroshot IR methods. We also demonstrate that DDNM can solve complex realworld applications, e.g., old photo restoration.
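The nullspace identity behind DDNM, x_hat = A^+ y + (I - A^+ A) x_gen, is easiest to see for a binary inpainting mask, where the pseudoinverse is the mask itself: observed pixels come from the measurement (range space), missing pixels from the generative prior (null space). The diffusion sample is faked with a constant image in this sketch.

```python
def ddnm_step(y, mask, x_generated):
    """x_hat = A^+ y + (I - A^+ A) x_generated for a masking operator A.
    For a binary mask, A^+ A is the mask, so the combination reduces to
    a per-pixel selection between measurement and generated content."""
    return [yi if m else xg for yi, m, xg in zip(y, mask, x_generated)]

x_true = [0.9, 0.1, 0.5, 0.7]
mask = [1, 0, 1, 0]                                 # A: keep pixels 0 and 2
y = [x * m for x, m in zip(x_true, mask)]           # degraded observation
x_gen = [0.5, 0.5, 0.5, 0.5]                        # stand-in diffusion sample
x_hat = ddnm_step(y, mask, x_gen)

# Data consistency: A x_hat == y exactly, by construction.
consistent = all(abs(xh * m - yi) < 1e-12
                 for xh, m, yi in zip(x_hat, mask, y))
```

In the full method this projection is applied at every reverse diffusion step, so the sampler only ever edits the nullspace content while the range-space content stays pinned to the measurement.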
Weakly Supervised Annotations for Multimodal Greeting Cards Dataset ; In recent years, a growing number of pretrained models trained on large corpora of data have yielded good performance on various tasks, such as classifying multimodal datasets. These models have shown good performance on natural images but are not fully explored for scarce abstract concepts in images. In this work, we introduce an imagetextbased dataset called the Greeting Cards Dataset GCD that has abstract visual concepts. In our work, we propose to aggregate features from pretrained image and text embeddings to learn abstract visual concepts from GCD. This allows us to learn the textmodified image features, which combine complementary and redundant information from the multimodal data streams into a single, meaningful feature. Secondly, the captions for the GCD dataset are computed with a pretrained CLIPbased image captioning model. Finally, we also demonstrate that the proposed dataset is useful for generating greeting card images using a pretrained texttoimage generation model.
On the Change of Decision Boundaries and Loss in Learning with Concept Drift ; The notion of concept drift refers to the phenomenon that the distribution generating the observed data changes over time. If drift is present, machine learning models may become inaccurate and need adjustment. Many technologies for learning with drift rely on the interleaved testtrain error ITTE as a quantity which approximates the model generalization error and triggers drift detection and model updates. In this work, we investigate to what extent this procedure is mathematically justified. More precisely, we relate a change of the ITTE to the presence of real drift, i.e., a changed posterior, and to a change of the training result under the assumption of optimality. We support our theoretical findings by empirical evidence for several learning algorithms, models, and datasets.
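The ITTE itself is simple to compute: each arriving sample is first used for testing, then for training, and a windowed error rate is tracked; a rise in that rate is the usual drift trigger. The running majority-class learner below is an invented stand-in for an arbitrary online model.

```python
from collections import deque

def itte(stream, window=50):
    """Interleaved test-then-train error of a running majority-class
    learner. Returns the windowed error rate after each sample."""
    counts = {0: 0, 1: 0}
    recent = deque(maxlen=window)
    errors = []
    for _, label in stream:
        pred = 1 if counts[1] >= counts[0] else 0   # test first ...
        recent.append(int(pred != label))
        errors.append(sum(recent) / len(recent))
        counts[label] += 1                           # ... then train
    return errors

# Abrupt concept drift: the majority label flips halfway through.
stream = [(None, 0)] * 100 + [(None, 1)] * 100
err = itte(stream)
```

Before the drift point the windowed ITTE settles at zero; after it, the stale model errs on every sample until retrained, so the jump in `err` is exactly the signal drift detectors threshold on.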
BlendGAN Learning and Blending the Internal Distributions of Single Images by Spatial ImageIdentity Conditioning ; Training a generative model on a single image has drawn significant attention in recent years. Single image generative methods are designed to learn the internal patch distribution of a single natural image at multiple scales. These models can be used for drawing diverse samples that semantically resemble the training image, as well as for solving many image editing and restoration tasks that involve that particular image. Here, we introduce an extended framework, which makes it possible to simultaneously learn the internal distributions of several images, using a single model with spatially varying imageidentity conditioning. Our BlendGAN opens the door to applications that are not supported by singleimage models, including morphing, melding, and structuretexture fusion between two or more arbitrary images.
Controllable Image Captioning via Prompting ; Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, in a factual or emotional view, etc. In this paper, we show that a unified model can perform well in diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding prompt learning into the image captioning framework. To be specific, we design a set of prompts to finetune the pretrained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in each domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding heuristic prompt engineering while exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks, including COCO Karpathy split and TextCaps, using a unified model.
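The style-switching mechanism can be sketched as prepending a style's learned continuous vectors to the input embedding sequence before it reaches the captioner; the prompt vectors, dimensions, and style names below are invented placeholders, not the paper's learned values.

```python
# Learned continuous prompts, one list of vectors per captioning style
# (toy 3-dimensional embeddings; real prompts live in the word
# embedding space of the pretrained captioner).
style_prompts = {
    "factual":   [[0.1, 0.0, 0.0], [0.2, 0.0, 0.0]],
    "emotional": [[0.0, 0.3, 0.0], [0.0, 0.4, 0.0]],
}

def with_prompt(style, token_embeddings):
    """Prepend the chosen style's prompt vectors to the input sequence;
    the frozen-architecture captioner then runs on the extended sequence."""
    return style_prompts[style] + token_embeddings

tokens = [[1.0, 1.0, 1.0]]          # stand-in for image/word embeddings
seq = with_prompt("emotional", tokens)
```

At inference, swapping the key passed to `with_prompt` is all that changes between styles, which is why a single set of captioner weights can serve every domain.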
The Optimality of Blocking Designs in Equally and Unequally Allocated Randomized Experiments with General Response ; We consider the performance of the differenceinmeans estimator in a twoarm randomized experiment under common experimental endpoints such as continuous regression, incidence, proportion and survival. We examine performance under both equal and unequal allocation to treatment groups, and we consider both the Neyman randomization model and the population model. We show that in the Neyman model, where the only source of randomness is the treatment manipulation, there is no free lunch: complete randomization is minimax for the estimator's mean squared error. In the population model, where each subject experiences response noise with zero mean, the optimal design is the deterministic perfectbalance allocation. However, this allocation is generally NPhard to compute and, moreover, depends on unknown response parameters. When considering the tail criterion of Kapelner et al. 2021, we show the optimal design is less random than complete randomization and more random than the deterministic perfectbalance allocation. We prove that Fisher's blocking design provides the asymptotically optimal degree of experimental randomness. Theoretical results are supported by simulations in all considered experimental settings.
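The gap between complete randomization and blocking is easy to see in simulation: pairing units on a strong covariate and randomizing within pairs removes most covariate imbalance from the differenceinmeans estimator. A toy sketch with invented parameters, not the paper's setting:

```python
import random

def simulate(n_pairs=50, reps=2000, effect=1.0, seed=0):
    rng = random.Random(seed)
    # Fixed finite population with a strong prognostic covariate x.
    xs = sorted(rng.gauss(0, 3) for _ in range(2 * n_pairs))

    def outcome(x, treated):
        return x + effect * treated + rng.gauss(0, 0.1)

    mse_c = mse_b = 0.0
    for _ in range(reps):
        # Complete randomization: any half of the units gets treatment.
        idx = list(range(2 * n_pairs))
        rng.shuffle(idx)
        treat = set(idx[:n_pairs])
        diff = (sum(outcome(xs[i], 1) for i in treat) / n_pairs
                - sum(outcome(xs[i], 0)
                      for i in range(2 * n_pairs) if i not in treat) / n_pairs)
        mse_c += (diff - effect) ** 2
        # Blocking: pair adjacent units on x, randomize within each pair.
        est = 0.0
        for p in range(n_pairs):
            a, b = 2 * p, 2 * p + 1
            if rng.random() < 0.5:
                a, b = b, a
            est += (outcome(xs[a], 1) - outcome(xs[b], 0)) / n_pairs
        mse_b += (est - effect) ** 2
    return mse_c / reps, mse_b / reps

mse_complete, mse_blocked = simulate()
```

Within-pair randomization cancels the shared covariate value of each pair, so the blocked estimator's MSE is driven almost entirely by the small response noise rather than by covariate imbalance.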
Query Your Model with Definitions in FrameNet An Effective Method for Frame Semantic Role Labeling ; Frame Semantic Role Labeling FSRL identifies arguments and labels them with frame semantic roles defined in FrameNet. Previous studies tend to divide FSRL into argument identification and role classification. Such methods usually model role classification as naive multiclass classification and treat arguments individually, which neglects label semantics and interactions between arguments and thus hinders the performance and generalization of models. In this paper, we propose a querybased framework named ArGument Extractor with Definitions in FrameNet AGED to mitigate these problems. Definitions of frames and frame elements FEs in FrameNet can be used to query arguments in text. Encoding textdefinition pairs can guide models in learning label semantics and strengthening argument interactions. Experiments show that AGED outperforms the previous stateoftheart by up to 1.3 F1score on two FrameNet datasets, and demonstrate the generalization power of AGED in zeroshot and fewshot scenarios. Our code and technical appendix are available at https://github.com/PKUnlp-icler/AGED.