A Unified Theory of Free Energy Functionals and Applications to Diffusion ; Free energy functionals of Ginzburg-Landau type lie at the heart of a broad class of continuum dynamical models, such as the Cahn-Hilliard and Swift-Hohenberg equations. Despite the wide use of such models, the assumptions embodied in the free energy functionals are frequently either poorly justified or lead to physically opaque parameters. Here, we introduce a mathematically rigorous pathway for constructing free energy functionals that generalizes beyond the constraints of Ginzburg-Landau gradient expansions. We show that the new formalism unifies existing free energetic descriptions under a single umbrella by establishing the criteria under which the generalized free energy reduces to gradient-based representations. Consequently, we derive a precise physical interpretation of the gradient energy parameter in the Cahn-Hilliard model as the product of an interaction length scale and the free energy curvature. The practical impact of our approach is demonstrated using both a model free energy function and the silicon-germanium alloy system.
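For context, the gradient energy parameter discussed above is the coefficient kappa in the standard Cahn-Hilliard free energy functional (notation here is the conventional one, not taken from the paper):

```latex
F[c] \;=\; \int_V \left[\, f(c) \;+\; \frac{\kappa}{2}\,\lvert \nabla c \rvert^{2} \,\right]\,\mathrm{d}V
```

where $f(c)$ is the homogeneous free energy density of composition $c$. The abstract's result interprets $\kappa$ as the product of an interaction length scale and the free energy curvature $f''(c)$.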
Generalized Rectifier Wavelet Covariance Models For Texture Synthesis ; State-of-the-art maximum entropy models for texture synthesis are built from statistics relying on image representations defined by convolutional neural networks (CNNs). Such representations capture rich structures in texture images, outperforming wavelet-based representations in this regard. However, in contrast to neural networks, wavelets offer meaningful representations, as they are known to detect structures at multiple scales (e.g., edges) in images. In this work, we propose a family of statistics built upon nonlinear wavelet-based representations, which can be viewed as a particular instance of a one-layer CNN, using a generalized rectifier nonlinearity. These statistics significantly improve the visual quality of previous classical wavelet-based models, and allow one to produce syntheses of quality similar to state-of-the-art models, on both grayscale and color textures.
Non-neural Models Matter: A Re-evaluation of Neural Referring Expression Generation Systems ; In recent years, neural models have often outperformed rule-based and classic machine learning approaches in NLG. These classic approaches are now often disregarded, for example when new neural models are evaluated. We argue that they should not be overlooked, since, for some tasks, well-designed non-neural approaches achieve better performance than neural ones. In this paper, the task of generating referring expressions in linguistic context is used as an example. We examined two very different English datasets (WebNLG and WSJ), and evaluated each algorithm using both automatic and human evaluations. Overall, the results of these evaluations suggest that rule-based systems with simple rule sets achieve on-par or better performance on both datasets compared to state-of-the-art neural REG systems. In the case of the more realistic dataset, WSJ, a machine-learning-based system with well-designed linguistic features performed best. We hope that our work can encourage researchers to consider non-neural models in future work.
Existentially closed measure-preserving actions of free groups ; This paper is motivated by the study of probability measure-preserving (p.m.p.) actions of free groups using continuous model theory. Such an action is treated as a metric structure that consists of the measure algebra of the probability measure space expanded by a family of its automorphisms. We prove that the existentially closed p.m.p. actions of a given free group form an elementary class, and therefore the theory of p.m.p. F_k-actions has a model companion. We show this model companion is stable and has quantifier elimination. We also prove that the action of F_k on its profinite completion with the Haar measure is metrically generic and therefore, as we show, it is existentially closed. We deduce our main result from a more general theorem, which gives a set of sufficient conditions for the existence of a model companion for the theory of F_k-actions on a separably categorical, stable metric structure.
Learning Whole Heart Mesh Generation From Patient Images For Computational Simulations ; Patient-specific cardiac modeling combines geometries of the heart derived from medical images with biophysical simulations to predict various aspects of cardiac function. However, generating simulation-suitable models of the heart from patient image data often requires complicated procedures and significant human effort. We present a fast and automated deep-learning method to construct simulation-suitable models of the heart from medical images. The approach constructs meshes from 3D patient images by learning to deform a small set of deformation handles on a whole heart template. For both 3D CT and MR data, this method achieves promising accuracy for whole heart reconstruction, consistently outperforming prior methods in constructing simulation-suitable meshes of the heart. When evaluated on time-series CT data, this method produced more anatomically and temporally consistent geometries than prior methods, and was able to produce geometries that better satisfy modeling requirements for cardiac flow simulations. Our source code will be available on GitHub.
Performance of Deep Learning models with transfer learning for multiple-step-ahead forecasts in monthly time series ; Deep Learning and transfer learning models are being used to generate time series forecasts; however, there is scarce evidence about their predictive performance, and this is especially true for monthly time series. The purpose of this paper is to compare Deep Learning models, with and without transfer learning, against other traditional methods used for monthly forecasts, to answer three questions about the suitability of Deep Learning and transfer learning for generating time series predictions. Time series from the M4 and M3 competitions were used for the experiments. The results suggest that deep learning models based on TCN, LSTM, and CNN with transfer learning tend to surpass the predictive performance of other traditional methods. On the other hand, TCN and LSTM trained directly on the target time series achieved similar or better performance than traditional methods for some forecast horizons.
On the Effect of Pre-Processing and Model Complexity for Plastic Analysis Using Short-Wave-Infrared Hyper-Spectral Imaging ; The importance of plastic waste recycling is undeniable. In this respect, computer vision and deep learning enable solutions through the automated analysis of short-wave-infrared hyperspectral images of plastics. In this paper, we offer an exhaustive empirical study to show the importance of efficient model selection for resolving the task of hyperspectral image segmentation of various plastic flakes using deep learning. We assess the complexity level of generic and specialized models and infer their performance capacity: generic models are often unnecessarily complex. We introduce two variants of a specialized hyperspectral architecture, PlasticNet, that outperform several well-known segmentation architectures in both performance and computational complexity. In addition, we shed light on the significance of signal pre-processing within the realm of hyperspectral imaging. To complete our contribution, we introduce the largest, most versatile hyperspectral dataset of plastic flakes of four primary polymer types.
On the Modeling and Simulation of Portfolio Allocation Schemes: an Approach based on Network Community Detection ; We present a study on portfolio investments in financial applications. We describe a general modeling and simulation framework and study the impact of using different metrics to measure the correlation among assets. In particular, besides the traditional Pearson's correlation, we employ Detrended Cross-Correlation Analysis (DCCA) and Detrended Partial Cross-Correlation Analysis (DPCCA). Moreover, a novel portfolio allocation scheme is introduced that treats assets as a complex network and uses modularity to detect communities of correlated assets. Weights of the allocation are then distributed among the different communities for the sake of diversification. Simulations compare this novel scheme against the Critical Line Algorithm (CLA), Inverse Variance Portfolio (IVP), and Hierarchical Risk Parity (HRP). Synthetic time series are generated using the Gaussian model, geometric Brownian motion, GARCH, ARFIMA, and modified ARFIMA models. Results show that the proposed scheme outperforms state-of-the-art approaches in many scenarios. We also validate the simulation results via backtesting, whose results confirm the viability of the proposal.
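The community-based allocation idea above can be sketched in a few lines. This is only an illustrative stand-in: it clusters assets by thresholding correlations with a union-find (the paper uses modularity-based community detection on the asset network), then splits capital equally across communities and within each community.

```python
from itertools import combinations

def cluster_assets(corr, threshold=0.5):
    """Union-find clustering: link assets whose |correlation| exceeds a
    threshold (a simplified stand-in for modularity-based detection)."""
    parent = list(range(len(corr)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(corr)), 2):
        if abs(corr[i][j]) > threshold:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(corr)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

def community_weights(corr, threshold=0.5):
    """Split capital equally across communities, then equally within each,
    so no single group of correlated assets dominates the portfolio."""
    clusters = cluster_assets(corr, threshold)
    weights = [0.0] * len(corr)
    for members in clusters:
        for i in members:
            weights[i] = 1.0 / (len(clusters) * len(members))
    return weights

# Toy correlation matrix: assets 0 and 1 are highly correlated, asset 2 is not.
corr = [
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]
print(community_weights(corr))  # → [0.25, 0.25, 0.5]
```

Note how the uncorrelated asset receives as much weight as the whole correlated pair, which is the diversification effect the abstract describes.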
What to Hide from Your Students: Attention-Guided Masked Image Modeling ; Transformers and masked language modeling are quickly being adopted and explored in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that image token masking differs from token masking in text, due to the amount and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and exhibit this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map, which we use to guide masking for the student. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness over random masking for dense distillation-based MIM as well as plain distillation-based self-supervised learning on classification tokens. We confirm that AttMask accelerates the learning process and improves performance on a variety of downstream tasks. We provide the implementation code at https://github.com/gkakogeorgiou/attmask.
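The core of the informed-masking idea reduces to a simple selection rule: mask the tokens the teacher attends to most. A minimal sketch (the attention values and ratio here are made up; the actual method operates on transformer attention maps over image patches):

```python
def attention_guided_mask(attention, mask_ratio=0.5):
    """Mask the most-attended tokens: sort token indices by descending
    teacher attention and hide the top fraction given by mask_ratio."""
    n_mask = int(len(attention) * mask_ratio)
    order = sorted(range(len(attention)), key=lambda i: -attention[i])
    masked = set(order[:n_mask])
    # Boolean mask over token positions: True = hidden from the student.
    return [i in masked for i in range(len(attention))]

attn = [0.05, 0.40, 0.10, 0.30, 0.15]  # hypothetical teacher attention per token
print(attention_guided_mask(attn, mask_ratio=0.4))
# → [False, True, False, True, False]: the two most-attended tokens are masked
```

Hiding highly attended (i.e., salient) tokens is what makes the pretext task harder than uniform random masking.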
Models of modified F(R,T) and cuscuton braneworld ; This work deals with braneworld models in the presence of scalar fields in a five-dimensional warped geometry with a single extra dimension of infinite extent. We consider generalized models in the presence of the Ricci scalar, the trace of the stress-energy tensor, and the cuscuton contribution. The models describe novel braneworld scenarios, and the investigations consider distinct possibilities, from which we show how the brane may change to become thinner, while keeping gravitational stability and the gravity zero mode under strict control. Moreover, we did not identify any split behavior in the warp factor in the presence of the cuscuton and the trace of the stress-energy tensor.
Modified Equations of State for Dark Energy and Observational Limitations ; Cosmological models with variable and modified equations of state for dark energy are confronted with observational data, including Type Ia supernovae, Hubble parameter data H(z) from different sources, and observational manifestations of the cosmic microwave background radiation (CMB). We consider scenarios generalizing the Lambda-CDM, wCDM, and Chevallier-Polarski-Linder (CPL) models with nonzero curvature and compare their predictions. The most successful model, with the dark energy equation of state w = w0 + w1(1 - a^2)/2, was studied in detail. These models are interesting in possibly alleviating the Hubble constant (H0) tension, but they achieved only modest success in this direction with the considered observational data.
LogicInference: A New Dataset for Teaching Logical Inference to seq2seq Models ; Machine learning models such as Transformers or LSTMs struggle with tasks that are compositional in nature, such as those involving reasoning and inference. Although many datasets exist to evaluate compositional generalization, when it comes to evaluating inference abilities, options are more limited. This paper presents LogicInference, a new dataset to evaluate the ability of models to perform logical inference. The dataset focuses on inference using propositional logic and a small subset of first-order logic, represented both in semi-formal logical notation and in natural language. We also report initial results using a collection of machine learning models to establish an initial baseline on this dataset.
Pareto Set Learning for Neural Multi-objective Combinatorial Optimization ; Multi-objective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In this work, we generalize the idea of neural combinatorial optimization and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedures. We propose a single preference-conditioned model to directly generate approximate Pareto solutions for any trade-off preference, and design an efficient multi-objective reinforcement learning algorithm to train this model. Our proposed method can be treated as a learning-based extension of the widely used decomposition-based multi-objective evolutionary algorithm (MOEA/D). It uses a single model to accommodate all possible preferences, whereas other methods use a finite number of solutions to approximate the Pareto set. Experimental results show that our proposed method significantly outperforms other methods on the multi-objective traveling salesman problem, multi-objective vehicle routing problem, and multi-objective knapsack problem in terms of solution quality, speed, and model efficiency.
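The decomposition idea underlying MOEA/D-style methods (and the preference-conditioning above) can be illustrated with a weighted-sum scalarization: each trade-off preference turns the multi-objective problem into a single-objective one, and sweeping preferences traces out (part of) the Pareto set. The candidate objective vectors below are made up for illustration:

```python
def scalarize(objectives, preference):
    """Weighted-sum scalarization of a multi-objective cost vector,
    as used in decomposition-based methods such as MOEA/D."""
    return sum(w * f for w, f in zip(preference, objectives))

def best_for_preference(candidates, preference):
    """Pick the candidate minimizing the scalarized cost for one
    trade-off preference vector."""
    return min(candidates, key=lambda objs: scalarize(objs, preference))

# Hypothetical bi-objective costs (e.g., tour length vs. a second criterion).
candidates = [(1.0, 9.0), (4.0, 4.0), (9.0, 1.0)]
print(best_for_preference(candidates, (0.9, 0.1)))  # → (1.0, 9.0), favours objective 1
print(best_for_preference(candidates, (0.5, 0.5)))  # → (4.0, 4.0), balanced trade-off
```

The paper's contribution is to replace the finite population of solutions with a single model that maps any preference vector directly to an approximate Pareto solution.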
Learning Fair Models without Sensitive Attributes: A Generative Approach ; Most existing fair classifiers rely on sensitive attributes to achieve fairness. However, in many scenarios, we cannot obtain sensitive attributes due to privacy and legal issues. The lack of sensitive attributes challenges many existing works. Though we lack sensitive attributes, for many applications there usually exist features or information of various formats that are relevant to sensitive attributes. For example, a person's purchase history can reflect his or her race, which would be helpful for learning fair classifiers with respect to race. However, work on exploring relevant features for learning fair models without sensitive attributes is rather limited. Therefore, in this paper we study the novel problem of learning fair models without sensitive attributes by exploring relevant features. We propose a probabilistic generative framework to effectively estimate the sensitive attribute from the training data with relevant features in various formats, and to utilize the estimated sensitive attribute information to learn fair models. Experimental results on real-world datasets show the effectiveness of our framework in terms of both accuracy and fairness.
Towards In-field Navigation: leveraging simulated data for crop row detection ; Agricultural datasets for crop row detection are often bound by their limited number of images. This restricts researchers from developing deep-learning-based models for precision agricultural tasks involving crop row detection. We suggest the utilization of small real-world datasets along with additional data generated by simulations to yield crop row detection performance similar to that of a model trained with a large real-world dataset. Our method could reach the performance of a deep-learning-based crop row detection model trained with real-world data while using 60% less labelled real-world data. Our model performed well against field variations such as shadows, sunlight, and growth stages. We introduce an automated pipeline to generate labelled images for crop row detection in the simulation domain. An extensive comparison is done to analyze the contribution of simulated data towards reaching robust crop row detection in various real-world field scenarios.
Multipartite correlations in quantum collision models ; Quantum collision models have proved to be useful for a clear and concise description of many physical phenomena in the field of open quantum systems: thermalization, decoherence, homogenization, non-equilibrium steady states, entanglement generation, simulation of many-body dynamics, and quantum thermometry. A challenge in the standard collision model, where the system and many ancillas are all initially uncorrelated, is how to describe quantum correlations among ancillas induced by successive system-ancilla interactions. Another challenge is how to deal with initially correlated ancillas. Here we develop a tensor network formalism to address both challenges. We show that the induced correlations in the standard collision model are well captured by a matrix product state (a matrix product density operator) if the colliding particles are in pure (mixed) states. In the case of initially correlated ancillas, we construct a general tensor diagram for the system dynamics and derive a memory-kernel master equation. Analyzing the perturbation series for the memory kernel, we go beyond the recent results concerning the leading role of two-point correlations and consider multipoint correlations (Waldenfels cumulants) that become relevant in the higher-order stroboscopic limits. These results open an avenue for further analysis of memory effects in collisional quantum dynamics.
Surrogate-Assisted Evolutionary Generative Design Of Breakwaters Using Deep Convolutional Networks ; In the paper, a multi-objective evolutionary surrogate-assisted approach for the fast and effective generative design of coastal breakwaters is proposed. To approximate the computationally expensive objective functions, a deep convolutional neural network is used as a surrogate model. This model allows optimizing a configuration of breakwaters with a different number of structures and segments. In addition to the surrogate, an assistant model was developed to estimate the confidence of predictions. The proposed approach was tested on a synthetic water area; the SWAN model was used to calculate the wave heights. The experimental results confirm that the proposed approach allows obtaining more effective (less expensive, with better protective properties) solutions than non-surrogate approaches in the same amount of time.
A Model-based Technique for Ad Hoc Correction of Instrumental Polarization in Solar Spectropolarimetry ; We present a new approach for correcting instrumental polarization by modeling the non-depolarizing effects of a complex series of optical elements to determine physically realizable Mueller matrices. Provided that the Mueller matrix of the optical system can be decomposed into a general elliptical diattenuator and a general elliptical retarder, it is possible to model the crosstalk between both the polarized and unpolarized states of the Stokes vector and then use the acquired science observations to determine the best-fit free parameters. Here, we implement a minimization for solar spectropolarimetric measurements containing photospheric spectral lines sensitive to the Zeeman effect, using physical constraints provided by polarized line and continuum formation. This model-based approach is able to provide an accurate correction even in the presence of large amounts of polarization crosstalk and conserves the physically meaningful magnitude of the Stokes vector, a significant improvement over previous ad hoc techniques.
Effective model for studying optical properties of lead-halide perovskites ; We use general symmetry-based arguments to construct an effective model suitable for studying optical properties of lead-halide perovskites. To build the model, we identify an atomic-level interaction between electromagnetic fields and the spin degree of freedom that should be added to a minimally-coupled k·p Hamiltonian. As a first application, we study two basic optical characteristics of the material: the Verdet constant and the refractive index. Beyond these linear characteristics, the model is suitable for calculating nonlinear effects such as the third-order optical susceptibility. Analysis of this quantity shows that the geometrical properties of the spin-electric term imply an isotropic optical response of the system, and that the optical anisotropy of lead-halide perovskites is a manifestation of hopping of charge carriers. To illustrate this, we discuss third-harmonic generation.
Polling Latent Opinions: A Method for Computational Sociolinguistics Using Transformer Language Models ; Text analysis of social media for sentiment, topics, and other properties depends initially on the selection of keywords and phrases that will be used to create the research corpora. However, keywords that researchers choose may occur infrequently, leading to errors that arise from using small samples. In this paper, we use the capacity for memorization, interpolation, and extrapolation of Transformer language models such as the GPT series to learn the linguistic behaviors of a subgroup within larger corpora of Yelp reviews. We then use prompt-based queries to generate synthetic text that can be analyzed to produce insights into specific opinions held by the populations that the models were trained on. Once learned, more specific sentiment queries can be made of the model with high levels of accuracy when compared to traditional keyword searches. We show that even in cases where a specific keyphrase is limited or not present at all in the training corpora, GPT is able to accurately generate large volumes of text that have the correct sentiment.
Explaining the W boson mass anomaly and dark matter with a U(1) dark sector ; The W boson mass recently reported by the CDF collaboration shows a deviation from the standard model prediction, with an excess at the 7-sigma level. We investigate two simple extensions of the standard model with an extra U(1) dark sector. One is the U(1)_X extension, where the U(1)_X gauge field mixes with the standard model through gauge kinetic terms. The other is a general U(1)_{AY+Bq} extension of the standard model. Fitting various experimental constraints, we find the U(1)_X extension with only kinetic mixing can enhance the W boson mass by at most 10 MeV, while the U(1)_{AY+Bq} extension can easily generate a 77 MeV enhancement of the W boson mass and also offer a viable dark matter candidate with mass ranging from several hundred GeV to TeV, which may be detected by future dark matter direct detection experiments with improved sensitivities.
Deep learning-based surrogate model for 3D patient-specific computational fluid dynamics ; Optimization and uncertainty quantification have been playing an increasingly important role in computational hemodynamics. However, existing methods based on principled modeling and classic numerical techniques have faced significant challenges, particularly when it comes to complex 3D patient-specific shapes in the real world. First, it is notoriously challenging to parameterize the input space of arbitrarily complex 3D geometries. Second, the process often involves massive forward simulations, which are extremely computationally demanding or even infeasible. We propose a novel deep learning surrogate modeling solution to address these challenges and enable rapid hemodynamic predictions. Specifically, a statistical generative model for 3D patient-specific shapes is developed based on a small set of baseline patient-specific geometries. An unsupervised shape correspondence solution is used to enable geometric morphing and scalable shape synthesis statistically. Moreover, a simulation routine is developed for automatic data generation by automatic meshing, boundary setting, simulation, and post-processing. An efficient supervised learning solution is proposed to map the geometric inputs to the hemodynamic predictions in latent spaces. Numerical studies on aortic flows are conducted to demonstrate the effectiveness and merit of the proposed techniques.
Non-autoregressive Model for Full-line Code Completion ; Code completion tools are frequently used by software developers to accelerate software development by suggesting upcoming code elements. Completing a sequence of code tokens (e.g., a full line of code) has proved more efficient than predicting a single token at a time. To complete the code sequence, researchers employ autoregressive (AR) decoders to generate tokens in a left-to-right, token-by-token fashion. Consequently, the prediction of the next token depends on all previously generated tokens, which leads to high latency in inference. To improve the efficiency and accuracy of full-line code completion, in this paper we propose a Non-AutoRegressive (NAR) model for code completion boosted by a syntax-aware sampling strategy. Our experimental results on two widely used datasets suggest that our model outperforms both AR and NAR baselines on full-line code completion, and it is faster than the AR model, with up to a 9x speedup.
Learning to Revise References for Faithful Summarization ; In real-world scenarios with naturally occurring datasets, reference summaries are noisy and may contain information that cannot be inferred from the source text. On large news corpora, removing low-quality samples has been shown to reduce model hallucinations. Yet, for smaller and/or noisier corpora, filtering is detrimental to performance. To improve reference quality while retaining all data, we propose a new approach to selectively rewrite unsupported reference sentences to better reflect source data. We automatically generate a synthetic dataset of positive and negative revisions by corrupting supported sentences, and learn to revise reference sentences with contrastive learning. The intensity of revisions is treated as a controllable attribute so that, at inference, diverse candidates can be over-generated then re-scored to balance faithfulness and abstraction. To test our methods, we extract noisy references from publicly available MIMIC-III discharge summaries for the task of hospital-course summarization, and vary the data on which models are trained. According to metrics and human evaluation, models trained on revised clinical references are much more faithful, informative, and fluent than models trained on original or filtered data.
Pre-classification based stochastic reduced-order model for time-dependent complex systems ; We propose a novel stochastic reduced-order model (SROM) for complex systems by combining clustering and classification strategies. Specifically, the distance and centroid of centroidal Voronoi tessellation (CVT) are redefined according to the optimality of proper orthogonal decomposition (POD), thereby obtaining a time-dependent generalized CVT, and each class can generate a set of cluster-based POD (CPOD) basis functions. To learn the classification mechanism of random input, a naive Bayes pre-classifier and the clustering results are applied. Then, for a new input, the set of CPOD basis functions associated with the predicted label is used to reduce the corresponding model. Rigorous error analysis is shown, and a discussion of the stochastic Navier-Stokes equation is given to provide a context for the application of this model. Numerical experiments verify that the accuracy of our SROM is improved compared with the standard POD method.
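The POD step at the heart of such reduced-order models can be sketched with a snapshot SVD: the leading left singular vectors of a snapshot matrix form an energy-optimal reduced basis. This toy example (the snapshot data is synthetic, and this shows plain POD rather than the paper's cluster-based CPOD variant) builds rank-2 data and checks that a 2-mode basis reconstructs it:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition: the leading r left singular
    vectors of the snapshot matrix give the reduced basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduce_and_reconstruct(snapshots, r):
    """Project snapshots onto the r-dimensional POD subspace and lift back."""
    Phi = pod_basis(snapshots, r)
    return Phi @ (Phi.T @ snapshots)

rng = np.random.default_rng(0)
# Synthetic snapshot matrix: 50 spatial dofs, 20 time samples, built
# from 2 underlying modes, so a rank-2 POD basis is exact here.
modes = rng.normal(size=(50, 2))
coeffs = rng.normal(size=(2, 20))
X = modes @ coeffs
X2 = reduce_and_reconstruct(X, r=2)
print(np.allclose(X, X2))  # → True: two modes capture all the energy
```

The CPOD idea in the abstract builds one such basis per cluster of the (time-dependent, POD-optimal) CVT, rather than a single global basis.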
Graphical Residual Flows ; Graphical flows add further structure to normalizing flows by encoding non-trivial variable dependencies. Previous graphical flow models have focused primarily on a single flow direction: the normalizing direction for density estimation, or the generative direction for inference. However, to use a single flow to perform tasks in both directions, the model must exhibit stable and efficient flow inversion. This work introduces graphical residual flows, a graphical flow based on invertible residual networks. Our approach to incorporating dependency information in the flow means that we are able to calculate the Jacobian determinant of these flows exactly. Our experiments confirm that graphical residual flows provide stable and accurate inversion that is also more time-efficient than alternative flows with similar task performance. Furthermore, our model provides performance competitive with other graphical flows for both density estimation and inference tasks.
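The "stable and efficient flow inversion" property of residual flows comes from a simple mechanism: a residual step f(x) = x + g(x) with contractive g is invertible by fixed-point iteration. A scalar sketch (the specific g is an arbitrary contraction chosen for illustration, not a trained network):

```python
import math

def g(x):
    """A contractive residual block (Lipschitz constant 0.5 < 1)."""
    return 0.5 * math.tanh(x)

def forward(x):
    """Residual flow step: f(x) = x + g(x), invertible because g is contractive."""
    return x + g(x)

def inverse(y, n_iter=100):
    """Invert f by the fixed-point iteration x <- y - g(x), which converges
    geometrically since g is a contraction (the standard i-ResNet scheme)."""
    x = y
    for _ in range(n_iter):
        x = y - g(x)
    return x

y = forward(1.3)
print(abs(inverse(y) - 1.3) < 1e-9)  # → True: the iteration recovers x
```

Each iteration shrinks the error by the Lipschitz constant of g, so a modest number of iterations suffices, which is why inversion can be both stable and time-efficient.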
Gravitational waves from small spin-up and spin-down events of neutron stars ; It was recently reported that there exists a population of glitch candidates and anti-glitch candidates, which are effectively small spin-ups and spin-downs of a neutron star with magnitudes smaller than those seen in typical glitches. The physical origin of these small events is not yet understood. In this paper, we outline a model that can account for the changes in spin and, crucially, is independently testable with gravitational wave observations. In brief, the model posits that small spin-up/spin-down events are caused by the excitation and decay of non-axisymmetric f-modes, which radiate angular momentum away in a burst-like way as gravitational waves. The model takes the change in spin frequency as an input and outputs the initial mode amplitude and the signal-to-noise ratio achievable from gravitational wave detectors. We find that the model presented here will become falsifiable once third-generation gravitational wave detectors, like the Einstein Telescope and Cosmic Explorer, begin taking data.
Accelerating Materials-Space Exploration for Thermal Insulators by Mapping Materials Properties via Artificial Intelligence ; Reliable artificial-intelligence models have the potential to accelerate the discovery of materials with optimal properties for various applications, including superconductivity, catalysis, and thermoelectricity. Advancements in this field are often hindered by the scarcity and quality of available data and the significant effort required to acquire new data. For such applications, reliable surrogate models that help guide materials-space exploration using easily accessible materials properties are urgently needed. Here, we present a general, data-driven framework that provides quantitative predictions as well as qualitative rules for steering data creation for all datasets via a combination of symbolic regression and sensitivity analysis. We demonstrate the power of the framework by generating an accurate analytic model for the lattice thermal conductivity using only 75 experimentally measured values. By extracting the most influential material properties from this model, we are then able to hierarchically screen 732 materials and identify 80 ultra-insulating materials.
Analytical Model of Compact Star with a new version of Modified Chaplygin Equation of State ; In this paper we present a new model for a compact star with anisotropic matter distribution, considering the new version of the Chaplygin fluid equation of state of Errehymy and Daoud (2021). We specify the particular form of the metric potential proposed by Thirukanesh and Ragel (2012) and generalized by Malaver (2014) in order to integrate Einstein's field equations. The obtained model satisfies all the physical properties expected in a realistic star. The radial pressure, energy density, metric coefficients, anisotropy, and mass are well defined and regular in the stellar interior. The results of this research can be useful in the development and description of new models of compact structures.
Persistent homology analysis of a generalized Aubry-André-Harper model ; Observing critical phases in lattice models is challenging due to the need to analyze finite-time or finite-size scaling of observables. We study how the computational topology technique of persistent homology can be used to characterize phases of a generalized Aubry-André-Harper model. The persistent entropy and mean squared lifetime of features obtained using persistent homology behave similarly to conventional measures (Shannon entropy and inverse participation ratio) and can distinguish localized, extended, and critical phases. However, we find that the persistent entropy also clearly distinguishes ordered from disordered regimes of the model. The persistent homology approach can be applied to both the energy eigenstates and the wavepacket propagation dynamics.
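The persistent entropy used above has a standard definition: the Shannon entropy of the normalized lifetimes of the bars in a persistence barcode. A minimal sketch with made-up barcodes (the paper computes these from eigenstates and wavepacket dynamics, not from toy intervals):

```python
import math

def persistent_entropy(barcode):
    """Persistent entropy of a barcode: Shannon entropy of the normalized
    bar lifetimes l_i / L, where L is the total lifetime."""
    lifetimes = [death - birth for birth, death in barcode]
    total = sum(lifetimes)
    probs = [l / total for l in lifetimes if l > 0]
    return -sum(p * math.log(p) for p in probs)

# Hypothetical barcodes: equal lifetimes maximize the entropy,
# while one dominant bar gives a much lower value.
uniform = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
skewed = [(0.0, 10.0), (0.0, 0.1), (0.0, 0.1), (0.0, 0.1)]
print(persistent_entropy(uniform))  # → log(4) ≈ 1.386
print(persistent_entropy(skewed) < persistent_entropy(uniform))  # → True
```

This sensitivity of the entropy to how lifetime is distributed across features is what lets it track localization and order/disorder in the model.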
Sequence Learning and Consolidation on Loihi using On-chip Plasticity ; In this work we develop a model of predictive learning on neuromorphic hardware. Our model uses the on-chip plasticity capabilities of the Loihi chip to remember observed sequences of events and use this memory to generate predictions of future events in real time. Given the locality constraints of on-chip plasticity rules, generating predictions without interfering with the ongoing learning process is non-trivial. We address this challenge with a memory consolidation approach inspired by hippocampal replay. Sequence memory is stored in an initial memory module using spike-timing-dependent plasticity. Later, during an offline period, memories are consolidated into a distinct prediction module. This second module is then able to represent predicted future events without interfering with the activity, and plasticity, in the first module, enabling online comparison between predictions and ground-truth observations. Our model serves as a proof of concept that online predictive learning models can be deployed on neuromorphic hardware with on-chip plasticity.
Few-Shot Musical Source Separation ; Deep learning-based approaches to musical source separation are often limited to the instrument classes the models are trained on and do not generalize to separate unseen instruments. To address this, we propose a few-shot musical source separation paradigm. We condition a generic U-Net source separation model using a few audio examples of the target instrument. We train a few-shot conditioning encoder jointly with the U-Net to encode the audio examples into a conditioning vector that configures the U-Net via feature-wise linear modulation (FiLM). We evaluate the trained models on real musical recordings in the MUSDB18 and MedleyDB datasets. We show that our proposed few-shot conditioning paradigm outperforms the baseline one-hot instrument-class conditioned model for both seen and unseen instruments. To extend the scope of our approach to a wider variety of real-world scenarios, we also experiment with different conditioning example characteristics, including examples from different recordings, with multiple sources, or negative conditioning examples.
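The feature-wise linear modulation (FiLM) mechanism mentioned in the abstract above is simple enough to sketch: a conditioning vector is split into per-channel scales (gamma) and shifts (beta) that modulate an intermediate feature map. The sketch below is a minimal, hypothetical illustration in plain Python, not the paper's actual implementation.

```python
# Minimal FiLM sketch: modulate each channel c of a feature map as
# gamma[c] * x + beta[c], with (gamma, beta) taken from a conditioning
# vector. Shapes and values here are illustrative only.

def film(features, cond):
    """features: list of channels, each a list of floats.
    cond: conditioning vector of length 2 * n_channels (gammas, then betas)."""
    n = len(features)
    gamma, beta = cond[:n], cond[n:]
    return [[gamma[c] * x + beta[c] for x in channel]
            for c, channel in enumerate(features)]

feats = [[1.0, 2.0], [3.0, 4.0]]   # 2 channels, 2 "time" steps
cond = [2.0, 0.5, 1.0, -1.0]       # gammas then betas
print(film(feats, cond))           # [[3.0, 5.0], [0.5, 1.0]]
```

In the few-shot setting described above, `cond` would be produced by the conditioning encoder from the audio examples rather than given by hand.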
Effective Field Theory Islands from Perturbative and Nonperturbative Four-Graviton Amplitudes ; Theoretical data obtained from physically sensible field- and string-theory models suggest that gravitational Effective Field Theories (EFTs) live on islands that are tiny compared to current general bounds determined from unitarity, causality, crossing symmetry, and good high-energy behavior. In this work, we present explicit perturbative and nonperturbative 2-to-2 graviton scattering amplitudes and their associated low-energy expansion in spacetime dimensions D ≥ 4 to support this notion. Our new results include a first nonperturbative example consisting of a D = 4, N = 1 supersymmetric field theory that is coupled weakly to gravity. We show that this nonperturbative model lies on the same islands identified using four-dimensional perturbative models based on string theory and minimally coupled matter circulating in a loop. Furthermore, we generalize the previous four-dimensional perturbative models based on string theory and minimally coupled massive spin-0 and spin-1 states circulating in the loop to D dimensions. Remarkably, we again find that the low-energy EFT coefficients lie on small islands. These results offer a useful guide towards constraining possible extensions of Einstein gravity.
OCR Synthetic Benchmark Dataset for Indic Languages ; We present the largest publicly available synthetic OCR benchmark dataset for Indic languages. The collection contains a total of 90k images and their ground truth for 23 Indic languages. OCR model validation in Indic languages requires a good amount of diverse data in order to create a robust and reliable model. Generating such a huge amount of data would otherwise be difficult, but with synthetic data it becomes far easier. This can be of great importance to fields like Computer Vision and Image Processing, where once an initial synthetic dataset is developed, model creation becomes easier. Generating synthetic data also comes with the flexibility to adjust its nature and environment as and when required in order to improve the performance of the model. Accurate labels for real data are often quite expensive to obtain, whereas high labeling accuracy comes essentially for free with synthetic data.
A Highly Adaptive Acoustic Model for Accurate Multi-Dialect Speech Recognition ; Despite the success of deep learning in speech recognition, multi-dialect speech recognition remains a difficult problem. Although dialect-specific acoustic models are known to perform well in general, they are not easy to maintain when dialect-specific data is scarce and the number of dialects for each language is large. Therefore, a single unified acoustic model (AM) that generalizes well for many dialects has been in demand. In this paper, we propose a novel acoustic modeling technique for accurate multi-dialect speech recognition with a single AM. Our proposed AM is dynamically adapted based on both dialect information and its internal representation, which results in a highly adaptive AM for handling multiple dialects simultaneously. We also propose a simple but effective training method to deal with unseen dialects. The experimental results on large-scale speech datasets show that the proposed AM outperforms all the previous ones, reducing word error rates (WERs) by 8.11% relative compared to a single all-dialects AM and by 7.31% relative compared to dialect-specific AMs.
Particle dispersion in the classical vector dark matter background ; Interactions with a background medium in general modify the dispersion relation and canonical normalization of propagating particles. This can have important phenomenological consequences when considering light dark matter coupling to quarks and leptons. In this paper, we address this issue in a vector dark matter background with either randomly distributed polarizations or a polarization fixed along a single direction. Observations associated with particle dispersion can constrain new light Abelian gauge boson models. Considering the solar neutrino transition and the electron mass measurement, stringent bounds can be put on the gauged L_mu - L_tau model and the dark photon model. Moreover, the classical vector field turns out to induce drastic changes in the particle normalization, which rule out a significant parameter region of the generic vector dark matter model.
ProQA: Structural Prompt-based Pretraining for Unified Question Answering ; Question Answering (QA) is a long-standing challenge in natural language processing. Existing QA works mostly focus on specific question types, knowledge domains, or reasoning skills. This specialization hinders systems from modeling commonalities between tasks and from generalizing to wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks through a single model. ProQA takes a unified structural prompt as the bridge and improves the QA-centric ability by structural prompt-based pretraining. Through a structurally designed prompt-based input schema, ProQA concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task. Furthermore, ProQA is pretrained on a large-scale synthesized corpus in the structural prompt format, which endows the model with the commonly required QA ability. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance in full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. ProQA also exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.
Nested Zero-Inflated Generalized Poisson Regression for the FIFA World Cup 2022 ; This article is devoted to forecasting the FIFA World Cup 2022 via nested zero-inflated generalized Poisson regression. Our regression model incorporates the Elo points of the participating teams, the location of the matches, and team-specific skills in attack and defense as covariates. The proposed model allows predictions in terms of probabilities in order to quantify the chances for each team to reach a certain stage of the tournament. We use Monte Carlo simulations to estimate the outcome of each single match of the tournament, from which we are able to simulate the whole tournament itself. The model is fitted on all football games of the participating teams since 2016, weighted by date and importance. Validation with previous tournaments and a comparison with other Poisson models are given.
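The Monte Carlo step described above can be sketched in a few lines: goals per team are drawn from a zero-inflated Poisson distribution and the match is replayed many times to estimate outcome probabilities. This is a hedged illustration of the general idea only; the rates and zero-inflation weight below are invented, not fitted values from the paper, and the paper's model is the richer nested, generalized variant.

```python
# Monte Carlo match simulation with zero-inflated Poisson goal counts.
import math
import random

def zip_sample(lam, pi_zero, rng):
    """Zero-inflated Poisson: with prob. pi_zero return 0, else Poisson(lam)."""
    if rng.random() < pi_zero:
        return 0
    # inverse-CDF Poisson sampling
    u, k, p = rng.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

def match_probs(lam_a, lam_b, pi_zero=0.1, n=20000, seed=0):
    """Estimate (win, draw, loss) probabilities for team A via simulation."""
    rng = random.Random(seed)
    wins_a = draws = 0
    for _ in range(n):
        ga = zip_sample(lam_a, pi_zero, rng)
        gb = zip_sample(lam_b, pi_zero, rng)
        wins_a += ga > gb
        draws += ga == gb
    return wins_a / n, draws / n, 1 - (wins_a + draws) / n

p_win, p_draw, p_loss = match_probs(1.8, 1.1)  # illustrative scoring rates
print(p_win, p_draw, p_loss)
```

Repeating this draw for every fixture, and feeding winners forward through the bracket, yields the tournament-level probabilities the abstract refers to.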
A Dataset and BERT-based Models for Targeted Sentiment Analysis on Turkish Texts ; Targeted Sentiment Analysis aims to extract sentiment towards a particular target from a given text. It is a field that is attracting attention due to the increasing accessibility of the Internet, which leads people to generate an enormous amount of data. Sentiment analysis, which in general requires annotated data for training, is a well-researched area for widely studied languages such as English. For low-resource languages such as Turkish, there is a lack of such annotated data. We present an annotated Turkish dataset suitable for targeted sentiment analysis. We also propose BERT-based models with different architectures to accomplish the task of targeted sentiment analysis. The results demonstrate that the proposed models outperform traditional sentiment analysis models on the targeted sentiment analysis task.
Generalized Modeling and Fundamental Limits for Multiple-Access Integrated Sensing and Communication Systems ; In this paper, we propose a generalized state-dependent channel model and present fundamental limits for multiple-access integrated sensing and communication (ISAC) systems. The proposed model extends the latest studies by Kobayashi et al. and Ahmadipour et al. by explicitly accounting for more practical scenarios with correlated sensing and channel states and imperfect channel state information at the receiver (CSIR). For the considered model, we devise an achievable scheme that combines message cooperation and joint compression of the past transmitted codeword and echo signals (a form of strictly causal feedback) via distributed Wyner-Ziv coding at each user, to realize cooperative communication and sensing simultaneously. The corresponding achievable rate-distortion region is derived, and a numerical example is constructed to illustrate the potential gain of the proposed scheme. It is found that the compressed information sent is not only useful for further enhancing communication, particularly in the case without CSIR, but also helpful in improving the sensing performance of the transmitters.
Notes on Spinors and Polyforms II: Quaternions and Octonions ; Pauli matrices are 2x2 trace-free matrices with a real diagonal and complex-conjugate off-diagonal entries. They generate the Clifford algebra Cl(3). They can be generalised by replacing the off-diagonal complex number by one taking values in either the quaternions or octonions, or their split versions. These quaternionic and octonionic generalisations generate well-known models of Cl(5) and Cl(9) respectively. The main aim of the paper is to explicitly relate these models to the models arising via the creation-annihilation operator construction. We describe in detail the models related to quaternions and octonions, as well as to the split quaternions and octonions. In particular, we record the description of the possible types of Weyl spinors of Spin(4,4), which does not seem to have appeared in the literature.
ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation ; We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained on a large corpus of high-quality and diverse Vietnamese texts. We benchmark ViT5 on two downstream text generation tasks, Abstractive Text Summarization and Named Entity Recognition. Although Abstractive Text Summarization has been widely studied for the English language thanks to its rich and large source of data, there has been minimal research into the same task in Vietnamese, a much lower-resource language. In this work, we perform exhaustive experiments on both Vietnamese Abstractive Summarization and Named Entity Recognition, validating the performance of ViT5 against many other pretrained Transformer-based encoder-decoder models. Our experiments show that ViT5 significantly outperforms existing models and achieves state-of-the-art results on Vietnamese Text Summarization. On the task of Named Entity Recognition, ViT5 is competitive against previous best results from pretrained encoder-based Transformer models. Further analysis shows the importance of context length during self-supervised pretraining for downstream performance across different settings.
The Effectiveness of Temporal Dependency in Deepfake Video Detection ; Deepfakes are a form of synthetic image generation used to create fake videos of individuals for malicious purposes. The resulting videos may be used to spread misinformation, reduce trust in media, or as a form of blackmail. These threats necessitate automated methods of deepfake video detection. This paper investigates whether temporal information can improve the deepfake detection performance of deep learning models. To investigate this, we propose a framework that classifies new and existing approaches by their defining characteristics: the type of feature extraction (automatic or manual) and the temporal relationship between frames (dependent or independent). We apply this framework to investigate the effect of temporal dependency on a model's deepfake detection performance. We find that temporal dependency produces a statistically significant (p < 0.05) increase in performance in classifying real images for the model using automatic feature selection, demonstrating that spatio-temporal information can increase the performance of deepfake video detection models.
Packet Switching in Quantum Networks: A Path to the Quantum Internet ; Large-scale quantum networks with thousands of nodes require scalable network protocols and physical hardware to realize. In this work, we introduce packet switching as a new paradigm for quantum data transmission in both future and near-term quantum networks. We propose a classical-quantum data frame structure and explore methods of frame generation and processing. Further, we present conceptual designs for a quantum reconfigurable optical add-drop multiplexer to realize the proposed transmission scheme. Packet switching allows for a universal design for a next-generation Internet where classical and quantum data share the same network protocols and infrastructure. In this new quantum networking paradigm, entanglement distribution, as with quantum key distribution, is an application built on top of the quantum network rather than a network designed especially for those purposes. To analyze the network model, we simulate the feasibility of quantum packet switching for some preliminary models of quantum key and entanglement distribution. Finally, we discuss how our model can be integrated with other network models toward a realization of a quantum Internet.
Tree-level Majorana Neutrino Mass from a Type-1 times Type-2 Seesaw Mechanism with Dark Matter ; We propose a type of hybrid seesaw model that combines the Type-1 and Type-2 seesaw mechanisms in a multiplicative way to generate tree-level Majorana neutrino masses and provide a dark matter candidate. The model extends the Standard Model by an extra gauge symmetry U(1)_D and a hidden sector consisting of chiral fermions and additional scalar fields. After spontaneous symmetry breaking, light neutrino masses are generated not only by the exchange of the new heavy fermions, as in a Type-1 seesaw, but also by coupling to the naturally small induced vacuum expectation value of a new heavy scalar, as in a Type-2 seesaw. An unbroken residue of U(1)_D protects the lightest Dirac fermion, required by anomaly cancellation in the hidden sector, from decaying, therefore giving rise to a dark matter candidate. Due to the strong seesaw suppression from our hybridization, the new physics scale can be as low as a TeV in this model, and discovering a signal in LHC data is possible in the near future.
Topology dependence of propagation mechanisms in the production network ; The topology of production networks determines the propagation mechanisms of local shocks and thus the comovement of industries. As a result, we need a more precisely defined production network to model economic growth accurately. In this study, we analyse Leontief's input-output model from a network theory perspective, aiming to construct a production network in such a way that it allows the most accurate modelling of the propagation mechanisms of the changes that generate industry growth. We do this by revisiting a prevalent threshold in the literature that determines industry-industry interdependence. Our hypothesis is that changing the threshold changes the topological structure of the network and the core industries to a large extent. This is significant, because if the production network topology is not precisely defined, the resulting internal propagation mechanisms will be distorted, and thus industry growth modelling will not be accurate. We test our hypothesis by examining the network topology and centrality metrics under different thresholds on a network derived from US input-output accounts data for 2007 and 2012.
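The thresholding step discussed above is easy to sketch: an input-output coefficient matrix is turned into a directed production network by keeping only interdependencies above a cutoff, and raising the cutoff changes the topology. The 3x3 matrix and the cutoff values below are illustrative toys, not the US input-output data the study uses.

```python
# Build a directed production network from an input-output coefficient
# matrix by thresholding the pairwise coefficients.

def build_network(io_coeffs, threshold):
    """Return directed edges (supplier -> user) with coefficient > threshold."""
    return {(i, j)
            for i, row in enumerate(io_coeffs)
            for j, a in enumerate(row)
            if i != j and a > threshold}

A = [[0.00, 0.12, 0.01],
     [0.30, 0.00, 0.05],
     [0.02, 0.08, 0.00]]

print(sorted(build_network(A, 0.05)))  # [(0, 1), (1, 0), (2, 1)]
print(sorted(build_network(A, 0.10)))  # [(0, 1), (1, 0)] -- an edge disappears
```

Even in this toy, moving the threshold from 0.05 to 0.10 removes an edge and thereby changes which industries look central, which is the effect the study quantifies.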
Modeling Exemplification in Long-form Question Answering via Retrieval ; Exemplification is a process by which writers explain or clarify a concept by providing an example. While common in all forms of writing, exemplification is particularly useful in the task of long-form question answering (LFQA), where a complicated answer can be made more understandable through simple examples. In this paper, we provide the first computational study of exemplification in QA, performing a fine-grained annotation of different types of examples (e.g., hypotheticals, anecdotes) in three corpora. We show that not only do state-of-the-art LFQA models struggle to generate relevant examples, but also that standard evaluation metrics such as ROUGE are insufficient to judge exemplification quality. We propose to treat exemplification as a retrieval problem in which a partially written answer is used to query a large set of human-written examples extracted from a corpus. Our approach allows a reliable ranking-type automatic metric that correlates well with human evaluation. A human evaluation shows that our model's retrieved examples are more relevant than examples generated from a state-of-the-art LFQA model.
Negative Zero-Point-Energy Parameter in the Meyer-Miller Mapping Model for Nonadiabatic Dynamics ; The celebrated Meyer-Miller mapping model has been a useful approach for generating practical trajectory-based nonadiabatic dynamics methods. It is generally assumed that the zero-point-energy (ZPE) parameter is positive. The constraint implied in the conventional Meyer-Miller mapping Hamiltonian for an F-electronic-state system actually requires only that the ZPE parameter gamma be larger than -1/F for each electronic degree of freedom. Both negative and positive values are thus possible for this parameter. We first establish a rigorous formulation to construct exact mapping models in Cartesian phase space when the constraint is applied. When nuclear dynamics is approximated by the linearized semiclassical initial value representation, a negative ZPE parameter can lead to reasonably good performance in describing dynamical behavior in typical spin-boson models for condensed-phase two-state systems, even at challenging zero temperature.
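For readers unfamiliar with the constraint mentioned above, a hedged sketch in standard Meyer-Miller notation follows; the precise conventions (factors and signs) may differ from the paper's.

```latex
% Meyer--Miller mapping Hamiltonian for F electronic states with
% ZPE parameter \gamma (conventions vary between papers):
H_{\mathrm{MM}}(\mathbf{R},\mathbf{P},\mathbf{x},\mathbf{p})
  = \frac{\mathbf{P}^2}{2M}
  + \sum_{n,m=1}^{F} \frac{1}{2}\left(x_n x_m + p_n p_m
      - 2\gamma\,\delta_{nm}\right) H_{nm}(\mathbf{R})
% The implied population constraint
%   \sum_{n=1}^{F} \tfrac{1}{2}\left(x_n^2 + p_n^2\right) = 1 + F\gamma
% only demands 1 + F\gamma > 0, i.e. \gamma > -1/F, so negative as well
% as positive values of the ZPE parameter are admissible.
```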
EdiT5: Semi-Autoregressive Text-Editing with T5 Warm-Start ; We present EdiT5, a novel semi-autoregressive text-editing model designed to combine the strengths of non-autoregressive text-editing and autoregressive decoding. EdiT5 is faster during inference than conventional sequence-to-sequence (seq2seq) models, while being capable of modelling flexible input-output transformations. This is achieved by decomposing the generation process into three sub-tasks: (1) tagging to decide on the subset of input tokens to be preserved in the output, (2) reordering to define their order in the output text, and (3) insertion to infill the missing tokens that are not present in the input. The tagging and reordering steps, which are responsible for generating the largest portion of the output, are non-autoregressive, while the insertion step uses an autoregressive decoder. Depending on the task, EdiT5 on average requires significantly fewer autoregressive steps, demonstrating speedups of up to 25x compared to seq2seq models. Quality-wise, EdiT5 is initialized with a pretrained T5 checkpoint, yielding performance comparable to T5 in high-resource settings when evaluated on three NLG tasks (Sentence Fusion, Grammatical Error Correction, and Decontextualization) while clearly outperforming T5 in low-resource settings.
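The three-step decomposition in the abstract above (tagging, reordering, insertion) can be illustrated with a toy editor. This is a hand-rolled sketch, not the EdiT5 model: here the tags, permutation, and insertions are given explicitly rather than predicted.

```python
# Toy text-editing pipeline in the tagging -> reordering -> insertion style.

def apply_edits(tokens, keep, order, insertions):
    """tokens: source tokens.
    keep:       0/1 tag per source token (1 = preserve)       -- tagging
    order:      permutation over the kept tokens               -- reordering
    insertions: {slot_index: [tokens inserted before that slot]} -- insertion
    """
    kept = [t for t, k in zip(tokens, keep) if k]
    reordered = [kept[i] for i in order]
    out = []
    for slot, tok in enumerate(reordered):
        out += insertions.get(slot, [])
        out.append(tok)
    out += insertions.get(len(reordered), [])
    return out

src = ["the", "the", "cat", "sat"]
print(apply_edits(src,
                  keep=[1, 0, 1, 1],         # drop the duplicated "the"
                  order=[0, 1, 2],           # keep original order
                  insertions={3: ["down"]})) # infill a missing final token
# -> ['the', 'cat', 'sat', 'down']
```

The point of the decomposition is visible even in this toy: only the (usually short) inserted material needs autoregressive decoding, while keeping and reordering are decided in one shot.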
Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI ; Evaluating an explanation's faithfulness is desirable for many reasons, such as trust, interpretability, and diagnosing the sources of a model's errors. In this work, which focuses on the NLI task, we introduce the methodology of Faithfulness-through-Counterfactuals, which first generates a counterfactual hypothesis based on the logical predicates expressed in the explanation, and then evaluates whether the model's prediction on the counterfactual is consistent with that expressed logic (i.e., whether the new formula is logically satisfiable). In contrast to existing approaches, this does not require any explanations for training a separate verification model. We first validate the efficacy of automatic counterfactual hypothesis generation, leveraging the few-shot priming paradigm. Next, we show that our proposed metric distinguishes between human-model agreement and disagreement on new counterfactual input. In addition, we conduct a sensitivity analysis to validate that our metric is sensitive to unfaithful explanations.
Low-Resource Style Transfer via Domain Adaptive Meta Learning ; Text style transfer (TST) without parallel data has achieved some practical success. However, most existing unsupervised text style transfer methods suffer from (i) requiring massive amounts of non-parallel data to guide the transfer of different text styles, and (ii) colossal performance degradation when fine-tuning the model in new domains. In this work, we propose DAML-ATM (Domain Adaptive Meta-Learning with Adversarial Transfer Model), which consists of two parts: DAML and ATM. DAML is a domain-adaptive meta-learning approach that learns general knowledge in multiple heterogeneous source domains and is capable of adapting to new unseen domains with a small amount of data. Moreover, we propose a new unsupervised TST approach, the Adversarial Transfer Model (ATM), which combines a sequence-to-sequence pretrained language model with adversarial style training for better content preservation and style transfer. Results on multi-domain datasets demonstrate that our approach generalizes well to unseen low-resource domains, achieving state-of-the-art results against ten strong baselines.
Theory for constructing effective models for electrons in generic bilayer graphene ; We present and discuss in detail practical techniques for formulating effective models to describe the dynamics of low-energy electrons in generic bilayer graphene. Starting from a tight-binding model using the p_z orbitals of carbon atoms as a representation basis set, we reformulate it as a problem of coupling between Bloch states defined in each graphene layer. This approach allows transferring the original problem into the determination of Bloch states in two independent material layers and the coupling rules of such states. We show two schemes to parameterize coupled Bloch state vectors. For bilayer graphene configurations of small twist angle, in which the long-wavelength approximation is applicable, we show that an effective Hamiltonian can be written in the canonical form of a kinetic term defined by the momentum operator and a potential term defined by the position operator. The validity of effective models of different sophistication levels and their potential application in treating various physical aspects are numerically discussed.
Understanding Programmatic Weak Supervision via Source-aware Influence Functions ; Programmatic Weak Supervision (PWS) aggregates the source votes of multiple weak supervision sources into probabilistic training labels, which are in turn used to train an end model. With its increasing popularity, it is critical to have tools for users to understand the influence of each component (e.g., the source votes or training data) in the pipeline and to interpret the end model's behavior. To achieve this, we build on the Influence Function (IF) and propose a source-aware IF, which leverages the generation process of the probabilistic labels to decompose the end model's training objective and then calculate the influence associated with each (data, source, class) tuple. These primitive influence scores can then be used to estimate the influence of individual components of PWS, such as source votes, supervision sources, and training data. On datasets of diverse domains, we demonstrate multiple use cases: (1) interpreting incorrect predictions from multiple angles, which reveals insights for debugging the PWS pipeline, (2) identifying mislabeling of sources with a gain of 9%-37% over baselines, and (3) improving the end model's generalization performance by removing harmful components in the training objective (13%-24% better than ordinary IF).
Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model ; Complex diseases are caused by a multitude of factors that may differ between patients even within the same diagnostic category. A few underlying root causes may nevertheless initiate the development of disease within each patient. We therefore focus on identifying patient-specific root causes of disease, which we equate to the sample-specific predictivity of the exogenous error terms in a structural equation model. We generalize from the linear setting to the heteroscedastic noise model, where Y = m(X) + epsilon * sigma(X), with nonlinear functions m(X) and sigma(X) representing the conditional mean and mean absolute deviation, respectively. This model preserves identifiability but introduces non-trivial challenges that require a customized algorithm, called Generalized Root Causal Inference (GRCI), to extract the error terms correctly. GRCI recovers patient-specific root causes more accurately than existing alternatives.
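The error-term extraction implied by the model Y = m(X) + epsilon * sigma(X) above reduces, once m and sigma are known, to a standardized residual: epsilon = (Y - m(X)) / sigma(X). The sketch below assumes m and sigma are given for illustration; GRCI itself must estimate them, which is where the real difficulty lies.

```python
# Standardized-residual extraction for the heteroscedastic noise model
# Y = m(X) + eps * sigma(X). Toy mean and spread functions are used here.

def extract_errors(xs, ys, m, sigma):
    """Recover the exogenous error terms given conditional mean m and
    conditional mean absolute deviation sigma (assumed sigma(x) > 0)."""
    return [(y - m(x)) / sigma(x) for x, y in zip(xs, ys)]

m = lambda x: 2.0 * x            # toy conditional mean
sigma = lambda x: 1.0 + abs(x)   # toy conditional spread, strictly positive

xs = [0.0, 1.0, -2.0]
eps = [0.5, -1.0, 2.0]           # "true" exogenous errors
ys = [m(x) + e * sigma(x) for x, e in zip(xs, eps)]
print(extract_errors(xs, ys, m, sigma))  # recovers eps exactly
```

With estimated rather than true m and sigma, the recovered errors carry estimation noise, which is why the paper needs a customized algorithm rather than this two-liner.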
Standalone Neural ODEs with Sensitivity Analysis ; This paper presents the Standalone Neural ODE (sNODE), a continuous-depth neural ODE model capable of describing a full deep neural network. It uses a novel nonlinear conjugate gradient (NCG) descent optimization scheme for training, in which the Sobolev gradient can be incorporated to improve the smoothness of model weights. We also present a general formulation of the neural sensitivity problem and show how it is used in the NCG training. The sensitivity analysis provides a reliable measure of uncertainty propagation throughout a network, and can be used to study model robustness and to generate adversarial attacks. Our evaluations demonstrate that our novel formulations lead to increased robustness and performance compared to ResNet models, and that they open up new opportunities for designing and developing machine learning with improved explainability.
KL-Entropy-Regularized RL with a Generative Model is Minimax Optimal ; In this work, we consider and analyze the sample complexity of model-free reinforcement learning with a generative model. Particularly, we analyze mirror descent value iteration (MDVI) by Geist et al. (2019) and Vieillard et al. (2020a), which uses the Kullback-Leibler divergence and entropy regularization in its value and policy updates. Our analysis shows that it is nearly minimax-optimal for finding an epsilon-optimal policy when epsilon is sufficiently small. This is the first theoretical result demonstrating that a simple model-free algorithm without variance reduction can be nearly minimax-optimal under the considered setting.
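The KL- and entropy-regularized policy update behind this family of methods has a well-known closed form, sketched below under stated assumptions: with a KL penalty of coefficient lam toward the previous policy and entropy regularization of coefficient tau, the regularized greedy step gives pi'(a) proportional to pi(a)**(lam/(lam+tau)) * exp(Q(a)/(lam+tau)). The coefficients and Q-values here are illustrative, and this is a generic sketch of the update, not the paper's analysis.

```python
# Closed-form KL+entropy regularized policy update (single state, two actions).
import math

def regularized_policy_update(pi, q, lam, tau):
    """pi: previous policy, q: action values, lam: KL coeff, tau: entropy coeff."""
    w = [p ** (lam / (lam + tau)) * math.exp(qa / (lam + tau))
         for p, qa in zip(pi, q)]
    z = sum(w)
    return [x / z for x in w]

pi = [0.5, 0.5]
q = [1.0, 0.0]
new_pi = regularized_policy_update(pi, q, lam=1.0, tau=1.0)
print(new_pi)  # action 0 gains probability mass, but only partially (smoothed)
```

The KL term keeps the update close to the previous policy (an implicit averaging of past Q-values), which is the mechanism the paper's minimax-optimality analysis exploits.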
Going Beyond One-Hot Encoding in Classification: Can Human Uncertainty Improve Model Performance? ; Technological and computational advances continuously drive forward the broad field of deep learning. In recent years, deriving quantities that describe the uncertainty in the prediction, which naturally accompanies the modeling process, has sparked general interest in the deep learning community. Often neglected in the machine learning setting is the human uncertainty that influences numerous labeling processes. As the core of this work, label uncertainty is explicitly embedded into the training process via distributional labels. We demonstrate the effectiveness of our approach on image classification with a remote sensing dataset that contains multiple label votes by domain experts for each image: the incorporation of label uncertainty helps the model to generalize better to unseen data and increases model performance. Similar to existing calibration methods, the distributional labels lead to better-calibrated probabilities, which in turn yield more certain and trustworthy predictions.
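The distributional-label idea above amounts to computing the cross-entropy against the empirical vote distribution of the annotators instead of a one-hot target. A minimal sketch follows; the vote counts and predicted probabilities are invented for illustration and are not from the paper's remote sensing data.

```python
# Cross-entropy against distributional (soft) labels built from expert votes.
import math

def soft_cross_entropy(pred_probs, vote_counts):
    """Cross-entropy of predicted probabilities against the normalized
    annotator vote distribution (zero-vote classes contribute nothing)."""
    total = sum(vote_counts)
    target = [v / total for v in vote_counts]   # distributional label
    return -sum(t * math.log(p) for t, p in zip(target, pred_probs) if t > 0)

votes = [3, 1, 0]          # 3 experts voted class 0, 1 voted class 1
pred = [0.7, 0.2, 0.1]     # model output
one_hot_loss = -math.log(pred[0])   # loss if we forced a hard majority label
print(soft_cross_entropy(pred, votes), one_hot_loss)
```

Unlike the one-hot loss, the soft target still penalizes the model for assigning too little mass to the minority-vote class, which is how annotator disagreement enters training.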
Attention Flows for General Transformers ; In this paper, we study the computation of how much an input token in a Transformer model influences its prediction. We formalize a method to construct a flow network out of the attention values of encoder-only Transformer models and extend it to general Transformer architectures, including an autoregressive decoder. We show that running a max-flow algorithm on the flow network construction yields Shapley values, which determine the impact of a player in cooperative game theory. By interpreting the input tokens in the flow network as players, we can compute their influence on the total attention flow leading to the decoder's decision. Additionally, we provide a library that computes and visualizes the attention flow of arbitrary Transformer models. We show the usefulness of our implementation on various models trained on natural language processing and reasoning tasks.
VL-BEiT: Generative Vision-Language Pretraining ; We introduce a vision-language foundation model called VL-BEiT, a bidirectional multimodal Transformer learned by generative pretraining. Our minimalist solution conducts masked prediction on both monomodal and multimodal data with a shared Transformer. Specifically, we perform masked vision-language modeling on image-text pairs, masked language modeling on texts, and masked image modeling on images. VL-BEiT is learned from scratch with one unified pretraining task, one shared backbone, and one-stage training. Our method is conceptually simple and empirically effective. Experimental results show that VL-BEiT obtains strong results on various vision-language benchmarks, such as visual question answering, visual reasoning, and image-text retrieval. Moreover, our method learns transferable visual features, achieving competitive performance on image classification and semantic segmentation.
Disentangling Epistemic and Aleatoric Uncertainty in Reinforcement Learning ; Characterizing aleatoric and epistemic uncertainty in the predicted rewards can help in building reliable reinforcement learning (RL) systems. Aleatoric uncertainty results from the irreducible environment stochasticity leading to inherently risky states and actions. Epistemic uncertainty results from the limited information accumulated during learning to make informed decisions. Characterizing aleatoric and epistemic uncertainty can be used to speed up learning in a training environment, improve generalization to similar testing environments, and flag unfamiliar behavior in anomalous testing environments. In this work, we introduce a framework for disentangling aleatoric and epistemic uncertainty in RL. (1) We first define four desiderata that capture the desired behavior for aleatoric and epistemic uncertainty estimation in RL at both training and testing time. (2) We then present four RL models inspired by supervised learning (i.e., Monte Carlo dropout, ensembles, deep kernel learning models, and evidential networks) to instantiate aleatoric and epistemic uncertainty. Finally, (3) we propose a practical evaluation method to evaluate uncertainty estimation in model-free RL based on the detection of out-of-distribution environments and generalization to perturbed environments. We present theoretical and experimental evidence to validate that carefully equipping model-free RL agents with supervised learning uncertainty methods can fulfill our desiderata.
An effective fluid description of scalar-vector-tensor theories under the subhorizon and quasistatic approximations ; We consider scalar-vector-tensor (SVT) theories with second-order equations of motion and tensor propagation speed equal to the speed of light. Under the subhorizon and quasistatic approximations we find analytical formulae for an effective dark energy fluid, i.e., its sound speed, anisotropic stress, energy density and pressure. We take advantage of our general, analytical fluid description and show that it is possible to design SVT cosmological models which are degenerate with Lambda-CDM at the background level while having gravity strength G_eff = G_N at late times as well as non-vanishing dark energy perturbations. We implemented SVT designer models in the widely used Boltzmann solver CLASS, thus making it possible to test SVT models against astrophysical observations. Our effective fluid approach to SVT models reveals nontrivial behaviour in the sound speed and the anisotropic stress, well worth an investigation in light of current discrepancies in cosmological parameters such as H_0 and sigma_8.
A comprehensive continuum theory of structured liquids ; We develop a comprehensive continuum model capable of treating both electrostatic and structural interactions in liquid dielectrics. Starting from a two-order-parameter description in terms of charge density and polarization, we derive a field-theoretic model generalizing previous theories. Our theory explicitly includes electrostatic and structural interactions in the bulk of the liquid and allows for polarization charges within a Drude model. In particular, we develop a detailed description of the boundary conditions, which include the charge regulation mechanism and surface polarization. The general features for solving the saddle-point equations of our model are elucidated, and future applications to predict and validate experimental results are outlined.
Calibrating cardiac electrophysiology models using latent Gaussian processes on atrial manifolds ; Models of electrical excitation and recovery in the heart have become increasingly detailed, but have yet to be used routinely in the clinical setting to guide personalized intervention in patients. One of the main challenges is calibrating models from the limited measurements that can be made in a patient during a standard clinical procedure. In this work, we propose a novel framework for the probabilistic calibration of electrophysiology parameters on the left atrium of the heart using local measurements of cardiac excitability. Parameter fields are represented as Gaussian processes on manifolds and are linked to measurements via surrogate functions that map from local parameter values to measurements. The posterior distribution of parameter fields is then obtained. We show that our method can recover parameter fields used to generate localised synthetic measurements of effective refractory period. Our methodology is applicable to other measurement types collected with clinical protocols, and more generally for calibration where model parameters vary over a manifold.
JNMR: Joint Nonlinear Motion Regression for Video Frame Interpolation ; Video frame interpolation (VFI) aims to generate predictive frames by warping learnable motions from bidirectional historical references. Most existing works utilize a spatiotemporal semantic information extractor to realize motion estimation and interpolation modeling, but they insufficiently consider the mechanistic rationality of the generated intermediate motions. In this paper, we reformulate VFI as a Joint Nonlinear Motion Regression (JNMR) strategy to model the complicated inter-frame motions. Specifically, the motion trajectory between the target frame and the multiple reference frames is regressed by a temporal concatenation of multi-stage quadratic models. A ConvLSTM is adopted to construct this joint distribution of complete motions in the temporal dimension. Moreover, the feature learning network is designed to optimize for the joint regression modeling. A coarse-to-fine synthesis enhancement module is also conducted to learn visual dynamics at different resolutions through repetitive regression and interpolation. Experimental results show the effectiveness and significant improvement of joint motion regression compared with state-of-the-art methods. The code is available at https://github.com/ruhig6/JNMR.
A coupled generalized three-form dark energy model ; A coupled dark energy model is considered, in which dark energy is represented by a generalized three-form field and dark matter by dust. By assuming the functions N and I in the model's Lagrangian to be two power-law functions of the three-form field, we obtain two fixed points of the autonomous system of evolution equations, consisting of an attractor and a tracking saddle point, which can be used to alleviate the coincidence problem. After marginalizing the present three-form field kappa X_0, which cannot be strictly restricted, we confront the model with the latest Type Ia Supernova (SN Ia), Baryon Acoustic Oscillations (BAO) and Cosmic Microwave Background (CMB) radiation observations. With the fitting results Omega_m0 = 0.280 +/- 0.048 and lambda = 0.011 +/- 0.032 at the 2-sigma confidence level, we also find that the best-fitting effective dark energy equation of state (EOS) crosses -1 at a redshift around 0.2.
PAVI: Plate-Amortized Variational Inference ; Given some observed data and a probabilistic generative model, Bayesian inference aims at obtaining the distribution of a model's latent parameters that could have yielded the data. This task is challenging for large population studies where thousands of measurements are performed over a cohort of hundreds of subjects, resulting in a massive latent parameter space. This large cardinality renders off-the-shelf Variational Inference (VI) computationally impractical. In this work, we design structured VI families that can efficiently tackle large population studies. To this end, our main idea is to share the parameterization and learning across the different i.i.d. variables in a generative model, symbolized by the model's plates. We name this concept plate amortization and illustrate the powerful synergies it entails, resulting in large-scale hierarchical variational distributions that are expressive, parsimoniously parameterized, and orders of magnitude faster to train. We illustrate the practical utility of PAVI through a challenging neuroimaging example featuring a million latent parameters, demonstrating a significant step towards scalable and expressive Variational Inference.
Phenomenology of an in-host model of hepatitis C ; This paper carries out an analysis of the global properties of solutions of an in-host model of hepatitis C for general values of its parameters. A previously unknown stable steady state on the boundary of the positive orthant is exhibited. It is proved that the model exhibits Hopf bifurcations and hence periodic solutions. A general parametrization of positive steady states is given, and it is determined when the number of steady states is odd or even, according to the value of a certain basic reproductive ratio. This implies, in particular, that when this reproductive ratio is greater than one there always exists at least one positive steady state. A positive steady state which bifurcates from an infection-free state when the reproductive ratio passes through one is always stable, i.e. no backward bifurcation occurs in this model.
Security of Machine Learning-Based Anomaly Detection in Cyber-Physical Systems ; In this study, we focus on the impact of adversarial attacks on deep learning-based anomaly detection in CPS networks and implement a mitigation approach against the attack by retraining models using adversarial samples. We use the Bot-IoT and Modbus IoT datasets to represent the two CPS networks. We train deep learning models and generate adversarial samples using these datasets. These datasets are captured from IoT and Industrial IoT (IIoT) networks; they both provide samples of normal and attack activities. The deep learning model trained with these datasets showed high accuracy in detecting attacks. An Artificial Neural Network (ANN) is adopted with one input layer, four intermediate layers, and one output layer. The output layer has two nodes representing the binary classification results. To generate adversarial samples for the experiment, we used the 'fast_gradient_method' function from the CleverHans library. The experimental results demonstrate the influence of FGSM adversarial samples on the accuracy of the predictions and prove the effectiveness of using the retrained model to defend against adversarial attacks.
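The FGSM attack referenced above admits a compact closed form for simple models. As a hedged illustration (not the study's ANN or the CleverHans implementation), the sketch below perturbs the input of a logistic-regression classifier by eps in the direction of the sign of the cross-entropy loss gradient, which for this model is (p - y) * w:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_logistic(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model.

    For cross-entropy loss, the gradient with respect to the input is
    (p - y) * w, so the attack shifts each feature by eps in the
    direction of that gradient's sign.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

The same sign-of-gradient step underlies the attack on deep networks; there the gradient is obtained by backpropagation rather than in closed form.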
Evaluating Graph Generative Models with Contrastively Learned Features ; A wide range of graph generative models have been proposed, necessitating effective methods to evaluate their quality. So far, most techniques use either traditional metrics based on subgraph counting, or the representations of randomly initialized Graph Neural Networks (GNNs). We propose using representations from contrastively trained GNNs, rather than random GNNs, and show that this gives more reliable evaluation metrics. Neither traditional approaches nor GNN-based approaches dominate the other; however, we give examples of graphs that each approach is unable to distinguish. We demonstrate that Graph Substructure Networks (GSNs), which in a way combine both approaches, are better at distinguishing the distances between graph datasets.
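A standard way to turn learned representations into an evaluation metric is the maximum mean discrepancy (MMD) between the feature sets of generated and reference graphs. The sketch below is a generic MMD over feature vectors (an illustration of the metric family, not the paper's specific GNN features or kernel choices):

```python
import math

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two feature sets
    under an RBF kernel: MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(a, b):
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    def avg(A, B):
        return sum(k(a, b) for a in A for b in B) / (len(A) * len(B))
    return avg(X, X) + avg(Y, Y) - 2.0 * avg(X, Y)
```

Identical feature sets give MMD^2 of zero, and well-separated sets give large values, so the score tracks how closely generated graphs match the reference distribution in feature space.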
Global Convergence of Federated Learning for Mixed Regression ; This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure. We contextualize this problem in mixed regression, where each client has limited local data generated from one of k unknown regression models. We design an algorithm that achieves global convergence from any initialization, and works even when local data volume is highly unbalanced: there could exist clients that contain only O(1) data points. Our algorithm first runs moment descent on a few anchor clients (each with Ω̃(k) data points) to obtain coarse model estimates. Then each client alternately estimates its cluster labels and refines the model estimates based on FedAvg or FedProx. A key innovation in our analysis is a uniform estimate on the clustering errors, which we prove by bounding the VC dimension of general polynomial concept classes based on the theory of algebraic geometry.
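The alternating structure described above (each client estimates its cluster label, then models are refined by data-weighted averaging) can be sketched for one-dimensional regression through the origin. This toy version is an illustration only; it omits moment descent, FedProx, and all the analysis, and every name in it is hypothetical:

```python
def local_slope(points):
    """Closed-form least squares through the origin: a = sum(x*y)/sum(x*x)."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, y in points)
    return sxy / sxx

def federated_mixed_regression(clients, slopes, rounds=5):
    """Alternate cluster-label estimation and FedAvg-style model refinement."""
    for _ in range(rounds):
        buckets = [[] for _ in slopes]
        for pts in clients:  # each client picks the model with smaller loss
            errs = [sum((y - a * x) ** 2 for x, y in pts) for a in slopes]
            buckets[errs.index(min(errs))].append(pts)
        for j, grp in enumerate(buckets):
            if grp:  # data-weighted average of local estimates (FedAvg step)
                n = sum(len(p) for p in grp)
                slopes[j] = sum(len(p) * local_slope(p) for p in grp) / n
    return slopes
```

Starting from coarse initial slopes, a few rounds of this loop recover the two underlying models on clean data, mirroring the refine-after-coarse-estimation phase of the algorithm.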
A Deep Generative Model of Neonatal Cortical Surface Development ; The neonatal cortical surface is known to be affected by preterm birth, and the subsequent changes to cortical organisation have been associated with poorer neurodevelopmental outcomes. Deep generative models have the potential to lead to clinically interpretable models of disease, but developing these on the cortical surface is challenging, since established techniques for learning convolutional filters are inappropriate on non-flat topologies. To close this gap, we implement a surface-based CycleGAN using mixture model CNNs (MoNet) to translate sphericalised neonatal cortical surface features (curvature and T1w/T2w cortical myelin) between different stages of cortical maturity. Results show our method is able to reliably predict changes in individual patterns of cortical organisation at later stages of gestation, validated by comparison to longitudinal data, and to translate appearance between preterm and term gestation (> 37 weeks gestation), validated through comparison with a trained term/preterm classifier. Simulated differences in cortical maturation are consistent with observations in the literature.
Mass matrices with CP phase in modular flavor symmetry ; We study the CP violation and the CP phase of quark mass matrices in modular flavor symmetric models. The CP symmetry remains at tau = e^{2 pi i/3} by a combination with the T-symmetry of the modular symmetry. However, the T-symmetry breaking may lead to CP violation at the fixed point tau = e^{2 pi i/3}. We study such a possibility in magnetized orbifold models as examples of modular flavor symmetric models. These models, in general, have more than one candidate for Higgs modes, while generic string compactifications also lead to several Higgs modes. These Higgs modes have different behaviors under the T-transformation. The light Higgs mode can be a linear combination of those modes so as to lead to realistic quark mass matrices. The CP phase of the mass matrix does not appear in a certain case, which is determined by the T-transformation behavior. Deviation from it is important to realize the physical CP phase. We discuss an example leading to a non-vanishing CP phase at the fixed point tau = e^{2 pi i/3}.
Bayesian nonconjugate regression via variational belief updating ; We present an efficient semiparametric variational method to approximate the Gibbs posterior distribution of Bayesian regression models, which predict the data through a linear combination of the available covariates. Remarkable cases are generalized linear mixed models, support vector machines, quantile and expectile regression. The variational optimization algorithm we propose only involves the calculation of univariate numerical integrals when no analytic solutions are available. Neither differentiability, nor conjugacy, nor elaborate data-augmentation strategies are required. Several generalizations of the proposed approach are discussed in order to account for additive models, shrinkage priors, and dynamic and spatial models, providing a unifying framework for statistical learning that covers a wide range of applications. The properties of our semiparametric variational approximation are then assessed through a theoretical analysis and an extensive simulation study, in which we compare our proposal with Markov chain Monte Carlo, conjugate mean field variational Bayes and Laplace approximation in terms of signal reconstruction, posterior approximation accuracy and execution time. A real data example is then presented through a probabilistic load forecasting application on the US power load consumption data.
A Multivariate Point Process Model for Simultaneously Recorded Neural Spike Trains ; The current state-of-the-art in neurophysiological data collection allows for simultaneous recording of tens to hundreds of neurons, for which point processes are an appropriate statistical modelling framework. However, existing point process models lack multivariate generalizations which are both flexible and computationally tractable. This paper introduces a multivariate generalization of the Skellam process with resetting (SPR), a point process tailored to model individual neural spike trains. The multivariate SPR (MSPR) is biologically justified as it mimics the process of neural integration. Its flexible dependence structure and a fast parameter estimation method make it well-suited for the analysis of simultaneously recorded spike trains from multiple neurons. The strengths and weaknesses of the MSPR are demonstrated through simulation and analysis of experimental data.
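Since a Skellam variate is the difference of two independent Poisson counts, the basic building block of the SPR can be sketched in a few lines. This is a toy illustration of the distribution only, not the paper's resetting mechanism or its estimation method:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm for a single Poisson draw."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def skellam_counts(lam_up, lam_down, n, seed=0):
    """Skellam variates: difference of two independent Poisson counts,
    loosely mimicking excitatory minus inhibitory input in neural integration."""
    rng = random.Random(seed)
    return [poisson(lam_up, rng) - poisson(lam_down, rng) for _ in range(n)]
```

The Skellam distribution has mean lam_up - lam_down and variance lam_up + lam_down, which is why it can model integrated input that drifts toward a spiking threshold.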
Low Resource Pipeline for Spoken Language Understanding via Weak Supervision ; In Weak Supervised Learning (WSL), a model is trained over noisy labels obtained from semantic rules and task-specific pretrained models. Rules offer limited generalization over tasks and require significant manual effort, while pretrained models are available only for limited tasks. In this work, we propose to utilize prompt-based methods as weak sources to obtain the noisy labels on unannotated data. We show that task-agnostic prompts are generalizable and can be used to obtain noisy labels for different Spoken Language Understanding (SLU) tasks such as sentiment classification, disfluency detection and emotion classification. These prompts can additionally be updated to add task-specific contexts, thus providing flexibility to design task-specific prompts. We demonstrate that prompt-based methods generate reliable labels for the above SLU tasks and thus can be used as a universal weak source to train a weakly supervised model (WSM) in absence of labeled data. Our proposed WSL pipeline, trained over prompt-based weak sources, outperforms other competitive low-resource benchmarks on zero- and few-shot learning by more than 4 points on Macro-F1 on all of the three benchmark SLU datasets. The proposed method also outperforms a conventional rule-based WSL pipeline by more than 5 points on Macro-F1.
Evaluation of Semantic Answer Similarity Metrics ; There are several issues with the existing general machine translation and natural language generation evaluation metrics, and question-answering (QA) systems are no different in that context. To build robust QA systems, we need the ability to have equivalently robust evaluation systems to verify whether model predictions to questions are similar to ground-truth annotations. The ability to compare similarity based on semantics, as opposed to pure string overlap, is important to compare models fairly and to indicate more realistic acceptance criteria in real-life applications. We build upon the first paper, to our knowledge, that uses transformer-based model metrics to assess semantic answer similarity, and achieve higher correlations to human judgement in the case of no lexical overlap. We propose cross-encoder augmented bi-encoder and BERTScore models for semantic answer similarity, trained on a new dataset consisting of name pairs of US-American public figures. To the best of our knowledge, we provide the first dataset of coreferent name string pairs along with their similarities, which can be used for training.
Reduced Optimal Power Flow Using Graph Neural Network ; OPF problems are formulated and solved for power system operations, especially for determining generation dispatch points in real time. For large and complex power system networks with large numbers of variables and constraints, finding the optimal solution for real-time OPF in a timely manner requires a massive amount of computing power. This paper presents a new method to reduce the number of constraints in the original OPF problem using a graph neural network (GNN). GNNs are an innovative class of machine learning models that utilize features from nodes, edges, and network topology to maximize performance. In this paper, we propose a GNN model to predict which lines would be heavily loaded or congested given load profiles and generation capacities. Only these critical lines are monitored in the OPF problem, creating a reduced OPF (ROPF) problem. Significant savings in computing time are expected from the proposed ROPF model. A comprehensive analysis of the predictions from the GNN model is also provided. It is concluded that the application of GNNs to ROPF is able to reduce computing time while retaining solution quality.
Adaptive Multi-view Rule Discovery for Weakly-Supervised Compatible Products Prediction ; On e-commerce platforms, predicting whether two products are compatible with each other is an important functionality for achieving trustworthy product recommendation and search experiences for consumers. However, accurately predicting product compatibility is difficult due to heterogeneous product data and the lack of manually curated training data. We study the problem of discovering effective labeling rules that can enable weakly-supervised product compatibility prediction. We develop AMRule, a multi-view rule discovery framework that can (1) adaptively and iteratively discover novel rules that complement the current weakly-supervised model to improve compatibility prediction; and (2) discover interpretable rules from both structured attribute tables and unstructured product descriptions. AMRule adaptively discovers labeling rules from large-error instances via a boosting-style strategy; the high-quality rules can remedy the current model's weak spots and refine the model iteratively. For rule discovery from structured product attributes, we generate composable high-order rules from decision trees; for rule discovery from unstructured product descriptions, we generate prompt-based rules from a pretrained language model. Experiments on 4 real-world datasets show that AMRule outperforms the baselines by 5.98% on average and improves rule quality and rule proposal efficiency.
Scalable Simulation of Quantum Measurement Process with Quantum Computers ; Recent developments in quantum information sciences and technologies, especially the building of programmable quantum computers, provide us new opportunities to study fundamental aspects of quantum mechanics. We propose qubit models to emulate the quantum measurement process, in which the quantum information of a qubit is mapped to a collection of qubits acting as the measurement device. One model is motivated by single-photon detection and the other by spin measurement. Both models are scalable to generate Schrodinger-cat-like states, and their corresponding quantum circuits are shown explicitly. Large-scale simulations could be realized in near-term quantum computers, while classical computers cannot perform the same task efficiently. Due to the scalability of the models, such simulations can help explore the quantum-to-classical boundary, if it exists, in the quantum measurement problem. Besides, our protocol to generate cat states may have important applications in quantum computing and metrology.
TE2Rules: Explaining Tree Ensembles using Rules ; Tree ensemble (TE) models like gradient boosted trees often provide higher prediction performance than single decision trees. However, TE models generally lack transparency and interpretability, as humans have difficulty understanding their decision logic. This paper presents a novel approach to convert a TE trained for a binary classification task into a rule list (RL) that closely approximates the TE and is interpretable for a human. This RL can effectively explain the model even on the minority class predicted by the model. Experiments on benchmark datasets demonstrate that (i) predictions from the RL generated by TE2Rules have higher fidelity with respect to the original TE compared to state-of-the-art methods, (ii) the runtime of TE2Rules is comparable to that of some other similar baselines, and (iii) the runtime of the TE2Rules algorithm can be traded off at the cost of a slightly lower fidelity.
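To illustrate the tree-to-rule-list idea in miniature (a sketch only, not the TE2Rules algorithm, which operates on full ensembles and trades runtime against fidelity), the snippet below flattens a single decision tree, represented as nested dicts in a hypothetical encoding, into an ordered rule list whose predictions match the tree by construction:

```python
def tree_to_rules(node, path=()):
    """Flatten a decision tree (nested dicts) into an ordered rule list.

    Internal node: {"feature": i, "threshold": t, "left": ..., "right": ...}
    with left meaning feature <= threshold. Leaf: {"label": 0 or 1}.
    Returns [(conditions, label)], each condition a tuple (i, op, t).
    """
    if "label" in node:
        return [(path, node["label"])]
    i, t = node["feature"], node["threshold"]
    return (tree_to_rules(node["left"], path + ((i, "<=", t),))
            + tree_to_rules(node["right"], path + ((i, ">", t),)))

def apply_rules(rules, x):
    """Return the label of the first rule whose conditions all hold for x."""
    for conds, label in rules:
        if all((x[i] <= t) if op == "<=" else (x[i] > t) for i, op, t in conds):
            return label
```

Because every root-to-leaf path becomes one rule, this simple case has perfect fidelity; the hard part TE2Rules addresses is doing this compactly for an ensemble of many trees.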
Impact of Lorentz violation models on exoplanets dynamics ; Many exoplanets were detected thanks to the radial velocity method, according to which the motion of a binary system around its center of mass can produce a periodical variation of the Doppler effect of the light emitted by the host star. These variations are influenced by both Newtonian and non-Newtonian perturbations to the dominant inverse-square acceleration; accordingly, exoplanetary systems lend themselves to tests of theories of gravity alternative to General Relativity. In this paper, we consider the impact of the Standard Model Extension (a model that can be used to test all possible Lorentz violations) on the perturbation of the radial velocity, and suggest that suitable exoplanet configurations and improvements in detection techniques may contribute to obtaining new constraints on the model parameters.
TimestampSupervised Action Segmentation with Graph Convolutional Networks ; We introduce a novel approach for temporal activity segmentation with timestamp supervision. Our main contribution is a graph convolutional network, which is learned in an endtoend manner to exploit both frame features and connections between neighboring frames to generate dense framewise labels from sparse timestamp labels. The generated dense framewise labels can then be used to train the segmentation model. In addition, we propose a framework for alternating learning of both the segmentation model and the graph convolutional model, which first initializes and then iteratively refines the learned models. Detailed experiments on four public datasets, including 50 Salads, GTEA, Breakfast, and Desktop Assembly, show that our method is superior to the multilayer perceptron baseline, while performing on par with or better than the state of the art in temporal activity segmentation with timestamp supervision.
Stochastic Variational Methods in Generalized Hidden Semi-Markov Models to Characterize Functionality in Random Heteropolymers ; Recent years have seen substantial advances in the development of biofunctional materials using synthetic polymers. The growing problem of elusive sequence-functionality relations for most biomaterials has driven researchers to seek more effective tools and analysis methods. In this study, statistical models are used to study sequence features of the recently reported random heteropolymers (RHP), which transport protons across lipid bilayers selectively and rapidly like natural proton channels. We utilized the probabilistic graphical model framework and developed a generalized hidden semi-Markov model (GHSMM-RHP) to extract the function-determining sequence features, including the transmembrane segments within a chain and the sequence heterogeneity among different chains. We developed stochastic variational methods for efficient inference on parameter estimation and predictions, and empirically studied their computational performance from a comparative perspective on Bayesian (i.e., stochastic variational Bayes) versus frequentist (i.e., stochastic variational expectation-maximization) frameworks that have been studied separately before. The real data results agree well with the laboratory experiments, and suggest GHSMM-RHP's potential in predicting protein-like behavior at the polymer-chain level.
A versatile stochastic dissemination model ; This paper considers a highly general dissemination model that keeps track of the stochastic evolution of the distribution of wealth over a set of agents. There are two types of events: (i) units of wealth externally arrive, and (ii) units of wealth are redistributed among the agents, while throughout Markov modulation is allowed. We derive a system of coupled differential equations describing the joint transient distribution of the agents' wealth values, which translate into linear differential equations when considering the corresponding means and covariances. While our model uses the economic terminology of wealth being distributed over agents, we illustrate through a series of examples that it can be used considerably more broadly. Indeed, it also facilitates the analysis of the spread of opinions over a population (thus generalizing existing opinion dynamics models) and the analysis of the dynamics of a file storage system (thus allowing the assessment of the efficacy of storage policies).
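The two event types above can be sketched as a minimal event-driven simulation, with external arrivals increasing total wealth and redistribution events conserving it. This is a toy version without Markov modulation, and all names and rate choices are illustrative, not the paper's model:

```python
import random

def simulate_dissemination(n_agents, arrival_rate, redist_rate, horizon, seed=0):
    """Toy dissemination dynamics: at each step, either one unit of wealth
    arrives at a uniformly chosen agent, or one existing unit (picked with
    probability proportional to each agent's wealth) moves to a uniformly
    chosen agent. Redistribution conserves the total wealth."""
    rng = random.Random(seed)
    wealth = [0] * n_agents
    total_rate = arrival_rate + redist_rate
    for _ in range(horizon):
        if rng.random() < arrival_rate / total_rate or sum(wealth) == 0:
            wealth[rng.randrange(n_agents)] += 1      # external arrival
        else:
            donors = [i for i, w in enumerate(wealth) for _ in range(w)]
            i = rng.choice(donors)                    # wealth-weighted donor
            wealth[i] -= 1
            wealth[rng.randrange(n_agents)] += 1      # redistribution
    return wealth
```

Tracking the empirical means and covariances of such sample paths is exactly what the paper's linear differential equations describe in closed form.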
Repairing Neural Networks by Leaving the Right Past Behind ; Prediction failures of machine learning models often arise from deficiencies in training data, such as incorrect labels, outliers, and selection biases. However, such data points that are responsible for a given failure mode are generally not known a priori, let alone a mechanism for repairing the failure. This work draws on the Bayesian view of continual learning, and develops a generic framework for both identifying training examples that have given rise to the target failure and fixing the model through erasing information about them. This framework naturally allows leveraging recent advances in continual learning for this new problem of model repairment, while subsuming existing work on influence functions and data deletion as specific instances. Experimentally, the proposed approach outperforms the baselines both at identifying detrimental training data and at fixing model failures in a generalisable manner.
Learning Near-global-optimal Strategies for Hybrid Non-convex Model Predictive Control of Single Rigid Body Locomotion ; Convex model predictive controllers (MPCs) with a single rigid body model have demonstrated strong performance on real legged robots. However, convex MPCs are limited by their assumptions, such as a small rotation angle and a predefined gait, limiting the richness of potential solutions. We remove those assumptions and solve the complete mixed-integer non-convex program with the single rigid body model. We first collect datasets of pre-solved problems offline, then learn the problem-solution map to solve this optimization fast for MPC. If warm starts can be found, offline problems can be solved close to global optimality. The proposed controller is tested by generating various gaits and behaviors depending on the initial conditions. Hardware tests demonstrate online gait generation and adaptation running at more than 50 Hz based on sensor feedback.
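In its simplest form, a problem-solution map over a library of pre-solved problems can be caricatured as a nearest-neighbor lookup. The sketch below is an assumption-laden simplification (the paper learns the map; the library entries and names here are hypothetical) that returns the stored solution for the closest problem parameters, to be used as a warm start:

```python
def nearest_warm_start(library, query):
    """Warm-start lookup: return the stored solution whose problem
    parameters are closest (squared Euclidean distance) to the query."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    params, solution = min(library, key=lambda item: dist2(item[0], query))
    return solution
```

The warm start then initializes the mixed-integer non-convex solve, which is what lets the online MPC converge near the global optimum fast enough for real-time control.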
Large Scale Radio Frequency Signal Classification ; Existing datasets used to train deep learning models for narrowband radio frequency (RF) signal classification lack enough diversity in signal types and channel impairments to sufficiently assess model performance in the real world. We introduce the Sig53 dataset consisting of 5 million synthetically generated samples from 53 different signal classes and expertly chosen impairments. We also introduce TorchSig, a signals processing machine learning toolkit that can be used to generate this dataset. TorchSig incorporates data handling principles that are common to the vision domain, and it is meant to serve as an open-source foundation for future signals machine learning research. Initial experiments using the Sig53 dataset are conducted using state-of-the-art (SoTA) convolutional neural networks (ConvNets) and Transformers. These experiments reveal that Transformers outperform ConvNets without the need for additional regularization or a ConvNet teacher, which is contrary to results from the vision domain. Additional experiments demonstrate that TorchSig's domain-specific data augmentations facilitate model training, which ultimately benefits model performance. Finally, TorchSig supports on-the-fly synthetic data creation at training time, thus enabling massive-scale training sessions with virtually unlimited datasets.
Transition1x: a Dataset for Building Generalizable Reactive Machine Learning Potentials ; Machine learning (ML) models have, in contrast to their usefulness in molecular dynamics studies, had limited success as surrogate potentials for reaction barrier search. This is due to the scarcity of training data in the relevant transition state regions of chemical space. Currently available datasets for training ML models on small molecular systems almost exclusively contain configurations at or near equilibrium. In this work, we present the dataset Transition1x, containing 9.6 million Density Functional Theory (DFT) calculations of forces and energies of molecular configurations on and around reaction pathways at the wB97x/6-31G(d) level of theory. The data was generated by running Nudged Elastic Band (NEB) calculations with DFT on 10k reactions while saving intermediate calculations. We train state-of-the-art equivariant graph message-passing neural network models on Transition1x and cross-validate on the popular ANI1x and QM9 datasets. We show that ML models cannot learn features in transition-state regions solely by training on hitherto popular benchmark datasets. Transition1x is a new challenging benchmark that will provide an important step towards developing next-generation ML force fields that also work far away from equilibrium configurations and for reactive systems.
Retrieval-Augmented Transformer for Image Captioning ; Image captioning models aim at connecting Vision and Language by providing natural language descriptions of input images. In the past few years, the task has been tackled by learning parametric models and proposing visual feature extraction advancements or by modeling better multimodal connections. In this paper, we investigate the development of an image captioning approach with a kNN memory, with which knowledge can be retrieved from an external corpus to aid the generation process. Our architecture combines a knowledge retriever based on visual similarities, a differentiable encoder, and a kNN-augmented attention layer to predict tokens based on the past context and on text retrieved from the external memory. Experimental results, conducted on the COCO dataset, demonstrate that employing an explicit external memory can aid the generation process and increase caption quality. Our work opens up new avenues for improving image captioning models at larger scale.
Reducing the Vision and Language Bias for Temporal Sentence Grounding ; Temporal sentence grounding (TSG) is an important yet challenging task in multimedia information retrieval. Although previous TSG methods have achieved decent performance, they tend to capture the selection biases of frequently appearing video-query pairs in the dataset rather than exhibit robust multimodal reasoning abilities, especially for rarely appearing pairs. In this paper, we study this issue of selection biases and accordingly propose a Debiasing-TSG (D-TSG) model to filter and remove the negative biases in both the vision and language modalities, enhancing the model's generalization ability. Specifically, we propose to alleviate the issue from two perspectives: (1) Feature distillation. We build a multimodal debiasing branch to first capture the vision and language biases, and then apply a bias identification module to explicitly recognize the true negative biases and remove them from the benign multimodal representations. (2) Contrastive sample generation. We construct two types of negative samples to compel the model to accurately learn the aligned multimodal semantics and perform complete semantic reasoning. We apply the proposed model to both commonly and rarely appearing TSG cases, and demonstrate its effectiveness by achieving state-of-the-art performance on three benchmark datasets: ActivityNet Captions, TACoS, and Charades-STA.
Integrable generalized Heisenberg ferromagnet equations with self-consistent potentials and related Yajima-Oikawa type equations ; We consider some nonlinear models describing interactions of long and short (LS) waves. Such LS models have been derived and proposed with various motivations, mainly coming from fluid and plasma physics. In this paper, we study some integrable LS models, namely, the Yajima-Oikawa equation, the Newell equation, the Ma equation, and the Geng-Li equation, among others. In particular, the gauge-equivalent counterparts of these integrable LS equations are found. In fact, these gauge equivalents of the LS equations are integrable generalized Heisenberg ferromagnet equations (HFE) with self-consistent potentials (HFE-SCP). The associated Lax representations of these HFE-SCP are given. We also present several spin-phonon equations which describe nonlinear interactions of the spin and lattice subsystems in ferromagnetic materials.
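For concreteness, the Yajima-Oikawa long-wave/short-wave resonance system, in one common normalization (sign and scaling conventions vary across the literature), reads

```latex
i S_t + S_{xx} = L S, \qquad L_t = 2\left(|S|^2\right)_x,
```

where $S(x,t)$ is the complex short-wave envelope and $L(x,t)$ the real long-wave field; the other LS models listed above modify the coupling and dispersion terms.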
Topological structure of complex predictions ; Complex prediction models, such as deep neural networks, are the output of fitting machine learning or AI models to a set of training data, and they are now standard tools in science. A key challenge with the current generation of models is that they are highly parameterized, which makes their prediction strategies difficult to describe and interpret. We use topological data analysis to transform these complex prediction models into pictures representing a topological view. The result is a map of the predictions that enables inspection. The methods scale up to large datasets across different domains and enable us to detect labeling errors in training data, understand generalization in image classification, and inspect predictions of likely pathogenic mutations in the BRCA1 gene.
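One common way topological data analysis produces such maps is the Mapper construction: cover the range of a filter function with overlapping intervals, cluster the points falling into each interval, and connect clusters that share points. The bare-bones sketch below is illustrative only; the paper's exact pipeline may differ:

```python
def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def mapper_graph(points, filter_fn, n_intervals=4, overlap=0.25, eps=0.5):
    """Bare-bones Mapper: overlapping interval cover of the filter range,
    single-linkage clustering (threshold eps) within each interval,
    edges between clusters that share points."""
    fvals = [filter_fn(p) for p in points]
    lo, hi = min(fvals), max(fvals)
    width = (hi - lo) / n_intervals or 1.0
    nodes = []  # each node is a set of point indices
    for b in range(n_intervals):
        a = lo + b * width - overlap * width
        c = lo + (b + 1) * width + overlap * width
        members = [i for i, f in enumerate(fvals) if a <= f <= c]
        parent = {i: i for i in members}  # union-find for single linkage
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i in members:
            for j in members:
                if i < j and dist(points[i], points[j]) <= eps:
                    parent[find(i)] = find(j)
        clusters = {}
        for i in members:
            clusters.setdefault(find(i), set()).add(i)
        nodes.extend(clusters.values())
    edges = [(u, v) for u in range(len(nodes))
             for v in range(u + 1, len(nodes)) if nodes[u] & nodes[v]]
    return nodes, edges
```

Applied to a model's predictions (e.g., with the predicted probability as the filter), the resulting graph is the kind of inspectable "map" the abstract describes.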
Interactive Evaluation of Dialog Track at DSTC9 ; The ultimate goal of dialog research is to develop systems that can be effectively used in interactive settings by real users. To this end, we introduced the Interactive Evaluation of Dialog Track at the 9th Dialog System Technology Challenge. This track consisted of two subtasks. The first subtask involved building knowledge-grounded response generation models. The second subtask aimed to extend dialog models beyond static datasets by assessing them in an interactive setting with real users. The track challenged participants to develop strong response generation models and to explore strategies that extend them to back-and-forth interactions with real users. The progression from static corpora to interactive evaluation introduces unique challenges and facilitates a more thorough assessment of open-domain dialog systems. This paper provides an overview of the track, including the methodology and results. Furthermore, it provides insights into how best to evaluate open-domain dialog models.
Central Limit Theorem in Disordered Monomer-Dimer Model ; We consider the disordered monomer-dimer model on general finite graphs with bounded degree, where both the edges and the vertices are equipped with i.i.d. random weights coming from two possibly different distributions. Under a finite fourth moment assumption on the weight distributions, we prove a Gaussian central limit theorem for the free energy of the associated Gibbs measure and also provide a rate of convergence in the Kolmogorov-Smirnov distance. The central limit theorem continues to hold under a nearly optimal finite (2+ε)-moment assumption on the weight distributions if the underlying graphs are further assumed to have uniformly subexponential volume growth. This generalizes a recent result by Dey and Krishnan (arXiv:2109.12716), who showed a Gaussian central limit theorem in the disordered monomer-dimer model on cylinder graphs. Our proof relies on the idea that the disordered monomer-dimer model exhibits a decay of correlations with high probability.
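Schematically, writing $F_n$ for the free energy of the Gibbs measure on a graph with $n$ vertices, the self-normalized form of such a central limit theorem is

```latex
\frac{F_n - \mathbb{E}[F_n]}{\sqrt{\operatorname{Var}(F_n)}}
\;\xrightarrow{\;d\;}\; \mathcal{N}(0,1),
```

and the quantitative version bounds the Kolmogorov-Smirnov distance $\sup_x \big|\mathbb{P}\!\left(\cdot \le x\right) - \Phi(x)\big|$ between the law of the left-hand side and the standard Gaussian, where $\Phi$ is the standard normal distribution function. (This is the generic shape of the statement; the paper's precise hypotheses and rates are given there.)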
Adaptive Latent Factor Analysis via Generalized Momentum-Incorporated Particle Swarm Optimization ; The stochastic gradient descent (SGD) algorithm is an effective learning strategy for building a latent factor analysis (LFA) model on a high-dimensional and incomplete (HDI) matrix. A particle swarm optimization (PSO) algorithm is commonly adopted to make an SGD-based LFA model's hyperparameters, i.e., the learning rate and the regularization coefficient, self-adaptive. However, a standard PSO algorithm may suffer accuracy loss caused by premature convergence. To address this issue, this paper incorporates more historical information into each particle's evolutionary process to avoid premature convergence, following the principle of the generalized-momentum (GM) method, thereby achieving a novel GM-incorporated PSO (GM-PSO). With it, a GM-PSO-based LFA (GMPL) model is further achieved to implement efficient self-adaptation of hyperparameters. The experimental results on three HDI matrices demonstrate that the GMPL model achieves higher prediction accuracy for missing data estimation in industrial applications.
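The idea of folding historical information into the velocity update can be sketched as a standard PSO with one extra momentum term over the previous velocity. This is a simplified illustration; the paper's exact GM update and its use for LFA hyperparameter adaptation may differ:

```python
import random

def gm_pso(f, dim=2, n_particles=20, iters=200,
           w=0.7, c1=1.5, c2=1.5, beta=0.1, seed=0):
    """Minimize f with a PSO whose velocity update keeps one extra step
    of history (a generalized-momentum term weighted by beta)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    prev_vs = [[0.0] * dim for _ in range(n_particles)]  # historical velocities
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                new_v = (w * vs[i][d]                          # inertia
                         + c1 * r1 * (pbest[i][d] - xs[i][d])  # cognitive pull
                         + c2 * r2 * (gbest[d] - xs[i][d])     # social pull
                         + beta * prev_vs[i][d])               # generalized momentum
                prev_vs[i][d], vs[i][d] = vs[i][d], new_v
                xs[i][d] += new_v
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest_f[i], pbest[i] = fx, list(xs[i])
                if fx < gbest_f:
                    gbest_f, gbest = fx, list(xs[i])
    return gbest, gbest_f
```

In the GMPL setting, each particle's position would encode a (learning rate, regularization coefficient) pair and f would be the validation error of the SGD-trained LFA model.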
Pricing zero-coupon CAT bonds using the enlargement of filtration theory: a general framework ; The main goal of this paper is to use the enlargement of filtration framework for pricing zero-coupon CAT bonds. For this purpose, we develop two models where the trigger event time is perfectly covered by an increasing sequence of stopping times with respect to a reference filtration. Hence, depending on the nature of these stopping times, the trigger event time can be either accessible or totally inaccessible. When some of these stopping times are not predictable, the trigger event time is totally inaccessible, and very tractable mathematical computations can be derived. When the stopping times are predictable, the trigger event time is accessible, and this case would be a meaningful choice for Model 1 from a practical point of view, since features like seasonality are already captured by quantities such as the stochastic intensity of the Poisson process. We compute the main tools for pricing the zero-coupon CAT bond and show that our constructions are more general than some existing models in the literature. We obtain closed-form prices of zero-coupon CAT bonds in Model 2, so we give an illustrative numerical example for the latter.
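The generic payoff structure being priced can be written as follows (a schematic form; the paper's models refine it through the construction of the trigger time $\tau$ and the enlarged filtration):

```latex
P(t,T) = \mathbb{E}\!\left[\, e^{-\int_t^T r_s \, ds}
\left( \mathbf{1}_{\{\tau > T\}} + R\,\mathbf{1}_{\{\tau \le T\}} \right)
\,\middle|\, \mathcal{G}_t \right],
```

where $T$ is maturity, $r$ the short rate, $R \in [0,1)$ the recovery fraction paid if the catastrophe trigger occurs before maturity, and $\mathcal{G}_t$ the enlarged filtration carrying information about $\tau$. Whether $\tau$ is accessible or totally inaccessible determines how this conditional expectation can be computed.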
Rethinking Degradation: Radiograph Super-Resolution via AID-SRGAN ; In this paper, we present a medical AttentIon Denoising Super-Resolution Generative Adversarial Network (AID-SRGAN) for radiographic image super-resolution. First, we present a medical practical degradation model that considers various degradation factors beyond downsampling. To the best of our knowledge, this is the first composite degradation model proposed for radiographic images. Furthermore, we propose AID-SRGAN, which can simultaneously denoise and generate high-resolution (HR) radiographs. In this model, we introduce an attention mechanism into the denoising module to make it more robust to complicated degradation. Finally, the SR module reconstructs the HR radiographs using the clean low-resolution (LR) radiographs. In addition, we propose a separate-joint training approach to train the model, and extensive experiments show that the proposed method is superior to its counterparts; e.g., it achieves a PSNR of 31.90 with a scale factor of 4, which is 7.05% higher than that obtained by the recent work SPSR [16]. Our dataset and code will be made available at https://github.com/yongsongH/AIDSRGAN-MICCAI2022.
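A composite degradation model of the general kind described, blur followed by downsampling followed by noise, can be sketched on a 2-D intensity array as below. This is a generic toy pipeline, not the paper's radiograph-specific model, which includes further degradation factors:

```python
import random

def degrade(img, scale=2, sigma=0.05, seed=0):
    """Toy composite degradation: 3x3 box blur (zero padding), then
    stride-`scale` downsampling, then additive Gaussian noise."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    def at(r, c):  # zero padding outside the image
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0.0
    blurred = [[sum(at(r + dr, c + dc) for dr in (-1, 0, 1)
                    for dc in (-1, 0, 1)) / 9.0
                for c in range(w)] for r in range(h)]
    down = [row[::scale] for row in blurred[::scale]]  # downsample by striding
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in down]
```

Training an SR model on pairs produced by such a pipeline, rather than on pure downsampling, is what makes it robust to realistic degradations.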
Gravitational wave interactions in Lambda_3 models of dark energy ; We argue that cubic-order interactions between two scalar gravitons and one tensor graviton are ubiquitous in models of dark energy where the strong coupling scale is Lambda_3. These interactions can potentially provide efficient decay channels for gravitational waves. They can also lead to gradient instabilities of the scalar perturbations in the presence of large-amplitude gravitational waves, e.g. those detected by LIGO/Virgo. In contrast with models in scalar-tensor theories, there is an infinite number of higher-order interactions in generic Lambda_3 models, which makes it difficult to predict the fate of these instabilities inferred from cubic-order interactions.
Trust-Aware Control of Automated Vehicles in Car-Following Interactions with Human Drivers ; Trust is essential for automated vehicles (AVs) to promote and sustain technology acceptance in human-dominated traffic scenarios. However, computational trust dynamic models describing the interactive relationship between AVs and surrounding human drivers in traffic rarely exist. This paper aims to fill this gap by developing a quantitative trust dynamic model of the human driver in the car-following interaction with the AV, and incorporating the proposed trust dynamic model into the AV's control design. The human driver's trust level is modeled as a plan evaluation metric that measures the explicability of the AV's plan from the human driver's perspective, and the explicability score of the AV's plan is integrated into the AV's decision-making process. With the proposed approach, trust-aware AVs generate explicable plans by optimizing both the predefined plans and the explicability of the plans in car-following interactions with the following human driver. The results collectively demonstrate that the trust-aware AV can generate more explicable plans and achieve a higher trust level for the human driver compared to a trust-unaware AV in human-AV interactions.
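At its simplest, integrating an explicability score into the decision process amounts to trading off plan cost against explicability. The helper below is a deliberately simplified, hypothetical stand-in for the paper's optimization, with `lam` weighting the trust term:

```python
def select_plan(plans, cost_fn, explicability_fn, lam=1.0):
    """Pick the plan minimizing cost minus lam times explicability,
    i.e., preferring plans the human driver can anticipate."""
    return min(plans, key=lambda p: cost_fn(p) - lam * explicability_fn(p))
```

With lam = 0 this reduces to trust-unaware cost minimization; increasing lam shifts the AV toward plans that are more explicable to the following driver.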
Probabilistic Amplitude Shaping and Nonlinearity Tolerance: Analysis and Sequence Selection Method ; Probabilistic amplitude shaping (PAS) is a practical means to achieve a shaping gain in optical fiber communication. However, PAS, and shaping in general, also affects the signal-dependent generation of nonlinear interference. This provides an opportunity for nonlinearity mitigation through PAS, which is also referred to as a nonlinear shaping gain. In this paper, we introduce a linear low-pass filter model that relates transmitted symbol-energy sequences to the nonlinear distortion experienced in an optical fiber channel. Based on this model, we conduct a nonlinearity analysis of PAS with respect to shaping blocklength and mapping strategy. Our model explains results and relationships found in the literature and can be used as a design tool for PAS with improved nonlinearity tolerance. We use the model to introduce a new metric for PAS with sequence selection. We perform simulations of selection-based PAS with various amplitude shapers and mapping strategies to demonstrate the effectiveness of the new metric in different optical fiber system scenarios.
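The flavor of a low-pass-filter-based metric can be illustrated as follows: smooth the symbol-energy sequence with a moving average and measure how much the filtered sequence still varies. This is a toy proxy, not the paper's filter or selection metric:

```python
def nli_proxy(symbol_energies, window=4):
    """Low-pass filter (causal moving average) the symbol-energy
    sequence and return the variance of the filtered sequence.
    Sequences whose energy varies slowly survive the filter and score
    high; rapidly alternating sequences are smoothed out and score low."""
    n = len(symbol_energies)
    filtered = []
    for i in range(n):
        win = symbol_energies[max(0, i - window + 1):i + 1]
        filtered.append(sum(win) / len(win))
    mean = sum(filtered) / n
    return sum((x - mean) ** 2 for x in filtered) / n
```

Sequence selection would then keep candidate amplitude sequences with low metric values, i.e., those expected to generate less nonlinear interference.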