Invariant Data-Driven Subgrid Stress Modeling on Anisotropic Grids for Large Eddy Simulation ; We present a new approach for constructing data-driven subgrid stress models for large eddy simulation of turbulent flows using anisotropic grids. The key to our approach is a Galilean-, rotationally-, reflectionally-, and unit-invariant model form that also embeds filter anisotropy in such a way that an important subgrid stress identity is satisfied. We use this model form to train a data-driven subgrid stress model using only a small amount of anisotropically filtered DNS data and a simple, inexpensive neural network architecture. A priori and a posteriori tests indicate that the trained data-driven model generalizes well to filter anisotropy ratios, Reynolds numbers, and flow physics outside the training dataset.
FedALA: Adaptive Local Aggregation for Personalized Federated Learning ; A key challenge in federated learning (FL) is the statistical heterogeneity that impairs the generalization of the global model on each client. To address this, we propose Federated learning with Adaptive Local Aggregation (FedALA), a method that captures the desired information in the global model for client models in personalized FL. The key component of FedALA is the Adaptive Local Aggregation (ALA) module, which adaptively aggregates the downloaded global model and the local model toward the local objective on each client, initializing the local model before training in each iteration. To evaluate the effectiveness of FedALA, we conduct extensive experiments with five benchmark datasets in the computer vision and natural language processing domains. FedALA outperforms eleven state-of-the-art baselines by up to 3.27% in test accuracy. Furthermore, applying the ALA module to other federated learning methods yields up to a 24.19% improvement in test accuracy.
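The element-wise blending at the heart of ALA can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions: the function names and the quadratic stand-in for the client's local objective are hypothetical, not the paper's implementation.

```python
import numpy as np

def ala_aggregate(local, glob, w):
    """Element-wise adaptive aggregation: theta = local + w * (glob - local).
    w = 0 keeps the local model; w = 1 adopts the downloaded global model."""
    return local + w * (glob - local)

def learn_ala_weights(local, glob, loss_grad, lr=0.5, steps=50):
    """Learn blending weights w (clipped to [0, 1]) by gradient descent on the
    client's local objective, evaluated at the aggregated parameters.
    loss_grad(theta) returns the gradient of the local loss at theta."""
    w = np.ones_like(local)  # start from plain overwrite by the global model
    for _ in range(steps):
        theta = ala_aggregate(local, glob, w)
        # chain rule: d loss / d w = (d loss / d theta) * (glob - local)
        w -= lr * loss_grad(theta) * (glob - local)
        w = np.clip(w, 0.0, 1.0)
    return w
```

With a toy quadratic objective pulling toward a per-client target, the learned weights keep global information only where it helps the local objective.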
CLIP: Train Faster with Less Data ; Deep learning models require an enormous amount of data for training. However, there has recently been a shift in machine learning from model-centric to data-centric approaches. In data-centric approaches, the focus is on refining and improving the quality of the data to improve the learning performance of models, rather than redesigning model architectures. In this paper, we propose CLIP, i.e., Curriculum Learning with Iterative data Pruning. CLIP combines two data-centric approaches, curriculum learning and dataset pruning, to improve model learning accuracy and convergence speed. The proposed scheme applies loss-aware dataset pruning to iteratively remove the least significant samples and progressively reduce the size of the effective dataset during curriculum learning. Extensive experiments performed on crowd density estimation models validate the notion behind combining the two approaches by reducing the convergence time and improving generalization. To our knowledge, the idea of data pruning as an embedded process in curriculum learning is novel.
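The iterative loss-aware pruning step can be sketched generically. This is a toy illustration with assumed names and a stubbed loss function, not the paper's training pipeline: each curriculum round drops the lowest-loss (least significant) samples and shrinks the effective dataset.

```python
import numpy as np

def prune_by_loss(indices, losses, keep_frac):
    """Loss-aware pruning sketch: keep the keep_frac fraction of samples
    with the highest loss, dropping the least significant ones."""
    order = np.argsort(losses)[::-1]          # highest loss first
    k = max(1, int(len(indices) * keep_frac))
    return indices[order[:k]]

def curriculum_with_pruning(n_samples, losses_fn, rounds=3, keep_frac=0.8):
    """Iteratively shrink the effective training set across curriculum rounds.
    losses_fn(idx) stands in for a per-sample loss evaluation of the model."""
    idx = np.arange(n_samples)
    history = [len(idx)]
    for _ in range(rounds):
        idx = prune_by_loss(idx, losses_fn(idx), keep_frac)
        history.append(len(idx))
    return idx, history
```

In a real system, `losses_fn` would run a forward pass of the current model over the candidate samples.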
Language Models as Agent Models ; Language models (LMs) are trained on collections of documents, written by individual human agents to achieve specific goals in an outside world. During training, LMs have access only to the text of these documents, with no direct evidence of the internal states of the agents that produced them, a fact often used to argue that LMs are incapable of modeling goal-directed aspects of human language production and comprehension. Can LMs trained on text learn anything at all about the relationship between language and use? I argue that LMs are models of intentional communication in a specific, narrow sense. When performing next-word prediction given a textual context, an LM can infer and represent properties of an agent likely to have produced that context. These representations can in turn influence subsequent LM generation in the same way that agents' communicative intentions influence their language. I survey findings from the recent literature showing that, even in today's non-robust and error-prone models, LMs infer and use representations of fine-grained communicative intentions and more abstract beliefs and goals. Despite the limited nature of their training data, they can thus serve as building blocks for systems that communicate and act intentionally.
INCLUSIFY: A benchmark and a model for gender-inclusive German ; Gender-inclusive language is important for achieving gender equality in languages with gender inflections, such as German. While stirring some controversy, it is increasingly adopted by companies and political institutions. A handful of tools have been developed to help people use gender-inclusive language by identifying instances of the generic masculine and providing suggestions for more inclusive reformulations. In this report, we define the underlying tasks in terms of natural language processing, and present a dataset and measures for benchmarking them. We also present a model that implements these tasks by combining an inclusive-language database with an elaborate sequence of processing steps via standard pretrained models. Our model achieves a recall of 0.89 and a precision of 0.82 in our benchmark for identifying exclusive language; and one of its top five suggestions is chosen in real-world texts in 44% of cases. We sketch how the area could be further advanced by training end-to-end models and using large language models; and we urge the community to include more gender-inclusive texts in their training data in order not to present an obstacle to the adoption of gender-inclusive language. Through these efforts, we hope to contribute to restoring justice in language and, to a small extent, in reality.
Image Inpainting via Iteratively Decoupled Probabilistic Modeling ; Generative adversarial networks (GANs) have achieved great success in image inpainting, yet they still have difficulty tackling large missing regions. In contrast, iterative probabilistic algorithms, such as autoregressive and denoising diffusion models, must be deployed with massive computing resources for decent results. To achieve high-quality results at low computational cost, we present a novel pixel spread model (PSM) that iteratively employs decoupled probabilistic modeling, combining the optimization efficiency of GANs with the prediction tractability of probabilistic models. As a result, our model selectively spreads informative pixels throughout the image in a few iterations, largely enhancing completion quality and efficiency. On multiple benchmarks, we achieve new state-of-the-art performance. Code is released at https://github.com/fenglinglwb/PSM.
A meshfree particle method for continuum modeling of granular flow ; Based on the continuum model for granular media developed in Dunatunga et al., we propose a meshfree generalized finite difference method for the simulation of granular flows. The model is an elasto-viscoplastic model with a yield criterion using the mu(I) rheology from Jop et al. The numerical procedure is based on a meshfree particle method with a least-squares approximation of the derivatives in the balance equations, combined with the numerical algorithm developed in Dunatunga et al. to compute the plastic stresses. The method is numerically tested and verified on several numerical experiments, including granular column collapse and rigid body motion in granular materials. For comparison, a nonlinear microscopic model from Lacaze et al. is implemented, and its results are compared to those obtained from the continuum model for granular column collapse and rigid body coupling to granular flow.
Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models ; Cutting-edge diffusion models produce images with high quality and customizability, enabling them to be used for commercial art and graphic design purposes. But do diffusion models create unique works of art, or are they replicating content directly from their training sets? In this work, we study image retrieval frameworks that enable us to compare generated images with training samples and detect when content has been replicated. Applying our frameworks to diffusion models trained on multiple datasets, including Oxford Flowers, CelebA, ImageNet, and LAION, we discuss how factors such as training set size impact rates of content replication. We also identify cases where diffusion models, including the popular Stable Diffusion model, blatantly copy from their training data.
Fresnel Microfacet BRDF: Unification of Polari-Radiometric Surface-Body Reflection ; Computer vision applications have heavily relied on the linear combination of Lambertian diffuse and microfacet specular reflection models to represent reflected radiance, which turns out to be physically incompatible and limited in applicability. In this paper, we derive a novel analytical reflectance model, which we refer to as the Fresnel Microfacet BRDF (FMBRDF) model, that is physically accurate and generalizes to various real-world surfaces. Our key idea is to model the Fresnel reflection and transmission of the surface microgeometry with a collection of oriented mirror facets, for both body and surface reflections. We carefully derive the Fresnel reflection and transmission for each microfacet, as well as the light transport between them in the subsurface. This physically grounded modeling also allows us to express the polarimetric behavior of reflected light in addition to its radiometric behavior. That is, FMBRDF unifies not only body and surface reflections but also light reflection in radiometry and polarization, representing them in a single model. Experimental results demonstrate its effectiveness in accuracy, expressive power, and image-based estimation.
Reliability Study of Battery Lives: A Functional Degradation Analysis Approach ; Renewable energy is critical for combating climate change, and a first step is the storage of electricity generated from renewable sources. Li-ion batteries are a popular kind of storage unit. Their continuous usage through charge-discharge cycles eventually leads to degradation, which can be visualized by plotting voltage discharge curves (VDCs) over discharge cycles. Studies of battery degradation have mostly concentrated on modeling degradation through one scalar measurement summarizing each VDC. Such simplification of the curves can lead to inaccurate predictive models. Here we analyze the degradation of rechargeable Li-ion batteries from a NASA dataset by modeling and predicting their full VDCs. With techniques from longitudinal and functional data analysis, we propose a new two-step predictive modeling procedure for functional responses residing on heterogeneous domains. We first predict the shapes and domain endpoints of VDCs using functional regression models, then integrate these predictions to perform a degradation analysis. Our approach is fully functional, allows the incorporation of usage information, produces predictions in curve form, and thus provides flexibility in the assessment of battery degradation. Through extensive simulation studies and cross-validated data analysis, our approach demonstrates better prediction than the existing approach of modeling degradation directly with aggregated data.
Forecasting Formation of a Tropical Cyclone Using Reanalysis Data ; The tropical cyclone (TC) formation process is one of the most complex natural phenomena, governed by various atmospheric, oceanographic, and geographic factors that vary with time and space. Despite several years of research, accurately predicting tropical cyclone formation remains a challenging task. While existing numerical models have inherent limitations, machine learning models fail to capture the spatial and temporal dimensions of the causal factors behind TC formation. In this study, we propose a deep learning model that can forecast the formation of a tropical cyclone with a lead time of up to 60 hours with high accuracy. The model uses high-resolution reanalysis data (ERA5, the ECMWF 5th-generation reanalysis) and best track data (IBTrACS, the International Best Track Archive for Climate Stewardship) to forecast tropical cyclone formation in six ocean basins of the world. For a 60-hour lead time, the models achieve accuracies in the range of 86.9% to 92.9% across the six ocean basins. The model takes about 5 to 15 minutes of training time, depending on the ocean basin and the amount of data used, and can predict within seconds, making it suitable for real-life usage.
Stochastic Modeling of Biofilm Formation with Bacterial Quorum Sensing ; Bacteria generally live in complicated structures called biofilms, consisting of communicating bacterial colonies and extracellular polymeric substance EPS. Since biofilms are related to detrimental effects such as infection or antibiotic resistance in different settings, it is essential to model their formation. In this paper, a stochastic model is proposed for biofilm formation, using bacterial quorum sensing QS. In this model, the biological processes in the biofilm formation are modeled as a chemical reaction network which includes bacterial reproduction, productions of autoinducer and EPS, and their diffusion. The modified explicit tauleap simulation algorithm is adapted based on the twostate QS mechanism. Our approach is validated by using the experimental results of textitPseudomonas putida IsoF bacteria for autoinducer and bacteria concentration. It is also shown that the percentage of EPS in the biofilm increases significantly after the state change in QS, while it decreases before QS is activated. The presented work shows how the biofilm growth can be modeled realistically by using the QS mechanism in stochastic simulations of chemical reactions.
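The explicit tau-leap update underlying such simulations can be sketched generically. The toy birth-death network, rate constants, and function names below are illustrative assumptions, not the paper's modified two-state QS algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_step(x, propensities, stoich, tau):
    """One explicit tau-leap step: fire each reaction channel a Poisson
    number of times with mean a_j(x) * tau, then apply the stoichiometry."""
    a = propensities(x)
    k = rng.poisson(a * tau)                 # firings per channel over [t, t+tau]
    return np.maximum(x + stoich.T @ k, 0)   # clamp populations at zero

# toy birth-death model: 0 -> B (rate c1), B -> 0 (rate c2 * B)
c1, c2 = 10.0, 0.1
propensities = lambda x: np.array([c1, c2 * x[0]])
stoich = np.array([[1], [-1]])               # rows: reactions, cols: species

x = np.array([0.0])
for _ in range(2000):
    x = tau_leap_step(x, propensities, stoich, 0.05)
```

For this toy model the population fluctuates around the stationary mean c1/c2 = 100; a full biofilm model would add channels for autoinducer and EPS production with QS-dependent rates.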
Considerations for Differentially Private Learning with Large-Scale Public Pretraining ; The performance of differentially private machine learning can be boosted significantly by leveraging the transfer learning capabilities of non-private models pretrained on large public datasets. We critically review this approach. We primarily question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving. We caution that publicizing these models, pretrained on Web data, as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy. Beyond the privacy considerations of using public data, we further question the utility of this paradigm. We scrutinize whether existing machine learning benchmarks are appropriate for measuring the ability of pretrained models to generalize to sensitive domains, which may be poorly represented in public Web data. Finally, we note that pretraining has been especially impactful for the largest available models: models sufficiently large to prohibit end users from running them on their own devices. Thus, deploying such models today could be a net loss for privacy, as it would require private data to be outsourced to a more compute-powerful third party. We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
Landscape approximation of the ground state eigenvalue for graphs and random hopping models ; We consider the localization landscape function u and ground state eigenvalue λ for operators on graphs. We first show that the maximum of the landscape function is comparable to the reciprocal of the ground state eigenvalue if the operator satisfies certain semigroup kernel upper bounds. This implies general upper and lower bounds on the landscape product λ‖u‖_∞ for several models, including the Anderson model and random hopping (bond-disordered) models, on graphs that are roughly isometric to Z^d, as well as on some fractal-like graphs such as the Sierpinski gasket graph. Next, we specialize to a random hopping model on Z, and show that as the size of the chain grows, the landscape product λ‖u‖_∞ approaches π²/8 for Bernoulli off-diagonal disorder, and has the same upper bound of π²/8 for Uniform(0,1) off-diagonal disorder. We also numerically study the random hopping model when the bandwidth (hopping distance) is greater than one, and provide strong numerical evidence that a similar approximation holds for low-lying energies in the spectrum.
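The landscape product is easy to reproduce numerically for the simplest case. The sketch below uses a Dirichlet Laplacian on a path (with optional diagonal, Anderson-style disorder, rather than the off-diagonal hopping disorder studied in the paper); for the free chain the product λ‖u‖_∞ tends to π²/8 ≈ 1.2337, and the general lower bound λ‖u‖_∞ ≥ 1 holds.

```python
import numpy as np

def landscape_product(n, diag_extra=None):
    """Build H = -Delta (plus optional diagonal disorder) on a path with n
    interior vertices and Dirichlet boundary, solve H u = 1 for the landscape
    function u, and return lambda_1 * max(u)."""
    H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    if diag_extra is not None:
        H += np.diag(diag_extra)
    u = np.linalg.solve(H, np.ones(n))       # landscape function
    lam = np.linalg.eigvalsh(H)[0]           # ground state eigenvalue
    return lam * u.max()

# free chain: the landscape product approaches pi^2 / 8 as n grows
p = landscape_product(400)
```

The lower bound λ‖u‖_∞ ≥ 1 follows by pairing Hu = 1 against the positive ground state, which makes it a convenient sanity check for the disordered case as well.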
Are Multimodal Models Robust to Image and Text Perturbations? ; Multimodal image-text models have shown remarkable performance in the past few years. However, evaluating their robustness against distribution shifts is crucial before adopting them in real-world applications. In this paper, we investigate the robustness of 9 popular open-sourced image-text models under common perturbations on five tasks: image-text retrieval, visual reasoning, visual entailment, image captioning, and text-to-image generation. In particular, we propose several new multimodal robustness benchmarks by applying 17 image perturbation and 16 text perturbation techniques on top of existing datasets. We observe that multimodal models are not robust to image and text perturbations, especially to image perturbations. Among the tested perturbation methods, character-level perturbations constitute the most severe distribution shift for text, and zoom blur is the most severe shift for image data. We also introduce two new robustness metrics, MMI and MOR, for proper evaluation of multimodal models. We hope our extensive study sheds light on new directions for the development of robust multimodal models.
Efficient Long Sequence Modeling via State Space Augmented Transformer ; Transformer models have achieved superior performance in various natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits their practicality for long sequences. Existing attention variants improve computational efficiency but have limited ability to effectively compute global information. In parallel to Transformer models, state space models (SSMs) are tailored for long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for State sPace AugmenteD TransformEr. Specifically, we augment the bottom layer of SPADE with an SSM and employ efficient local attention methods for the other layers. The SSM provides global information, which compensates for the lack of long-range dependency modeling in local attention methods. Experimental results on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pretrain large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
Werewolf Among Us: A Multimodal Dataset for Modeling Persuasion Behaviors in Social Deduction Games ; Persuasion modeling is a key building block for conversational agents. Existing works in this direction are limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multiplayer social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset, code, and models can be found at https://persuasiondeductiongame.socialaidata.org.
Emergent Analogical Reasoning in Large Language Models ; The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of GPT-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven's Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
Multilingual Sequence-to-Sequence Models for Hebrew NLP ; Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Hebrew LMs focused on encoder-only models. While the encoder-only architecture is beneficial for classification tasks, it does not cater well to sub-word prediction tasks, such as Named Entity Recognition, given the morphologically rich nature of Hebrew. In this paper we argue that sequence-to-sequence generative architectures are more suitable for LMs in the case of morphologically rich languages (MRLs) such as Hebrew. We demonstrate that by casting tasks in the Hebrew NLP pipeline as text-to-text tasks, we can leverage powerful multilingual, pretrained sequence-to-sequence models such as mT5, eliminating the need for a specialized, morpheme-based, separately fine-tuned decoder. Using this approach, our experiments show substantial improvements over previously published results on existing Hebrew NLP benchmarks. These results suggest that multilingual sequence-to-sequence models present a promising building block for NLP in MRLs.
Unnatural Instructions: Tuning Language Models with Almost No Human Labor ; Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions, a large dataset of creative and diverse instructions collected with virtually no human labor. We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually curated datasets, surpassing the performance of models such as T0 and Tk-Instruct across various benchmarks. These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification.
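The two-phase collection loop (elicit a fourth example from three seeds, then expand by rephrasing) can be sketched as follows. The `complete` callable is a hypothetical stand-in for an LLM API call, and the prompt wording is illustrative, not the paper's actual prompts.

```python
import random

def collect_unnatural_instructions(complete, seed_tasks, n_core, n_paraphrases=3):
    """Sketch of the two-phase collection loop.
    Phase 1: prompt with three seed examples and elicit a fourth.
    Phase 2: expand each collected instruction by asking for rephrasings."""
    core = []
    while len(core) < n_core:
        shots = random.sample(seed_tasks, 3)
        prompt = "\n\n".join(f"Example {i + 1}: {s}" for i, s in enumerate(shots))
        core.append(complete(prompt + "\n\nExample 4:"))
    expanded = list(core)
    for inst in core:
        for _ in range(n_paraphrases):
            expanded.append(complete(f"Rephrase the instruction: {inst}"))
    return expanded
```

With n_paraphrases=3, each core example yields three extra variants, roughly matching the ~4x expansion from 64,000 to 240,000 examples described above.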
Derivation and Extensions of the Tolles-Lawson Model for Aeromagnetic Compensation ; This note is intended to serve as a straightforward reference that summarizes and expands on the linear aeromagnetic compensation model first introduced by Tolles and Lawson in 1950. The Tolles-Lawson model provides a simple, physical representation of an aircraft's magnetic field, composed of permanent, induced, and eddy-current terms, and applies an approximation (a Taylor expansion) to enable fitting coefficients with a general linear model. Here, the Tolles-Lawson model is derived with stricter attention to where assumptions are made, the model calibration procedure is described, and some additional comments on a second-order correction and a means of constructing the vector aircraft field are provided.
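The general linear model structure can be sketched as follows. This is a simplified illustration with assumed names: 3 permanent terms (direction cosines), 6 induced terms (cosine products), and 9 eddy-current terms (cosine times cosine-rate products), fit by ordinary least squares. It omits the field-magnitude scaling and filtering details of a full Tolles-Lawson calibration.

```python
import numpy as np

def tolles_lawson_design(cx, cy, cz):
    """Assemble a simplified 18-column Tolles-Lawson design matrix from
    direction-cosine time series: permanent, induced, and eddy-current terms."""
    dcx, dcy, dcz = (np.gradient(c) for c in (cx, cy, cz))
    perm = [cx, cy, cz]
    ind = [cx * cx, cx * cy, cx * cz, cy * cy, cy * cz, cz * cz]
    eddy = [c * d for c in (cx, cy, cz) for d in (dcx, dcy, dcz)]
    return np.column_stack(perm + ind + eddy)

def fit_coefficients(A, total_field):
    """Least-squares fit of the compensation coefficients."""
    coef, *_ = np.linalg.lstsq(A, total_field, rcond=None)
    return coef
```

Because the squared cosines sum to one, some columns are nearly collinear in practice; lstsq returns the minimum-norm solution, and the fitted aircraft field (not the individual coefficients) is the quantity subtracted during compensation.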
Recognition and reconstruction of cell differentiation patterns with deep learning ; Cell lineage decisions occur in three-dimensional spatial patterns that are difficult to identify by eye. There is an ongoing effort to replicate such patterns using mathematical modeling. One approach uses long-range cell-cell communication to replicate common spatial arrangements like checkerboard and engulfing patterns. In this model, cell-cell communication is implemented as a signal that disperses throughout the tissue. Meanwhile, machine learning models have been developed for pattern recognition and pattern reconstruction tasks. We combined synthetic data generated by the mathematical model with deep learning algorithms to recognize and reconstruct spatial cell fate patterns in organoids of mouse embryonic stem cells. A graph neural network was developed and trained on synthetic data from the model. Application to in vitro data predicted a low signal dispersion value. To test this result, we implemented a multilayer perceptron for the prediction of a given cell fate based on the fates of the neighboring cells. The results show 70% accuracy of cell fate reconstruction based on the nine nearest neighbors of a cell. Overall, our approach combines deep learning with mathematical modeling to link cell fate patterns with potential underlying mechanisms.
A Differential Approach for Data and Classification Service-based Privacy-Preserving Machine Learning Model in Cloud Environment ; The massive upsurge in computation and storage has driven local data and machine learning applications to the cloud environment. Owners may not fully trust the cloud environment, as it is managed by third parties, so maintaining privacy while sharing data and the classifier with several stakeholders is a critical challenge. This paper proposes a novel model, based on differential privacy and machine learning, that enables multiple owners to share their data for utilization and the classifier to render classification services for users in the cloud environment. To process owners' data and the classifier, the model specifies a communication protocol among the various untrustworthy parties. The proposed model also provides a robust mechanism to preserve the privacy of the data and the classifier. Experiments are conducted with a naive Bayes classifier over numerous datasets to measure the proposed model's efficiency. The results demonstrate that the proposed model achieves accuracy, precision, recall, and F1-score up to 94%, 95%, 94%, and 94%, with improvements up to 16.95%, 20.16%, 16.95%, and 23.33%, respectively, compared with state-of-the-art works.
emIAM v1.0: an emulator for Integrated Assessment Models using marginal abatement cost curves ; We developed an emulator for Integrated Assessment Models (emIAM) based on a marginal abatement cost (MAC) curve approach. Using the output of IAMs in the ENGAGE Scenario Explorer and the GET model, we derived a large set of MAC curves: ten IAMs; the globe and eleven regions; three gases (CO2, CH4, and N2O); eight portfolios of available mitigation technologies; and two emission sources. We tested the performance of emIAM by coupling it with a simple climate model, ACC2. We found that the optimizing climate-economy model emIAM-ACC2 adequately reproduced a majority of original IAM emission outcomes under similar conditions, allowing systematic explorations of IAMs with small computational resources. emIAM can expand the capability of simple climate models as a tool to calculate cost-effective pathways linked directly to a temperature target.
A constrained cosmological model in f(R,Lm) gravity ; In this article, we study the expanding nature of the universe in the context of f(R,Lm) gravity theory, where R represents the Ricci scalar and Lm is the matter Lagrangian density. With a specific form of f(R,Lm), we obtain the field equations for the flat FLRW metric. We parametrize the deceleration parameter in terms of the Hubble parameter, obtaining four free parameters, which are constrained and estimated by using H(z), Pantheon, and their joint data sets. Further, we investigate the evolution of the deceleration parameter, which depicts a transition from the deceleration to the acceleration phase of the universe. The evolution of the energy density, pressure, and EoS parameters shows that the present model is an accelerated quintessence dark energy model. To compare our model with the Lambda CDM model, we use several diagnostic techniques. We find that our model in f(R,Lm) gravity supports recent standard observational studies and delineates the late-time cosmic acceleration.
Exact solution of the position-dependent mass Schrödinger equation with the completely positive oscillator-shaped quantum well potential ; Two exactly solvable confined models of the completely positive oscillator-shaped quantum well are proposed. Exact solutions of the position-dependent mass Schrödinger equation corresponding to the proposed quantum well potentials are presented. It is shown that the discrete energy spectrum expressions of both models depend on certain positive confinement parameters. The spectrum exhibits positive equidistant behavior for the model confined by an infinitely high wall on only one side, and non-equidistant behavior for the model confined by infinitely high walls on both sides. Wavefunctions of the stationary states of the models under construction are expressed through the Laguerre and Jacobi polynomials. In general, the Jacobi polynomials appearing in the wavefunctions depend on parameters a and b, whereas the Laguerre polynomials depend only on the parameter a. Some limits and special cases of the constructed models are discussed.
Lossy Micromaser Battery: Almost Pure States in the Jaynes-Cummings Regime ; We consider a micromaser model of a quantum battery, where the battery is a single mode of the electromagnetic field in a cavity, charged via repeated interactions with a stream of qubits, all prepared in the same nonequilibrium state, either incoherent or coherent, with the matter-field interaction modeled by the Jaynes-Cummings model. We show that the coherent protocol is superior to the incoherent one, in that an effectively pure steady state is achieved for generic values of the model parameters. Finally, we supplement the above collision model with cavity losses, described by a Lindblad master equation. We show that battery performance, in terms of stored energy, charging power, and steady-state purity, is only slightly degraded up to moderate dissipation rates. Our results show that micromasers are robust and reliable quantum batteries, making them a promising model for experimental implementations.
Bayesian Weapon System Reliability Modeling with Cox-Weibull Neural Network ; We propose to integrate weapon system features (such as weapon system manufacturer, deployment time and location, storage time and location, etc.) into a parameterized Cox-Weibull [1] reliability model via a neural network, like DeepSurv [2], to improve predictive maintenance. In parallel, we develop an alternative Bayesian model by parameterizing the Weibull parameters with a neural network and employing dropout methods, such as Monte Carlo (MC) dropout, for comparative purposes. Due to data collection procedures in weapon system testing, we employ a novel interval-censored log-likelihood that incorporates Markov chain Monte Carlo (MCMC) [3] sampling of the Weibull parameters during gradient descent optimization. We compare classification metrics such as receiver operating characteristic (ROC) area under the curve (AUC), precision-recall (PR) AUC, and F scores to show that our model generally outperforms traditional powerful models such as XGBoost and the current standard conditional Weibull probability density estimation model.
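The interval-censored Weibull log-likelihood can be sketched in a few lines. This is a minimal NumPy illustration of the censoring scheme only; the paper additionally parameterizes the Weibull parameters with a neural network and draws MCMC samples of them, which is not reproduced here.

```python
import numpy as np

def weibull_sf(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t / scale) ** shape)."""
    return np.exp(-np.power(np.asarray(t, dtype=float) / scale, shape))

def interval_censored_loglik(lo, hi, shape, scale):
    """Log-likelihood when each failure is only known to lie in (lo, hi]:
    the contribution of one unit is log(S(lo) - S(hi)).
    Use hi = np.inf to encode a right-censored (still surviving) unit."""
    p = weibull_sf(lo, shape, scale) - weibull_sf(hi, shape, scale)
    return float(np.sum(np.log(np.clip(p, 1e-300, None))))
```

For shape = 1 this reduces to the exponential case, which gives closed-form interval probabilities to check against.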
Development of a novel nonlinear dynamic cavitation model and its numerical validations ; Aiming at modeling cavitation bubble clusters, we propose a novel nonlinear dynamic cavitation model (NDCM) that retains the second-derivative term in the Rayleigh-Plesset equation through strict mathematical derivation. The new model offers two improvements: (i) the empirical coefficients are eliminated by introducing the non-uniform potential functions ψ_v and ψ_c for the growth and collapse processes, respectively, and (ii) only two model parameters are required, both based on physical quantities: the Blake critical radius R_b and the average maximum growth radius R_m. The corresponding cavitation solver was developed in OpenFOAM, in which we implemented the modified momentum interpolation (MMI) method to ensure that the calculated results are independent of the time step size. Three validation cases are employed: numerical bubble cluster collapse, an ultrasonic horn experiment, and hydrodynamic cavitation around a slender body. The results indicate that ψ_v and ψ_c accurately capture the nonlinear characteristics of the cavity, and that R_b and R_m connect the cavitation model to actual physical quantities. Moreover, we discuss the potential of the NDCM for general application to cavitating flows with dispersed bubbly clouds.
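For reference, a standard textbook form of the Rayleigh-Plesset equation, whose second-derivative term the NDCM retains, reads (the notation here is the common one and may differ from the paper's):

```latex
R\ddot{R} + \frac{3}{2}\dot{R}^{2}
  = \frac{p_B(t) - p_\infty(t)}{\rho_l}
  - \frac{4\nu_l \dot{R}}{R}
  - \frac{2\sigma}{\rho_l R}
```

where R(t) is the bubble radius, p_B the pressure inside the bubble, p_∞ the far-field liquid pressure, ρ_l and ν_l the liquid density and kinematic viscosity, and σ the surface tension.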
Modeling of Four-Winged Micro Ornithopters Inspired by Dragonflies ; In this paper, we present a full dynamical model of a four-winged micro ornithopter inspired by a dragonfly-type insect. The micro ornithopter is modeled as four articulated rigid-body components (wings) connected to the main body via spherical joints. The dynamical model is derived using Lagrangian mechanics with intrinsic global coordinates, without relying on the common assumptions that neglect the wing-body interactions. Furthermore, the aerodynamic forces are modeled under the quasi-steady motion assumption without restricting the flapping frequency to be relatively high. This provides a full and elegant four-winged micro ornithopter model that captures the interaction between the body and the wings while avoiding the complexities and singularities associated with other coordinate representations (e.g., Euler angles). Simulation studies of the inertial effects of the relative motion between the different parts of the multibody system show the importance of considering the forces and torques resulting from the wing-body interaction in motion generation of these insects.
A review of clustering models in educational data science towards fairness-aware learning ; Ensuring fairness is essential for every education system. Machine learning increasingly supports the education system and the educational data science (EDS) domain, from decision support to educational activities and learning analytics. However, machine learning-based decisions can be biased because the algorithms may generate results based on students' protected attributes such as race or gender. Clustering is an important machine learning technique for exploring student data in order to support the decision-maker, as well as educational activities such as group assignments. Therefore, ensuring high-quality clustering models that also satisfy fairness constraints is an important requirement. This chapter comprehensively surveys clustering models and their fairness in EDS. We especially focus on investigating fair clustering models applied in educational activities. These models are believed to be practical tools for analyzing students' data and ensuring fairness in EDS.
Aether scalar tensor theory confronted with weak lensing data at small accelerations ; The recently proposed aether scalar tensor (AeST) model reproduces both the successes of particle dark matter on cosmological scales and those of modified Newtonian dynamics (MOND) on galactic scales. But the AeST model reproduces MOND only up to a certain maximum galactocentric radius. Since MOND is known to fit observations at these scales very well, this raises the question of whether the AeST model comes into tension with data. We tested whether the AeST model is in conflict with observations using a recent analysis of weak gravitational lensing data. We solved the equations of motion of the AeST model, analyzed the solutions' behavior, and compared the results to observational data. The AeST model shows some deviations from MOND at the radii probed by weak gravitational lensing, but the data show no clear indication of these predicted deviations.
Proca-Higgs balls and stars in a UV completion for Proca self-interactions ; We consider a Proca-Higgs model wherein a complex vector field gains mass via spontaneous symmetry breaking, by coupling to a real scalar field with a Higgs-type potential. This vector version of the scalar Friedberg-Lee-Sirlin model can be considered a UV completion of a complex Proca model with self-interactions. We study the flat-spacetime and self-gravitating solitons of the model, which we dub Proca-Higgs "balls" and "stars" respectively, exploring the domain of solutions and describing some of their mathematical and physical properties. The stars reduce to the well-known mini-Proca stars in some limits. The full model evades the hyperbolicity problems of the self-interacting Proca models, offering novel possibilities for dynamical studies beyond mini-Proca stars.
KAER: A Knowledge Augmented Pre-Trained Language Model for Entity Resolution ; Entity resolution has been an essential and well-studied task in data cleaning research for decades. Existing work has discussed the feasibility of utilizing pre-trained language models to perform entity resolution and achieved promising results. However, few works have discussed injecting domain knowledge to improve the performance of pre-trained language models on entity resolution tasks. In this study, we propose Knowledge Augmented Entity Resolution (KAER), a novel framework that augments pre-trained language models with external knowledge for entity resolution. We discuss the results of utilizing different knowledge augmentation and prompting methods to improve entity resolution performance. Our model improves on Ditto, the existing state-of-the-art entity resolution method. In particular, (1) KAER performs more robustly and achieves better results on dirty data; (2) with more general knowledge injection, KAER outperforms the existing baseline models on the textual dataset and the dataset from the online product domain; and (3) KAER achieves competitive results on highly domain-specific datasets, such as citation datasets, which require the injection of expert knowledge in future work.
Existence of solutions to a class of one-dimensional models for pedestrian evacuations ; In the framework inspired by R. L. Hughes' model (Transp. Res. B, 2002) for pedestrian evacuation in a corridor, we establish existence of a solution by a topological fixed-point argument. This argument applies to a class of models where the dynamics of the pedestrian density ρ, governed by a discontinuous-flux Lighthill-Whitham-Richards model ρ_t + (sign(x − ξ(t)) ρ v(ρ))_x = 0, is coupled via an abstract operator to the computation of a Lipschitz continuous turning curve ξ. We illustrate this construction by several examples, including the standard Hughes model with affine cost, either with open-end conditions or with conditions corresponding to panic behaviour with capacity drop at exits. Other examples put forward versions of the Hughes model with inertial dynamics of the turning curve and general costs.
Infusing Commonsense World Models with Graph Knowledge ; While language models have become more capable of producing compelling language, we find there are still gaps in maintaining consistency, especially when describing events in a dynamically changing world. We study the setting of generating narratives in an open-world text adventure game, where a graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions. We build a large set of tasks by combining crowdsourced and simulated gameplays with a novel dataset of complex actions in order to construct such models. We find it is possible to improve the consistency of action narration models by training on graph contexts and targets, even if graphs are not present at test time. This is shown both in automatic metrics and human evaluations. We plan to release our code, the new set of tasks, and the best-performing models.
Learning Customized Visual Models with Retrieval-Augmented Knowledge ; Image-text contrastive learning models such as CLIP have demonstrated strong task transfer ability. The high generality and usability of these visual models is achieved via a web-scale data collection process to ensure broad concept coverage, followed by expensive pre-training to feed all the knowledge into model weights. Alternatively, we propose REACT (REtrieval-Augmented CusTomization), a framework to acquire the relevant web knowledge to build customized visual models for target domains. We retrieve the most relevant image-text pairs (about 3% of CLIP pre-training data) from the web-scale database as external knowledge, and propose to customize the model by only training new modularized blocks while freezing all the original weights. The effectiveness of REACT is demonstrated via extensive experiments on classification, retrieval, detection and segmentation tasks, including zero-, few-, and full-shot settings. In particular, on the zero-shot classification task, compared with CLIP, it achieves up to 5.4% improvement on ImageNet and 3.7% on the ELEVATER benchmark (20 datasets).
Syntax-guided Neural Module Distillation to Probe Compositionality in Sentence Embeddings ; Past work probing compositionality in sentence embedding models faces issues in determining the causal impact of implicit syntax representations. Given a sentence, we construct a neural module net based on its syntax parse and train it end-to-end to approximate the sentence's embedding generated by a transformer model. The distillability of a transformer to a Syntactic NeurAl Module Net (SynNaMoN) then captures whether syntax is a strong causal model of its compositional ability. Furthermore, we address questions about the geometry of semantic composition by specifying individual SynNaMoN modules' internal architecture and linearity. We find differences in the distillability of various sentence embedding models that broadly correlate with their performance, but observe that distillability does not vary considerably with model size. We also present preliminary evidence that much syntax-guided composition in sentence embedding models is linear, and that non-linearities may serve primarily to handle non-compositional phrases.
Material-based analysis of spin-orbital Mott insulators ; We present a framework for analyzing Mott insulators using a material-based tight-binding model. We start with a realistic multi-orbital Hubbard model and derive an effective model for the localized electrons through second-order perturbation theory with respect to intersite hopping. This effective model, known as the Kugel-Khomskii model, is described by SU(N) generators, where N is the number of localized states. We solve this model by the mean-field theory that takes local correlations into account and reveal spin-orbital ordered states. To include spatial correlations, we apply classical Monte Carlo based on the path-integral approach with SU(N) coherent states, and also derive the equation of motion for the spin-orbital degrees of freedom. Our approach is applicable to any Mott insulator with reasonable computational cost. The 5d pyrochlore oxide is used here as a demonstration.
A Ray-tracing and Deep Learning Fusion Super-resolution Modeling Method for Wireless Mobile Channel ; Mobile channel modeling has always been a core part of the design, deployment and optimization of communication systems, especially in the 5G and beyond era. Deterministic channel modeling can describe the mobile channel precisely, but it suffers from equipment limitations and long computation times. In this paper, we propose a novel super-resolution (SR) model for cluster characteristics prediction. The model is based on deep neural networks with residual connections. A series of simulations at 3.5 GHz are conducted by a three-dimensional ray-tracing (RT) simulator in diverse scenarios. Cluster characteristics are extracted and corresponding datasets are constructed to train the model. Experiments demonstrate that the proposed SR approach achieves better power and cluster location prediction performance than the traditional interpolation method, with the root mean square error (RMSE) dropping by 51% and 78%, respectively. The channel impulse response (CIR) is reconstructed based on the cluster characteristics and matches the multipath components (MPCs) well. The proposed method can be used to efficiently and accurately generate big data of the mobile channel, significantly reducing the computation time of RT-only simulation.
Exact linear reductions of dynamical models ; Dynamical models described by ordinary differential equations (ODEs) are a fundamental tool in the sciences and engineering. Exact reduction aims at producing a lower-dimensional model in which each macro-variable can be directly related to the original variables, and it is thus a natural step towards the model's formal analysis and mechanistic understanding. We present an algorithm which, given a polynomial ODE model, computes a longest possible chain of exact linear reductions of the model such that each reduction refines the previous one, thus giving the user control of the level of detail preserved by the reduction. This significantly generalizes over the existing approaches, which compute only the reduction of the lowest dimension subject to an approach-specific constraint. The algorithm reduces finding exact linear reductions to a question about representations of finite-dimensional algebras. We provide an implementation of the algorithm, demonstrate its performance on a set of benchmarks, and illustrate the applicability via case studies. Our implementation is freely available at https://github.com/x3042/ExactODEReduction.jl
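To make the notion of an exact linear reduction concrete, here is a toy check for the purely linear case x' = Mx (an illustration of ours, not the paper's algorithm, which handles polynomial ODEs via representations of finite-dimensional algebras): the macro-variables y = Ax obey a self-contained ODE y' = By exactly when AM = BA for some matrix B, i.e., when the row space of A is invariant under M.

```python
import numpy as np

def exact_linear_reduction(A, M, tol=1e-9):
    """For x' = M x, test whether y = A x satisfies a closed ODE
    y' = B y, and return (is_exact, B). The candidate B = A M A^+
    (A^+ = Moore-Penrose pseudoinverse) is exact iff B A == A M."""
    B = A @ M @ np.linalg.pinv(A)
    return bool(np.allclose(B @ A, A @ M, atol=tol)), B

# y = x1 + x2 is an exact reduction of x1' = x2, x2' = x1 (y' = y);
# y = x1 alone is not, since x1' = x2 cannot be written in terms of x1.
M = np.array([[0.0, 1.0], [1.0, 0.0]])
ok, B = exact_linear_reduction(np.array([[1.0, 1.0]]), M)
```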
Understanding the Effectiveness of Very Large Language Models on Dialog Evaluation ; Language models have steadily increased in size over the past few years. They achieve a high level of performance on various natural language processing (NLP) tasks such as question answering and summarization. Large language models (LLMs) have been used for generation and can now output human-like text. Due to this, there are other downstream tasks in the realm of dialog that can now harness the LLMs' language understanding capabilities. Dialog evaluation is one such task, and it is the focus of this paper, which concentrates on prompting with LLMs: BLOOM, OPT, GPT-3, Flan-T5, InstructDial and TNLGv2. The paper shows that the choice of datasets used for training a model contributes to how well it performs on a task as well as to how the prompt should be structured. Specifically, the more diverse and relevant the group of datasets that a model is trained on, the better it performs at dialog evaluation. This paper also investigates how the number of examples in the prompt and the type of example selection used affect the model's performance.
On Pre-trained Language Models for Antibody ; Antibodies are vital proteins offering robust protection for the human body from pathogens. The development of general protein and antibody-specific pre-trained language models both facilitate antibody prediction tasks. However, there have been limited studies that comprehensively explore the representation capability of distinct pre-trained language models on different antibody tasks. To investigate the problem, we aim to answer several key questions in this paper, such as how pre-trained language models perform in antibody tasks with different specificity, and how introducing specific biological mechanisms to the pre-training process can benefit the model. Additionally, we evaluate whether the learned antibody pre-trained representations can be applied to real-world antibody problems, like drug discovery and immune process understanding. Previously, the lack of an available benchmark largely hindered the study of these questions. To aid in our investigation, we provide an AnTibody Understanding Evaluation (ATUE) benchmark. We comprehensively evaluate the performance of protein pre-trained language models through empirical study, along with conclusions and new insights. Our ATUE and code are released at https://github.com/dqwang122/EATLM
Study of human red blood cell geometry using digital holographic microscopy and mathematical models for cell shape in biomedical imaging ; The shape of red blood cells is a critical factor in their characterization, and from this point of view their geometrical modeling becomes essential. We assess the suitability of three frequently used analytical models for modeling the geometrical shape and size of human red blood cells in light scattering experiments and computer simulation studies of the biophysical properties of the cell membrane. The 2D and 3D thickness profiles of healthy RBCs have been generated from the parametric equations of these models and compared to the thickness profiles obtained experimentally using digital holographic microscopy. The study reveals that the models considering the biomechanical properties of cell membranes provide a better description of the biconcave discoid shape of the RBCs. Statistical distributions and descriptive statistics of the geometrical parameters of the RBCs suggest that the evaluation of these parameters alone is insufficient for identifying cells of a specific shape, which is crucial for diagnosis using biomedical imaging techniques.
DiffSTG: Probabilistic Spatio-Temporal Graph Forecasting with Denoising Diffusion Models ; Spatio-temporal graph neural networks (STGNNs) have emerged as the dominant model for spatio-temporal graph (STG) forecasting. Despite their success, they fail to model intrinsic uncertainties within STG data, which cripples their practicality in downstream tasks for decision-making. To this end, this paper focuses on probabilistic STG forecasting, which is challenging due to the difficulty in modeling uncertainties and complex ST dependencies. In this study, we present the first attempt to generalize the popular denoising diffusion probabilistic models to STGs, leading to a novel non-autoregressive framework called DiffSTG, along with the first denoising network, UGnet, for STGs in the framework. Our approach combines the spatio-temporal learning capabilities of STGNNs with the uncertainty measurements of diffusion models. Extensive experiments validate that DiffSTG reduces the Continuous Ranked Probability Score (CRPS) by 4%-14%, and the Root Mean Squared Error (RMSE) by 2%-7%, over existing methods on three real-world datasets.
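The CRPS metric reported above can be estimated directly from a probabilistic model's forecast samples. A standard sample-based estimator (our illustration, not code from the paper) is CRPS(F, y) = E|X − y| − (1/2) E|X − X'|:

```python
import numpy as np

def crps_ensemble(samples, obs):
    """Sample-based CRPS for a scalar observation `obs` given forecast
    draws `samples`: E|X - y| - 0.5 * E|X - X'|. Lower is better; for
    a degenerate (point) forecast it reduces to the absolute error."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - obs))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return float(term1 - term2)
```

Averaging this over all nodes and horizons of an STG forecast gives the aggregate score a model like DiffSTG would report.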
TransFool: An Adversarial Attack against Neural Machine Translation Models ; Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial attacks. In this paper, we investigate the vulnerability of Neural Machine Translation (NMT) models to adversarial attacks and propose a new attack algorithm called TransFool. To fool NMT models, TransFool builds on a multi-term optimization problem and a gradient projection step. By integrating the embedding representation of a language model, we generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples. Experimental results demonstrate that, for different translation tasks and NMT architectures, our white-box attack can severely degrade the translation quality while the semantic similarity between the original and the adversarial sentences stays high. Moreover, we show that TransFool is transferable to unknown target models. Finally, based on automatic and human evaluations, TransFool leads to improvement in terms of success rate, semantic similarity, and fluency compared to existing attacks, in both white-box and black-box settings. Thus, TransFool permits us to better characterize the vulnerability of NMT models and outlines the necessity to design strong defense mechanisms and more robust NMT systems for real-life applications.
Computational Models of Solving Raven's Progressive Matrices: A Comprehensive Introduction ; Being widely used to measure human intelligence, Raven's Progressive Matrices (RPM) tests also pose a great challenge for AI systems. There is a long line of computational models for solving RPM, starting from the 1960s, developed either to understand the involved cognitive processes or solely for problem-solving purposes. Due to the dramatic paradigm shifts in AI research, especially the advent of deep learning models in the last decade, the computational studies on RPM have also changed a lot. Therefore, now is a good time to look back at this long line of research. As the title "a comprehensive introduction" indicates, this paper provides an all-in-one presentation of computational models for solving RPM, including the history of RPM, intelligence testing theories behind RPM, item design and automatic item generation of RPM-like tasks, a conceptual chronicle of computational models for solving RPM, which reveals the philosophy behind the technology evolution of these models, and suggestions for transferring between human intelligence testing and AI testing.
Infinite-volume states with irreducible localization sets for gradient models on trees ; We consider general classes of gradient models on regular trees with values in a countable Abelian group S, such as ℤ or ℤ_q, in regimes of strong coupling or low temperature. This includes unbounded spin models like the p-SOS model and finite-alphabet clock models. We prove the existence of families of distinct homogeneous tree-indexed Markov chain Gibbs states μ_A whose single-site marginals concentrate on a given finite subset A ⊂ S of spin values, under a strong coupling condition for the interaction, depending only on the cardinality |A| of A. The existence of such states is a new and robust phenomenon which is of particular relevance for infinite spin models. These states are not convex combinations of each other, and in particular the states with |A| ≥ 2 cannot be decomposed into homogeneous Markov-chain Gibbs states with a single-valued concentration center. As a further application of the method, we moreover obtain the existence of new types of ℤ-valued gradient Gibbs states, whose single-site marginals do not localize, but whose correlation structure depends on the finite set A.
Isospin-violating dark matter at liquid noble detectors: new constraints, future projections, and an exploration of target complementarity ; There is no known reason that dark matter interactions with the Standard Model should couple to neutrons and protons in the same way. This isospin violation can have large consequences, modifying the sensitivity of existing and future direct detection experimental constraints by orders of magnitude. Previous works in the literature have focused on the zero-momentum limit, which has its limitations when extended to the Non-Relativistic Effective Field Theory (NREFT) basis. In this paper, we study isospin violation in a detailed manner, paying specific attention to the experimental setups of liquid noble detectors. We analyse two effective Standard Model gauge-invariant models as interesting case studies, as well as the more model-independent NREFT operators. This work demonstrates the high degree of complementarity between the target nuclei xenon and argon. Most notably, we show that the Standard Model gauge-invariant formulation of the standard spin-dependent interaction often generates a sizeable response from argon, a target nucleus with zero spin. This work is meant as an update and a useful reference for model builders and experimentalists.
Stabilized In-Context Learning with Pre-trained Language Models for Few-Shot Dialogue State Tracking ; Prompt-based methods with large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks. These models improve even further with the addition of a few labeled in-context exemplars to guide output generation. However, for more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial, leading to unstable results. Furthermore, building in-context exemplars for dialogue tasks is difficult because conversational contexts are long while model input lengths are relatively short. To overcome these issues, we first adapt a meta-learning scheme to the dialogue domain which stabilizes the ability of the model to perform well under various prompts. We additionally design a novel training method to improve upon vanilla retrieval mechanisms to find ideal in-context examples. Finally, we introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query. In effect, we are able to achieve highly competitive results for few-shot DST on MultiWOZ.
Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer ; In this paper, we study the problem of transferring available Markov Decision Process (MDP) models to learn and plan efficiently in an unknown but similar MDP. We refer to it as the Model Transfer Reinforcement Learning (MTRL) problem. First, we formulate MTRL for discrete MDPs and Linear Quadratic Regulators (LQRs) with continuous state-action spaces. Then, we propose a generic two-stage algorithm, MLEMTRL, to address the MTRL problem in discrete and continuous settings. In the first stage, MLEMTRL uses a constrained Maximum Likelihood Estimation (MLE)-based approach to estimate the target MDP model using a set of known MDP models. In the second stage, using the estimated target MDP model, MLEMTRL deploys a model-based planning algorithm appropriate for the MDP class. Theoretically, we prove worst-case regret bounds for MLEMTRL both in realisable and non-realisable settings. We empirically demonstrate that MLEMTRL allows faster learning in new MDPs than learning from scratch and achieves near-optimal performance depending on the similarity of the available MDPs and the target MDP.
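The first (model-estimation) stage can be illustrated in the simplest discrete setting: pick the convex combination of two known transition matrices that maximizes the likelihood of transitions observed in the target MDP. This is a bare-bones sketch of the idea only; MLEMTRL itself handles general model sets, constrained MLE, and continuous MDPs, and all names here are ours.

```python
import numpy as np

def mle_two_model_transfer(P1, P2, counts, grid=101):
    """Return the mixture weight w and the model w*P1 + (1-w)*P2 that
    maximize the log-likelihood of observed transition counts[s, s'],
    via a simple grid search over w in [0, 1]."""
    best_w, best_ll = 0.0, -np.inf
    for w in np.linspace(0.0, 1.0, grid):
        P = w * P1 + (1.0 - w) * P2
        ll = np.sum(counts * np.log(P + 1e-12))  # small epsilon guards log(0)
        if ll > best_ll:
            best_w, best_ll = w, ll
    return best_w, best_w * P1 + (1.0 - best_w) * P2
```

The estimated mixture can then be handed to any model-based planner, mirroring the algorithm's second stage.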
What happens before and after: Multi-Event Commonsense in Event Coreference Resolution ; Event coreference models cluster event mentions pertaining to the same real-world event. Recent models rely on contextualized representations to recognize coreference among lexically or contextually similar mentions. However, models typically fail to leverage commonsense inferences, which is particularly limiting for resolving lexically-divergent mentions. We propose a model that extends event mentions with temporal commonsense inferences. Given a complex sentence with multiple events, e.g., "The man killed his wife and got arrested", with the target event "arrested", our model generates plausible events that happen before the target event, such as "the police arrived", and after it, such as "he was sentenced". We show that incorporating such inferences into an existing event coreference model improves its performance, and we analyze the coreferences in which such temporal knowledge is required.
GLUECons: A Generic Benchmark for Learning Under Constraints ; Recent research has shown that integrating domain knowledge into deep learning architectures is effective: it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of models. However, the research community is missing a convened benchmark for systematically evaluating knowledge integration methods. In this work, we create a benchmark that is a collection of nine tasks in the domains of natural language processing and computer vision. In all cases, we model external knowledge as constraints, specify the sources of the constraints for each task, and implement various models that use these constraints. We report the results of these models using a new set of extended evaluation criteria, in addition to the task performances, for a more in-depth analysis. This effort provides a framework for a more comprehensive and systematic comparison of constraint integration techniques and for identifying related research challenges. It will facilitate further research for alleviating some problems of state-of-the-art neural models.
Enhancing Energy System Models Using Better Load Forecasts ; Energy system models require a large amount of technical and economic data, the quality of which significantly influences the reliability of the results. Some of the variables on the important ENTSO-E transparency platform data source, such as transmission system operators' day-ahead load forecasts, are known to be biased. These biases and high errors affect the quality of energy system models. We propose a simple time series model that does not require any input variables other than the load forecast history to significantly improve the transmission system operators' load forecast data on the ENTSO-E transparency platform in real time, i.e., we successively improve each incoming data point. We further present an energy system model developed specifically for the short-term day-ahead market. We show that the improved load data as inputs reduce pricing errors of the model, with strong reductions particularly in times when prices are high and the market is tight.
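To give a flavor of such a correction, here is a deliberately naive rolling-bias baseline. It is not the paper's model: the paper's approach needs only the forecast history, while this illustration also uses the realized loads to estimate the bias, and all names are our own.

```python
import numpy as np

def debias_load_forecast(forecasts, actuals, window=168):
    """Subtract from each forecast the mean forecast error over the
    preceding `window` hours (168 h = one week). Only data from before
    time t is used at step t, so the correction can run in real time
    as each new data point arrives."""
    f = np.asarray(forecasts, dtype=float)
    a = np.asarray(actuals, dtype=float)
    corrected = f.copy()
    for t in range(1, len(f)):
        lo = max(0, t - window)
        corrected[t] = f[t] - np.mean(f[lo:t] - a[lo:t])
    return corrected
```

For a forecast with a constant additive bias, this recovers the actual load exactly once the window contains at least one past error.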
Kinetic models for systems of interacting agents with multiple microscopic states ; We propose and investigate general kinetic models of Boltzmann type with transition probabilities that can describe the simultaneous change of multiple microscopic states of the interacting agents. These models can be applied to many problems in socio-economic sciences, where individuals may change both their compartment and their characteristic kinetic variable, as for instance in kinetic models for epidemics or for international trade with possible transfers of agents. Mathematical properties of our kinetic model are proved, such as existence and uniqueness of a solution for the Cauchy problem in suitable Wasserstein spaces. The quasi-invariant asymptotic regime, leading to simpler kinetic Fokker-Planck-type equations, is investigated and commented on in comparison with other existing models. Some numerical tests are performed in order to show the time evolution of distribution functions and of meaningful macroscopic fields, even in the case of non-constant interaction probabilities.
The Hardness of Optimization Problems on the Weighted Massively Parallel Computation Model ; The topology-aware Massively Parallel Computation (MPC) model has been proposed and studied recently; it enhances the classical MPC model with awareness of the network topology. The work of Hu et al. on the topology-aware MPC model considers only the tree topology. In this paper, a more general case is considered, where the underlying network is a weighted complete graph. We call this model the Weighted Massively Parallel Computation (WMPC) model, and study the problem of minimizing communication cost under it. Two communication cost minimization problems are defined based on different patterns of communication: the Data Redistribution Problem and the Data Allocation Problem. We also define four kinds of objective functions for communication cost, which consider the total cost, the bottleneck cost, the maximum of send and receive cost, and the summation of send and receive cost, respectively. Combining the two problems with the four kinds of objective cost functions, 8 problems are obtained. The hardness results of these 8 problems make up the content of this paper. With rigorous proofs, we show that some of the 8 problems are in P, some are FPT, some are NP-complete, and some are W[1]-complete.
Model-Free and Learning-Free Proprioceptive Humanoid Movement Control ; This paper presents a novel model-free method for humanoid-robot quasi-static movement control. Traditional model-based methods often require precise robot model parameters. Additionally, existing learning-based frameworks often train the policy in simulation environments, thereby indirectly relying on a model. In contrast, we propose a proprioceptive framework based only on sensory outputs. It does not require prior knowledge of a robot's kinematic model or inertial parameters. Our method consists of three steps: (1) planning different pairs of center of pressure (CoP) and foot position objectives within a single cycle; (2) searching around the current configuration by slightly moving the robot's leg joints back and forth while recording the sensor measurements of its CoP and foot positions; (3) updating the robot motion with an optimization algorithm until all objectives are achieved. We demonstrate our approach on a NAO humanoid robot platform. Experimental results show that it can successfully generate stable robot motions.
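Steps (2)-(3) amount to derivative-free search driven purely by sensor readings. A minimal coordinate-search sketch of that loop follows (our illustration, not the paper's optimizer; `measure(q)` stands in for reading the robot's sensors, e.g. the distance of the measured CoP and foot positions from their targets):

```python
import numpy as np

def proprioceptive_search(measure, q0, step=0.01, iters=200, tol=1e-4):
    """Model-free search around the current joint configuration: nudge
    each joint back and forth, keep any move that reduces the sensed
    objective, and shrink the step when no nudge helps. No kinematic
    model or inertial parameters are used -- only `measure(q)`."""
    q = np.asarray(q0, dtype=float).copy()
    best = measure(q)
    for _ in range(iters):
        improved = False
        for j in range(len(q)):
            for d in (+step, -step):
                trial = q.copy()
                trial[j] += d
                val = measure(trial)
                if val < best - 1e-12:
                    q, best, improved = trial, val, True
        if best < tol:
            break
        if not improved:
            step *= 0.5  # refine the search around the current configuration
    return q, best
```

On hardware, `measure` would command the small joint motion and return the error computed from the CoP and foot-position sensors.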
Can We Use Diffusion Probabilistic Models for 3D Motion Prediction ; Following the recent success of the diffusion probabilistic model, its effectiveness in image generation has been actively studied. In this paper, our objective is to evaluate the potential of diffusion probabilistic models for 3D human motionrelated tasks. To this end, this paper presents a study of employing diffusion probabilistic models to predict future 3D human motions from the previously observed motion. Based on the Human 3.6M and HumanEvaI datasets, our results show that diffusion probabilistic models are competitive for both single deterministic and multiple stochastic 3D motion prediction tasks, after finishing a single training process. In addition, we find that diffusion probabilistic models can offer an attractive compromise, since they can strike the right balance between the likelihood and diversity of the predicted future motions. Our code is publicly available on the project website httpssites.google.comviewdiffusionmotionprediction.
A Systematic Analysis of Vocabulary and BPE Settings for Optimal Finetuning of NMT A Case Study of Indomain Translation ; The effectiveness of Neural Machine Translation NMT models largely depends on the vocabulary used at training; small vocabularies can lead to outofvocabulary problems; large ones, to memory issues. Subword SW tokenization has been successfully employed to mitigate these issues. The choice of vocabulary and SW tokenization has a significant impact on both training and finetuning an NMT model. Finetuning is a common practice in optimizing an MT model with respect to new data. However, new data potentially introduces new words or tokens, which, if not taken into consideration, may lead to suboptimal performance. In addition, the distribution of tokens in the new data can differ from the distribution of the original data. As such, the original SW tokenization model could be less suitable for the new data. Through a systematic empirical evaluation, in this work we compare different strategies for SW tokenization and vocabulary generation with the ultimate goal of uncovering an optimal setting for finetuning a domainspecific model. Furthermore, we developed several indomain models, the best of which achieves a 6 BLEU point improvement over the baseline.
How will Language Modelers like ChatGPT Affect Occupations and Industries ; Recent dramatic increases in AI language modeling capabilities have led to many questions about the effect of these technologies on the economy. In this paper we present a methodology to systematically assess the extent to which occupations, industries and geographies are exposed to advances in AI language modeling capabilities. We find that the top occupations exposed to language modeling include telemarketers and a variety of postsecondary teachers such as English language and literature, foreign language and literature, and history teachers. We find the top industries exposed to advances in language modeling are legal services and securities, commodities, and investments. We also find a positive correlation between wages and exposure to AI language modeling.
Do Machine Learning Models Learn Statistical Rules Inferred from Data ; Machine learning models can make critical errors that are easily hidden within vast amounts of data. Such errors often run counter to rules based on human intuition. However, rules based on human knowledge are challenging to scale or to even formalize. We thereby seek to infer statistical rules from the data and quantify the extent to which a model has learned them. We propose a framework SQRL that integrates logicbased methods with statistical inference to derive these rules from a model's training data without supervision. We further show how to adapt models at test time to reduce rule violations and produce more coherent predictions. SQRL generates up to 300K rules over datasets from vision, tabular, and language settings. We uncover up to 158K violations of those rules by stateoftheart models for classification, object detection, and data imputation. Testtime adaptation reduces these violations by up to 68.7 with relative performance improvement up to 32. SQRL is available at httpsgithub.comDebugMLsqrl.
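The rule-inference idea described above can be illustrated with a hypothetical pure-Python sketch (this is not the SQRL implementation): it mines high-support implication rules between feature atoms present in training rows and counts how often a model's predicted rows violate them. The function names and the support threshold are invented for this example.

```python
# Hypothetical sketch: infer simple statistical rules from training data
# and count how often predictions violate them (stand-in, not SQRL).
def infer_rules(rows, min_support=0.95):
    """Infer "a implies b" rules that hold on at least min_support of the
    training rows containing a. Each row is a set of feature atoms."""
    atoms = set().union(*rows)
    rules = []
    for a in atoms:
        with_a = [r for r in rows if a in r]
        for b in atoms - {a}:
            support = sum(b in r for r in with_a) / len(with_a)
            if support >= min_support:
                rules.append((a, b))  # rule: a implies b
    return rules

def count_violations(rules, predicted_rows):
    """A predicted row violates rule (a, b) if it contains a but not b."""
    return sum(1 for r in predicted_rows for a, b in rules
               if a in r and b not in r)
```

The same counting step could then drive test-time adaptation by penalizing predictions that break high-support rules.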
A Physicsbased and Datadriven Approach for Localized Statistical Channel Modeling ; Localized channel modeling is crucial for offline performance optimization of 5G cellular networks, but the existing channel models are for general scenarios and do not capture local geographical structures. In this paper, we propose a novel physicsbased and datadriven localized statistical channel modeling LSCM, which is capable of sensing the physical geographical structures of the targeted cellular environment. The proposed channel modeling solely relies on the reference signal receiving power RSRP of the user equipment, unlike the traditional methods which use full channel impulse response matrices. The key is to build the relationship between the RSRP and the channel's angular power spectrum. Based on it, we formulate the task of channel modeling as a sparse recovery problem where the nonzero entries of the sparse vector indicate the channel paths' powers and angles of departure. A computationally efficient weighted nonnegative orthogonal matching pursuit WNOMP algorithm is devised for solving the formulated problem. Finally, experiments based on synthetic and real RSRP measurements are presented to examine the performance of the proposed method.
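The sparse-recovery step described above can be illustrated with a toy greedy pursuit. This is a hypothetical sketch, not the WNOMP algorithm of the paper: it assumes an orthonormal dictionary so the restricted least-squares update reduces to inner products, and it enforces nonnegativity of the recovered path powers only by stopping at nonpositive correlations.

```python
# Toy greedy pursuit for the sparse-recovery view of channel modeling.
# Assumes orthonormal dictionary atoms; k should not exceed len(atoms).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def nn_omp(y, atoms, k):
    """Greedily pick up to k atoms with the largest positive correlation
    to the residual; returns {atom_index: estimated power}."""
    residual = list(y)
    support = {}
    for _ in range(k):
        corrs = {i: dot(residual, a) for i, a in enumerate(atoms)
                 if i not in support}
        best = max(corrs, key=corrs.get)
        if corrs[best] <= 0:  # nonnegativity: no positive match left
            break
        support[best] = corrs[best]
        residual = [r - corrs[best] * c
                    for r, c in zip(residual, atoms[best])]
    return support
```

In the paper's setting the recovered nonzero entries would correspond to path powers and angles of departure inferred from RSRP measurements.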
A MultiGrained SelfInterpretable SymbolicNeural Model For SingleMultiLabeled Text Classification ; Deep neural networks based on layerstacking architectures have historically suffered from poor inherent interpretability. Meanwhile, symbolic probabilistic models function with clear interpretability, but how to combine them with neural networks to enhance their performance remains to be explored. In this paper, we try to marry these two systems for text classification via a structured language model. We propose a SymbolicNeural model that can learn to explicitly predict class labels of text spans from a constituency tree without requiring any access to spanlevel gold labels. As the structured language model learns to predict constituency trees in a selfsupervised manner, only raw texts and sentencelevel labels are required as training data, which makes it essentially a general constituentlevel selfinterpretable classification model. Our experiments demonstrate that our approach could achieve good prediction accuracy in downstream tasks. Meanwhile, the predicted span labels are consistent with human rationales to a certain degree.
Dynamical systems analysis of fQ gravity ; Modified gravity theories can be used for the description of homogeneous and isotropic cosmological models through the corresponding field equations. These can be cast into systems of autonomous differential equations because of their sole dependence on a well chosen time variable, be it the cosmological time, or an alternative. For that reason a dynamical systems approach offers a reliable route to study those equations. Through a model independent set of variables we are able to study all fQ modified gravity models. The drawback of the procedure is a more complicated constraint equation. However, it allows the dynamical system to be formulated in fewer dimensions than using other approaches. We focus on a recent model of interest, the powerexponential model, and generalise the fluid content of the model.
OpenEnded Medical Visual Question Answering Through Prefix Tuning of Language Models ; Medical Visual Question Answering VQA is an important challenge, as it would lead to faster and more accurate diagnoses and treatment decisions. Most existing methods approach it as a multiclass classification problem, which restricts the outcome to a predefined closedset of curated answers. We focus on openended VQA and motivated by the recent advances in language models consider it as a generative task. Leveraging pretrained language models, we introduce a novel method particularly suited for small, domainspecific, medical datasets. To properly communicate the medical images to the language model, we develop a network that maps the extracted visual features to a set of learnable tokens. Then, alongside the question, these learnable tokens directly prompt the language model. We explore recent parameterefficient finetuning strategies for language models, which allow for resource and dataefficient finetuning. We evaluate our approach on the prime medical VQA benchmarks, namely, Slake, OVQA and PathVQA. The results demonstrate that our approach outperforms existing methods across various training settings while also being computationally efficient.
Software Vulnerability Prediction Knowledge Transferring Between Programming Languages ; Developing automated and smart software vulnerability detection models has been receiving great attention from both research and development communities. One of the biggest challenges in this area is the lack of code samples for all different programming languages. In this study, we address this issue by proposing a transfer learning technique to leverage available datasets and generate a model to detect common vulnerabilities in different programming languages. We use C source code samples to train a Convolutional Neural Network CNN model, then we use Java source code samples to adapt and evaluate the learned model. We use code samples from two benchmark datasets NIST Software Assurance Reference Dataset SARD and Draper VDISC dataset. The results show that the proposed model detects vulnerabilities in both C and Java code with an average recall of 72. Additionally, we employ explainable AI to investigate how much each feature contributes to the knowledge transfer mechanisms between C and Java in the proposed model.
Transformerbased World Models Are Happy With 100k Interactions ; Deep neural networks have been successful in many reinforcement learning settings. However, compared to human learners they are overly data hungry. To build a sampleefficient world model, we apply a transformer to realworld episodes in an autoregressive manner not only the compact latent states and the taken actions but also the experienced or predicted rewards are fed into the transformer, so that it can attend flexibly to all three modalities at different time steps. The transformer allows our world model to access previous states directly, instead of viewing them through a compressed recurrent state. By utilizing the TransformerXL architecture, it is able to learn longterm dependencies while staying computationally efficient. Our transformerbased world model TWM generates meaningful, new experience, which is used to train a policy that outperforms previous modelfree and modelbased reinforcement learning algorithms on the Atari 100k benchmark.
BeamAttack Generating Highquality Textual Adversarial Examples through Beam Search and Mixed Semantic Spaces ; Natural language processing models based on neural networks are vulnerable to adversarial examples. These adversarial examples are imperceptible to human readers but can mislead models into making wrong predictions. In a blackbox setting, an attacker can fool the model without knowing its parameters and architecture. Previous works on wordlevel attacks widely use a single semantic space and greedy search as the search strategy. However, these methods fail to balance the attack success rate, the quality of adversarial examples and time consumption. In this paper, we propose BeamAttack, a textual attack algorithm that makes use of mixed semantic spaces and improved beam search to craft highquality adversarial examples. Extensive experiments demonstrate that BeamAttack can improve the attack success rate while saving numerous queries and time, e.g., improving the attack success rate by up to 7 over greedy search when attacking examples from the MR dataset. Compared with heuristic search, BeamAttack can save at most 85 model queries and achieve a competitive attack success rate. The adversarial examples crafted by BeamAttack are highly transferable and can effectively improve the model's robustness during adversarial training. Code is available at httpsgithub.comzhuhaiustcbeamattacktreemaster
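The beam search over word-level substitutions can be sketched in miniature. Everything below is a stand-in: the synonym table replaces the paper's mixed semantic spaces, and the score function replaces queries to a victim model (e.g. the probability assigned to the wrong class).

```python
# Toy beam search over word substitutions, in the spirit of BeamAttack.
# `score` is a stand-in for victim-model feedback; higher means a more
# successful adversarial candidate.
def beam_attack(words, synonyms, score, beam_width=2):
    """At each position, try the original word and its substitutes for
    every sentence in the beam, then keep the top-scoring candidates."""
    beam = [list(words)]
    for i, w in enumerate(words):
        candidates = []
        for sent in beam:
            for sub in [w] + synonyms.get(w, []):
                candidates.append(sent[:i] + [sub] + sent[i + 1:])
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]
```

With beam_width set to 1 this degenerates to the greedy search used by prior word-level attacks; a wider beam trades extra candidates for a higher chance of finding a successful substitution.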
DualFair Fair Representation Learning at Both Group and Individual Levels via Contrastive Selfsupervision ; Algorithmic fairness has become an important machine learning problem, especially for missioncritical Web applications. This work presents a selfsupervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations. Unlike existing models that target a single type of fairness, our model jointly optimizes for two fairness criteria group fairness and counterfactual fairness and hence makes fairer predictions at both the group and individual levels. Our model uses contrastive loss to generate embeddings that are indistinguishable for each protected group, while forcing the embeddings of counterfactual pairs to be similar. It then uses a selfknowledge distillation method to maintain the quality of representation for the downstream tasks. Extensive analysis over multiple datasets confirms the model's validity and further shows the synergy of jointly addressing two fairness criteria, suggesting the model's potential value in fair intelligent Web applications.
Distributionfree Deviation Bounds of Learning via Model Selection with Crossvalidation Risk Estimation ; Crossvalidation techniques for risk estimation and model selection are widely used in statistics and machine learning. However, understanding of the theoretical properties of learning via model selection with crossvalidation risk estimation remains quite limited in the face of its widespread use. In this context, this paper presents learning via model selection with crossvalidation risk estimation as a general systematic learning framework within classical statistical learning theory and establishes distributionfree deviation bounds in terms of VC dimension, giving detailed proofs of the results and considering both bounded and unbounded loss functions. We also deduce conditions under which the deviation bounds of learning via model selection are tighter than those of learning via empirical risk minimization in the whole hypotheses space, supporting the better performance of model selection frameworks observed empirically in some instances.
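Learning via model selection with crossvalidation risk estimation can be sketched concretely as follows; the candidate models and the squared-error risk here are illustrative choices, not part of the paper.

```python
# Minimal sketch of model selection via k-fold cross-validation risk
# estimation. `models` maps a name to a fit function that takes training
# pairs (x, y) and returns a predictor x -> y_hat.
def kfold_risk(data, fit, k=5):
    """Average held-out squared error over k folds."""
    folds = [data[i::k] for i in range(k)]
    risks = []
    for i in range(k):
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        predict = fit(train)
        test = folds[i]
        risks.append(sum((predict(x) - y) ** 2 for x, y in test) / len(test))
    return sum(risks) / k

def select_model(data, models, k=5):
    """Return the model name minimizing the estimated CV risk."""
    return min(models, key=lambda name: kfold_risk(data, models[name], k))
```

The paper's deviation bounds concern how far such an estimated risk can stray from the true risk of the selected hypothesis, uniformly over the candidate models.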
Understanding Posthoc Explainers The Case of Anchors ; In many scenarios, the interpretability of machine learning models is a highly required but difficult task. To explain the individual predictions of such models, local modelagnostic approaches have been proposed. However, the process generating the explanations can be, for a user, as mysterious as the prediction to be explained. Furthermore, interpretability methods frequently lack theoretical guarantees, and their behavior even on simple models is often unknown. While it is difficult, if not impossible, to ensure that an explainer behaves as expected on a cuttingedge model, we can at least ensure that everything works on simple, already interpretable models. In this paper, we present a theoretical analysis of Anchors Ribeiro et al., 2018, a popular rulebased interpretability method that highlights a small set of words to explain a text classifier's decision. After formalizing its algorithm and providing useful insights, we demonstrate mathematically that Anchors produces meaningful results when used with linear text classifiers on top of a TFIDF vectorization. We believe that our analysis framework can aid in the development of new explainability methods based on solid theoretical foundations.
Cosmological implication of fT gravity models through phase space analysis ; In this paper, we have performed the dynamical system analysis of fT gravity cosmological models at both background and perturbation levels. We have presented three models pertaining to three distinct functional forms of fT. The first form is that of the logarithmic form of the torsion scalar T, the second one is in the power law form, and the third one is the combination of the first two forms. For all these three forms of fT, we have derived the corresponding cosmological parameters in terms of the dynamical variables. Subsequently, the critical points are obtained and the conditions for their existence have been derived. Critical points of each model have been analysed individually and the corresponding cosmology is derived. The stability behaviour of these critical points is discussed from the behaviour of the eigenvalues and the phase portraits. At least one stable node has been obtained in each of these models. Further, from the evolution plots of the cosmological parameters, the accelerating behaviour of the cosmological models is also verified.
The WaveParticle Duality in a Quantum Heat Engine ; According to the waveparticle duality WPD, quantum systems show both particle and wavelike behavior, and cannot be described using only one of these classical concepts. Identifying quantum features that cannot be reproduced by any classical means is key for quantum technology. This task is often pursued by comparing the quantum system of interest to a suitable classical counterpart. However, the WPD implies that a comparison to a single classical model is generally insufficient; at least one wave and one particle model should be considered. Here we exploit this insight and contrast a bosonic quantum heat engine with two classical counterparts, one based on waves and one based on particles. While both classical models reproduce the average output power of the quantum engine, neither reproduces its fluctuations. The wave model fails to capture the vacuum fluctuations while the particle model cannot reproduce bunching to its full extent. We find regimes where wave and particle descriptions agree with the quantum one, as well as a regime where neither classical model is adequate, revealing the role of the WPD in nonequilibrium bosonic transport.
Speech Modeling with a Hierarchical Transformer Dynamical VAE ; The dynamical variational autoencoders DVAEs are a family of latentvariable deep generative models that extends the VAE to model a sequence of observed data and a corresponding sequence of latent vectors. In almost all the DVAEs of the literature, the temporal dependencies within each sequence and across the two sequences are modeled with recurrent neural networks. In this paper, we propose to model speech signals with the Hierarchical Transformer DVAE HiTDVAE, which is a DVAE with two levels of latent variable sequencewise and framewise and in which the temporal dependencies are implemented with the Transformer architecture. We show that HiTDVAE outperforms several other DVAEs for speech spectrogram modeling, while enabling a simpler training procedure, revealing its high potential for downstream lowlevel speech processing tasks such as speech enhancement.
Explainable GeoAI Can saliency maps help interpret artificial intelligence's learning process An empirical study on natural feature detection ; Improving the interpretability of geospatial artificial intelligence GeoAI models has become critically important to open the black box of complex AI models, such as deep learning. This paper compares popular saliency map generation techniques and their strengths and weaknesses in interpreting GeoAI and deep learning models' reasoning behaviors, particularly when applied to geospatial analysis and image processing tasks. We surveyed two broad classes of model explanation methods: perturbationbased and gradientbased methods. The former identifies important image areas, which help machines make predictions by modifying a localized area of the input image. The latter evaluates the contribution of every single pixel of the input image to the model's prediction results through gradient backpropagation. In this study, three algorithms, the occlusion method, the integrated gradients method, and the class activation map method, are examined for a natural feature detection task using deep learning. The algorithms' strengths and weaknesses are discussed, and the consistency between modellearned and humanunderstandable concepts for object recognition is also compared. The experiments used two GeoAIready datasets to demonstrate the generalizability of the research findings.
Econotaxis in modeling urbanization by labor force migration ; Individual participants in human society collectively exhibit aggregation behavior. In this study, we present a simple microscopic model of labor force migration by employing the active Brownian particles framework. Through agentbased simulations, we find that our model produces clusters of agents from a random initial distribution. Furthermore, two empirical regularities called Zipf's and Okun's laws were observed in our model. To reveal the mechanism underlying the reproduced agglomeration phenomena, we derived an extended KellerSegel system, a classic model that describes the aggregation behavior of biological organisms called taxis, from our microscopic model. The obtained macroscopic system indicates that the agglomeration of the workforce in the real world can be accounted for through a new type of taxis central to human behavior, which highlights the relevance of urbanization to blowup phenomena in the derived PDE system. We term it econotaxis.
Rethinking WhiteBox Watermarks on Deep Learning Models under Neural Structural Obfuscation ; Copyright protection for deep neural networks DNNs is an urgent need for AI corporations. To trace illegally distributed model copies, DNN watermarking is an emerging technique for embedding and verifying secret identity messages in the prediction behaviors or the model internals. Sacrificing less functionality and involving more knowledge about the target DNN, the latter branch, called whitebox DNN watermarking, is believed to be accurate, credible and secure against most known watermark removal attacks, with emerging research efforts in both the academy and the industry. In this paper, we present the first systematic study on how the mainstream whitebox DNN watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons, a group of neurons which can be added to a target model but leave the model behavior invariant. Devising a comprehensive framework to automatically generate and inject dummy neurons with high stealthiness, our novel attack intensively modifies the architecture of the target model to inhibit the success of watermark verification. With extensive evaluation, our work for the first time shows that nine published watermarking schemes require amendments to their verification procedures.
Neuralprior stochastic block model ; The stochastic block model SBM is widely studied as a benchmark for graph clustering aka community detection. In practice, graph data often come with node attributes that bear additional information about the communities. Previous works modeled such data by considering that the node attributes are generated from the node community memberships. In this work, motivated by a recent surge of works in signal processing using deep neural networks as priors, we propose to model the communities as being determined by the node attributes rather than the opposite. We define the corresponding model; we call it the neuralprior SBM. We propose an algorithm, stemming from statistical physics, based on a combination of belief propagation and approximate message passing. We analyze the performance of the algorithm as well as the Bayesoptimal performance. We identify detectability and exact recovery phase transitions, as well as an algorithmically hard region. The proposed model and algorithm can be used as a benchmark for both theory and algorithms. To illustrate this, we compare the optimal performances to the performance of simple graph neural networks.
Symmetryadapted modeling for molecules and crystals ; We have developed a symmetryadapted modeling procedure for molecules and crystals. By using the completeness of multipoles to express spatial and timereversal parityspecific anisotropic distributions, we can generate systematically the complete symmetryadapted multipole basis set to describe any of the electronic degrees of freedom in isolated cluster systems and periodic crystals. The symmetryadapted modeling is then achieved by expressing the Hamiltonian in terms of the linear combination of these bases belonging to the identity irreducible representation, and the model parameters, the linear coefficients in the Hamiltonian, can be determined so as to reproduce the electronic structures given by the densityfunctional computation. We demonstrate our method for the modeling of graphene, and emphasize the usefulness of the symmetryadapted basis to analyze and predict physical phenomena and spontaneous symmetry breaking in a phase transition. The present method is complementary to the de facto standard Wannier tightbinding modeling, and it provides us with a fundamental basis to develop a symmetrybased analysis for materials science.
Fitting Lowrank Models on Egocentrically Sampled Partial Networks ; The statistical modeling of random networks has been widely used to uncover interaction mechanisms in complex systems and to predict unobserved links in realworld networks. In many applications, network connections are collected via egocentric sampling a subset of nodes is sampled first, after which all links involving this subset are recorded; all other information is missing. Compared with the assumption of uniformly missing at random, egocentrically sampled partial networks require specially designed modeling strategies. Current statistical methods are either computationally infeasible or based on intuitive designs without theoretical justification. Here, we propose an approach to fit general lowrank models for egocentrically sampled networks, which include several popular network models. This method is based on graph spectral properties and is computationally efficient for largescale networks. It results in consistent recovery of missing subnetworks due to egocentric sampling for sparse networks. To our knowledge, this method offers the first theoretical guarantee for egocentric partial network estimation in the scope of lowrank models. We evaluate the technique on several synthetic and realworld networks and show that it delivers competitive performance in link prediction tasks.
Manipulating Transfer Learning for Property Inference ; Transfer learning is a popular method for tuning pretrained upstream models for different downstream tasks using limited data and computational resources. We study how an adversary with control over an upstream model used in transfer learning can conduct property inference attacks on a victim's tuned downstream model, for example, to infer the presence of images of a specific individual in the downstream training set. We demonstrate attacks in which an adversary can manipulate the upstream model to conduct highly effective and specific property inference attacks AUC score 0.9, without incurring significant performance loss on the main task. The main idea of the manipulation is to make the upstream model generate activations intermediate features with different distributions for samples with and without a target property, thus enabling the adversary to distinguish easily between downstream models trained with and without training examples that have the target property. Our code is available at httpsgithub.comyulongt23TransferInference.
Enabling Calibration In The ZeroShot Inference of Large VisionLanguage Models ; Calibration of deep learning models is crucial to their trustworthiness and safe usage, and as such, has been extensively studied in supervised classification models, with methods crafted to decrease miscalibration. However, there has yet to be a comprehensive study of the calibration of visionlanguage models that are used for zeroshot inference, like CLIP. We measure calibration across relevant variables like prompt, dataset, and architecture, and find that zeroshot inference with CLIP is miscalibrated. Furthermore, we propose a modified version of temperature scaling that is aligned with the common use cases of CLIP as a zeroshot inference model, and show that a single learned temperature generalizes for each specific CLIP model defined by a chosen pretraining dataset and architecture across inference dataset and prompt choice.
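The single-temperature calibration described above can be sketched minimally: fit one temperature by minimizing the negative log-likelihood of held-out logits. The grid search here stands in for the gradient-based fitting typically used in temperature scaling, and the data are synthetic; this is an illustration of the general technique, not the paper's modified variant.

```python
import math

# Sketch of single-temperature calibration for classification logits.
def softmax(logits, T):
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def nll(data, T):
    """Average negative log-likelihood; data is a list of (logits, label)."""
    return -sum(math.log(softmax(lg, T)[y]) for lg, y in data) / len(data)

def fit_temperature(data, grid=None):
    """Pick the temperature in the grid minimizing held-out NLL."""
    grid = grid or [0.1 * i for i in range(1, 51)]  # T in (0, 5]
    return min(grid, key=lambda T: nll(data, T))
```

For an overconfident model (large logit margins but imperfect accuracy), the fitted temperature comes out above 1, flattening the predicted probabilities toward the empirical accuracy.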
Staggered bosons ; A model with a half boson degree of freedom per lattice site in one dimension is developed. The boson is protected from developing a gap by translation symmetry while the left movers are at zero quasimomentum, the associated right movers are at the midpoint of the quasimomentum period. The model has different properties depending on whether a periodic lattice has an even or an odd number of sites, and similar features are found for open boundary conditions. A special case of the nonlinear half boson model where even and odd lattice sites contribute differently to the Hamiltonian gives rise to the Toda chain, and a more symmetric generalization of the Toda chain is found. Upon periodic identifications of the half boson degrees of freedom under a shift, the total Hilbert space has a finite dimension and can be encoded in finitely many qubits per unit length. This way one finds interesting critical spin chains, examples of which include the critical Ising model in a transverse magnetic field and the 3state Potts model at criticality. Extensions to higher dimensions are considered. Models obtained this way automatically produce dynamical systems of gapless fractons.
Scaling Expert Language Models with Unsupervised Domain Discovery ; Large language models are typically trained densely: all parameters are updated with respect to all inputs. This requires synchronization of billions of parameters across thousands of GPUs. We introduce a simple but effective method to asynchronously train large, sparse language models on arbitrary text corpora. Our method clusters a corpus into sets of related documents, trains a separate expert language model on each cluster, and combines them in a sparse ensemble for inference. This approach generalizes embarrassingly parallel training by automatically discovering the domains for each expert, and eliminates nearly all the communication overhead of existing sparse language models. Our technique outperforms dense baselines on multiple corpora and fewshot tasks, and our analysis shows that specializing experts to meaningful clusters is key to these gains. Performance also improves with the number of experts and the size of training data, suggesting this is a highly efficient and accessible approach to training large language models.
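The cluster-then-train-experts recipe can be caricatured in a few lines. The overlap-based routing and unigram "expert LMs" below are deliberate stand-ins for the paper's corpus clustering and sparse ensembling; all names are invented for the example.

```python
from collections import Counter

# Toy sketch: group documents into clusters, train one unigram "expert"
# per cluster, and route each query to its nearest expert at inference.
def nearest_cluster(doc, centroids):
    """Pick the cluster whose seed words overlap the document most."""
    def overlap(a, b):
        return len(set(a) & set(b))
    return max(centroids, key=lambda c: overlap(doc, centroids[c]))

def train_experts(docs, centroids):
    """Each expert is a unigram count model over its cluster's documents."""
    experts = {c: Counter() for c in centroids}
    for doc in docs:
        experts[nearest_cluster(doc, centroids)].update(doc)
    return experts

def expert_prob(query, word, experts, centroids):
    """Route the query to one expert and return its unigram probability."""
    lm = experts[nearest_cluster(query, centroids)]
    total = sum(lm.values())
    return lm[word] / total if total else 0.0
```

Because each expert only ever sees its own cluster, the experts can be trained fully asynchronously, which is the source of the communication savings described above.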
Exploring Continual Learning of Diffusion Models ; Diffusion models have achieved remarkable success in generating highquality images thanks to their novel training procedures applied to unprecedented amounts of data. However, training a diffusion model from scratch is computationally expensive. This highlights the need to investigate the possibility of training these models iteratively, reusing computation while the data distribution changes. In this study, we take the first step in this direction and evaluate the continual learning CL properties of diffusion models. We begin by benchmarking the most common CL methods applied to Denoising Diffusion Probabilistic Models DDPMs, where we note the strong performance of experience replay with a reduced rehearsal coefficient. Furthermore, we provide insights into the dynamics of forgetting, which exhibit diverse behavior across diffusion timesteps. We also uncover certain pitfalls of using the bitsperdimension metric for evaluating CL.
Flatband ferromagnetism in the SUN Hubbard and Kondo lattice models ; We develop a general theory of flatband ferromagnetism in the SUN FermiHubbard model, which describes the behavior of Ncomponent fermions with SUN symmetric interactions. We focus on the case where the singleparticle spectrum has a flat band and establish a necessary and sufficient condition for the SUN Hubbard model to exhibit ferromagnetism when the number of particles is the same as the degeneracy. We show that the occurrence of ferromagnetism is equivalent to the irreducibility of the projection matrix onto the space of singleparticle ground states. We also demonstrate that this result can be exploited to establish a rigorous result for the ferromagnetic SUN Kondo lattice model with a flat band. Specifically, we prove that when the SUN Hubbard model is ferromagnetic, the ferromagnetic SUN Kondo lattice model with the same hopping matrix also exhibits SUN ferromagnetism.
Are Datadriven Explanations Robust against Outofdistribution Data ; As blackbox models increasingly power highstakes applications, a variety of datadriven explanation methods have been introduced. Meanwhile, machine learning models are constantly challenged by distributional shifts. A question naturally arises: are datadriven explanations robust against outofdistribution data? Our empirical results show that even though the model predicts correctly, it might still yield unreliable explanations under distributional shifts. How can we develop robust explanations against outofdistribution data? To address this problem, we propose an endtoend modelagnostic learning framework Distributionally Robust Explanations DRE. The key idea is, inspired by selfsupervised learning, to fully utilize the interdistribution information to provide supervisory signals for the learning of explanations without human annotation. Can robust explanations benefit the model's generalization capability? We conduct extensive experiments on a wide range of tasks and data types, including classification and regression on image and scientific tabular data. Our results demonstrate that the proposed method significantly improves the model's performance in terms of explanation and prediction robustness against distributional shifts.
ProtFIM FillinMiddle Protein Sequence Design via Protein Language Models ; Protein language models pLMs, pretrained via causal language modeling on protein sequences, have been a promising tool for protein sequence design. In realworld protein engineering, there are many cases where the amino acids in the middle of a protein sequence are optimized while maintaining other residues. Unfortunately, because of the lefttoright nature of pLMs, existing pLMs modify suffix residues by prompting prefix residues, which is insufficient for the infilling task that considers the whole surrounding context. To find more effective pLMs for protein engineering, we design a new benchmark, Secondary structurE InFilling rEcoveRy SEIFER, which approximates infilling sequence design scenarios. With the evaluation of existing models on the benchmark, we reveal the weakness of existing language models and show that language models trained via the fillinmiddle transformation, called ProtFIM, are more appropriate for protein engineering. Also, we show that ProtFIM generates protein sequences with decent protein representations through exhaustive experiments and visualizations.
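The fillinmiddle transformation is simple to state: the middle span is cut out and moved to the end, so a left-to-right model sees both the prefix and the suffix before generating the infill. A minimal sketch, with sentinel token strings chosen for illustration (the actual special tokens are model-specific):

```python
def fim_transform(seq, start, end, prefix_tok="<PRE>", middle_tok="<MID>", suffix_tok="<SUF>"):
    # rearrange (prefix, middle, suffix) so a causal LM can infill:
    # the middle span is moved after its surrounding context, and the
    # model is trained to generate it conditioned on both sides
    prefix, middle, suffix = seq[:start], seq[start:end], seq[end:]
    return prefix_tok + prefix + suffix_tok + suffix + middle_tok + middle
```

At inference the model is prompted with everything up to and including the middle sentinel, so the generated residues are conditioned on the full surrounding context.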
Label Propagation via Random Walk for Training Robust Thalamus Nuclei Parcellation Model from Noisy Annotations ; Datadriven thalamic nuclei parcellation depends on highquality manual annotations. However, the small size of and low contrast among thalamic nuclei yield annotations that are often incomplete, noisy, or ambiguously labelled. To train a robust thalamic nuclei parcellation model with noisy annotations, we propose a label propagation algorithm based on the random walker algorithm to refine the annotations before model training. A twostep model was trained to generate first the whole thalamus mask and then the nuclei masks. We conducted experiments on a mild traumatic brain injury mTBI dataset with noisy thalamic nuclei annotations. Our model outperforms current stateoftheart thalamic nuclei parcellation methods by a clear margin. We believe our method can also facilitate the training of other parcellation models with noisy labels.
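The refinement step can be pictured as labels diffusing from confident seed voxels to unlabeled ones over an adjacency graph. The sketch below is a deliberately simplified iterative majority-vote stand-in for the random-walker probabilities, not the paper's exact formulation:

```python
def propagate(labels, adjacency, n_iter=10):
    # labels: dict node -> label for seed nodes; unlabeled nodes absent.
    # Repeatedly assign each unlabeled node the majority label among its
    # already-labeled neighbors (a crude stand-in for random-walker
    # hitting probabilities on the voxel graph).
    labels = dict(labels)
    for _ in range(n_iter):
        updates = {}
        for v in range(len(adjacency)):
            if v in labels:
                continue
            votes = [labels[u] for u in adjacency[v] if u in labels]
            if votes:
                updates[v] = max(set(votes), key=votes.count)
        if not updates:
            break
        labels.update(updates)
    return labels
```

The true random walker instead solves a linear system for the probability that a walker started at each voxel first reaches each seed class, which handles ambiguous boundaries more gracefully than hard votes.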
Evaluation of GPT and BERTbased models on identifying proteinprotein interactions in biomedical text ; Detecting proteinprotein interactions PPIs is crucial for understanding genetic mechanisms, disease pathogenesis, and drug design. However, with the fastpaced growth of biomedical literature, there is a growing need for automated and accurate extraction of PPIs to facilitate scientific knowledge discovery. Pretrained language models, such as generative pretrained transformer GPT and bidirectional encoder representations from transformers BERT, have shown promising results in natural language processing NLP tasks. We evaluated the PPI identification performance of various GPT and BERT models using a manually curated benchmark corpus of 164 PPIs in 77 sentences from learning language in logic LLL. BERTbased models achieved the best overall performance, with PubMedBERT achieving the highest precision 85.17 and F1score 86.47 and BioMALBERT achieving the highest recall 93.83. Despite not being explicitly trained for biomedical texts, GPT4 achieved comparable performance to the best BERT models with 83.34 precision, 76.57 recall, and 79.18 F1score. These findings suggest that GPT models can effectively detect PPIs from text data and have the potential for use in biomedical literature mining tasks.
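The reported precision, recall, and F1 combine true/false positives and false negatives in the standard way; a minimal helper (the counts in the usage example are illustrative, not the LLL corpus tallies):

```python
def prf1(tp, fp, fn):
    # precision: fraction of predicted PPIs that are real
    # recall: fraction of real PPIs that are predicted
    # F1: harmonic mean of the two
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```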
Robust Deep Learning Framework for ConstitutiveRelation Modeling ; Modeling the fullrange deformation behaviors of materials under complex loading and materials conditions is a significant challenge for constitutive relations CRs modeling. We propose a general encoderdecoder deep learning framework that can model highdimensional stressstrain data and complex loading histories with robustness and universal capability. The framework employs an encoder to project highdimensional input information e.g., loading history, loading conditions, and materials information to a lowerdimensional hidden space and a decoder to map the hidden representation to the stress of interest. We evaluated various encoder architectures, including gated recurrent unit GRU, GRU with attention, temporal convolutional network TCN, and the Transformer encoder, on two complex stressstrain datasets that were designed to include a wide range of complex loading histories and loading conditions. All architectures achieved excellent test results with an RMSE below 1 MPa. Additionally, we analyzed the capability of the different architectures to make predictions on outofdomain applications, with an uncertainty estimation based on deep ensembles. The proposed approach provides a robust alternative to empiricalsemiempirical models for CRs modeling, offering the potential for more accurate and efficient materials design and optimization.
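The deep-ensemble uncertainty estimate mentioned at the end reduces to a mean and spread over independently trained models. A minimal sketch in which the `models` are plain callables standing in for trained encoder-decoder networks:

```python
def ensemble_predict(models, x):
    # deep-ensemble prediction: average the member outputs and report the
    # spread across members as an epistemic-uncertainty proxy
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var ** 0.5
```

Large spread on out-of-domain loading histories is exactly the signal used to flag predictions that extrapolate beyond the training data.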
An Interpretable Loan Credit Evaluation Method Based on Rule Representation Learner ; The interpretability of models has become one of the obstacles to their wide application in highstakes fields. The usual way to obtain interpretability is to build a blackbox model first and then explain it using posthoc methods. However, the explanations provided by posthoc methods are not always reliable. Instead, we design an intrinsically interpretable model based on RRL Rule Representation Learner for the Lending Club dataset. Specifically, features are divided into three categories according to their own characteristics, and three subnetworks are built respectively, each of which is similar to a neural network with a single hidden layer but can be equivalently converted into a set of rules. During training, we adopt techniques from previous research to effectively train the binary weights. Finally, our model is compared with treebased models. The results show that our model is much better than the interpretable decision tree in performance and close to other blackbox models, which is of practical significance to both financial institutions and borrowers. More importantly, our model is used to test the correctness of the explanations generated by posthoc methods; the results show that posthoc methods are not always reliable.
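The equivalence between a binary-weight hidden unit and a rule is the key to the model's interpretability: a unit that fires only when every selected binarized feature is on reads directly as a conjunction. A minimal sketch (the feature names are hypothetical, and this is a simplified view of the RRL conversion):

```python
def neuron_to_rule(weights, threshold, feature_names):
    # a neuron with 0/1 weights over binarized features selects a subset of
    # them; if the firing threshold equals the subset size, the neuron is
    # exactly the logical AND of the selected features
    selected = [n for w, n in zip(weights, feature_names) if w == 1]
    if threshold == len(selected):
        return " AND ".join(selected)
    return f"at least {threshold} of ({', '.join(selected)})"
```

Because every hidden unit converts this way, the whole trained subnetwork can be printed as a rule list and audited by a loan officer.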
Effects of Nonminimal Mattergeometry Coupling on Embedding Classone Anisotropic Solutions ; This paper investigates some particular anisotropic star models in $f(\mathcal{R},\mathcal{T},\mathcal{Q})$ gravity, where $\mathcal{Q}=\mathcal{R}_{\omega\alpha}\mathcal{T}^{\omega\alpha}$. We adopt a standard model $f(\mathcal{R},\mathcal{T},\mathcal{Q})=\mathcal{R}+\varpi\mathcal{Q}$, where $\varpi$ indicates a coupling constant. We take spherically symmetric spacetime and develop solutions to the modified field equations corresponding to different choices of the matter Lagrangian by applying the embedding classone scheme. For this purpose, we utilize the MIT bag model equation of state and investigate some physical aspects of compact models such as RXJ 1856-37, 4U 1820-30, Cen X-3, SAX J 1808.4-3658 and Her X-1. We use masses and radii of these stars and employ the vanishing radial pressure condition at the boundary to calculate the value of their respective bag constant $\mathfrak{B}_c$. Further, we fix $\varpi=\pm 4$ to analyze the behavior of the resulting state variables, anisotropy, mass, compactness, surface redshift as well as energy bounds through graphical interpretation for each star model. Two different physical tests are performed to check the stability of the developed solutions. We conclude that $\varpi=4$ is the more suitable choice for the considered modified model to obtain stable structures of the compact bodies.
Characterizing the contribution of dependent features in XAI methods ; Explainable Artificial Intelligence XAI provides tools to help understand how machine learning models work and reach a specific outcome. It helps to increase the interpretability of models and makes them more trustworthy and transparent. In this context, many XAI methods have been proposed, with SHAP and LIME being the most popular. However, these methods assume that the predictors used in the machine learning models are independent, which in general is not necessarily true. Such an assumption casts shadows on the robustness of the XAI outcomes, such as the list of informative predictors. Here, we propose a simple, yet useful proxy that modifies the outcome of any XAI feature ranking method, allowing to account for the dependency among the predictors. The proposed approach has the advantage of being modelagnostic, as well as simple to calculate the impact of each predictor in the model in the presence of collinearity.
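One simple way such a proxy could work is to redistribute each feature's raw importance over the features it is correlated with, so that dependent predictors share credit. The sketch below is a hypothetical illustration of the idea, not the paper's exact formula:

```python
def adjust_for_dependence(importance, corr):
    # spread feature j's raw importance across all features i in proportion
    # to |corr[i][j]|; with an identity correlation matrix (independent
    # predictors) the original ranking is returned unchanged
    n = len(importance)
    adjusted = [0.0] * n
    for j in range(n):
        weights = [abs(corr[i][j]) for i in range(n)]
        z = sum(weights)
        for i in range(n):
            adjusted[i] += importance[j] * weights[i] / z
    return adjusted
```

With two perfectly correlated predictors where SHAP assigned all the credit to one, the adjusted scores split it evenly, which better reflects that either feature carries the same information.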
Mathematical Model for Transmission Dynamics of Tuberculosis in Burundi ; Tuberculosis TB is among the main public health challenges in Burundi. The literature lacks mathematical models for key parameter estimates of TB transmission dynamics in Burundi. In this paper, the susceptibleexposedinfectedrecovered SEIR model is used to investigate the transmission dynamics of tuberculosis in Burundi. Using the next generation method, we calculated the basic reproduction number R0. The model is demonstrated to have a diseasefree equilibrium DFE that is locally and globally asymptotically stable. When the corresponding reproduction threshold quantity exceeds unity, the model enters an endemic equilibrium EE. This means the disease can be controlled through different interventions in Burundi. A sensitivity analysis of the model parameters was also carried out. It shows that the progression rate from latent to infectious had the highest positive sensitivity, which means that R0 increases and decreases proportionally with an increase and a decrease of that progression rate.
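The SEIR dynamics can be sketched with a forward-Euler integration; the parameter values below are illustrative, not the fitted Burundi estimates, and for this simple variant the next-generation method gives R0 = beta/gamma:

```python
def simulate_seir(beta, sigma, gamma, s0, e0, i0, r_init, dt=0.1, steps=1000):
    # forward-Euler integration of the standard SEIR system:
    # S' = -beta*S*I/N, E' = beta*S*I/N - sigma*E,
    # I' = sigma*E - gamma*I, R' = gamma*I  (total population conserved)
    s, e, i, r = s0, e0, i0, r_init
    n = s + e + i + r
    for _ in range(steps):
        new_inf = beta * s * i / n
        de = new_inf - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s -= dt * new_inf
        e += dt * de
        i += dt * di
        r += dt * dr
    return s, e, i, r

def basic_reproduction_number(beta, gamma):
    # R0 from the next-generation method for this SEIR variant
    return beta / gamma
```

The sensitivity claim in the abstract corresponds to varying sigma, the latent-to-infectious progression rate, and observing how the epidemic size responds.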
Multilevel Optimization for Policy Design with AgentBased Epidemic Models ; Epidemiological models can not only be used to forecast the course of a pandemic like COVID19, but also to propose and design nonpharmaceutical interventions such as school and work closing. In general, the design of optimal policies leads to nonlinear optimization problems that can be solved by numerical algorithms. Epidemiological models come in different complexities, ranging from systems of simple ordinary differential equations ODEs to complex agentbased models ABMs. The former allow a fast and straightforward optimization, but are limited in accuracy, detail, and parameterization, while the latter can resolve spreading processes in detail, but are extremely expensive to optimize. We consider policy optimization in a prototypical situation modeled as both ODE and ABM, review numerical optimization approaches, and propose a heterogeneous multilevel approach based on combining a fineresolution ABM and a coarse ODE model. Numerical experiments, in particular with respect to convergence speed, are given for illustrative examples.
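The heterogeneous multilevel idea can be caricatured in a few lines: descend on the cheap coarse (ODE-like) objective and only occasionally evaluate the expensive fine (ABM-like) objective to correct the estimate. This is a deliberately simplified sketch of the coarse/fine interplay, with hypothetical function names, not the paper's algorithm:

```python
def multilevel_optimize(coarse_cost, fine_cost, x0, steps=50, lr=0.1, eps=1e-3):
    # finite-difference gradient descent on the coarse model, with a
    # periodic additive correction measured against the fine model at the
    # current iterate only (so fine evaluations stay rare)
    x = x0
    offset = fine_cost(x) - coarse_cost(x)
    for k in range(steps):
        grad = (coarse_cost(x + eps) - coarse_cost(x - eps)) / (2 * eps)
        x -= lr * grad
        if k % 10 == 9:  # occasional expensive ABM-like evaluation
            offset = fine_cost(x) - coarse_cost(x)
    return x, coarse_cost(x) + offset
```

The design point is the budget split: almost all gradient information comes from the ODE surrogate, while the ABM is queried only often enough to keep the surrogate honest.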
Modelling customer lifetimevalue in the retail banking industry ; Understanding customer lifetime value CLV is key to nurturing longterm customer relationships, however, estimating it is far from straightforward. In the retail banking industry, commonly used approaches rely on simple heuristics and do not take advantage of the high predictive ability of modern machine learning techniques. We present a general framework for modelling customer lifetime value which may be applied to industries with longlasting contractual and productcentric customer relationships, of which retail banking is an example. This framework is novel in facilitating CLV predictions over arbitrary time horizons and productbased propensity models. We also detail an implementation of this model which is currently in production at a large UK lender. In testing, we estimate a 43 improvement in outoftime CLV prediction error relative to a popular baseline approach. Propensity models derived from our CLV model have been used to support customer contact marketing campaigns. In testing, we saw that the top 10 of customers ranked by their propensity to take up investment products were 3.2 times more likely to take up an investment product in the next year than a customer chosen at random.
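A CLV over an arbitrary horizon built from product-level propensities reduces to a discounted sum of expected product income per future period. A minimal sketch of that accounting, with an assumed per-period propensity matrix and a single illustrative discount rate:

```python
def customer_lifetime_value(propensities, product_values, discount_rate=0.1):
    # propensities[t][p] = P(customer takes product p in future period t+1);
    # each period's expected income is discounted back to today
    clv = 0.0
    for t, period in enumerate(propensities, start=1):
        expected = sum(p * v for p, v in zip(period, product_values))
        clv += expected / (1 + discount_rate) ** t
    return clv
```

Extending the horizon just means appending more propensity rows, which is what makes the arbitrary-horizon property fall out of the framework naturally.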
Statistical and computational rates in high rank tensor estimation ; Higherorder tensor datasets arise commonly in recommendation systems, neuroimaging, and social networks. Here we develop provable methods for estimating a possibly high rank signal tensor from noisy observations. We consider a generative latent variable tensor model that incorporates both high rank and low rank models, including but not limited to, simple hypergraphon models, single index models, lowrank CP models, and lowrank Tucker models. Comprehensive results are developed on both the statistical and computational limits for the signal tensor estimation. We find that highdimensional latent variable tensors are of logrank, a fact that explains the pervasiveness of lowrank tensors in applications. Furthermore, we propose a polynomialtime spectral algorithm that achieves the computationally optimal rate. We show that the statisticalcomputational gap emerges only for latent variable tensors of order 3 or higher. Numerical experiments and two real data applications are presented to demonstrate the practical merits of our methods.
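The core spectral step of such algorithms is usually a matricize-truncate-fold operation: unfold the tensor into a matrix, keep the leading singular directions, and fold back. A minimal version of that step, assuming an order-3 tensor unfolded along its first mode (the paper's algorithm has further refinements):

```python
import numpy as np

def spectral_estimate(noisy_tensor, rank):
    # unfold along mode 1, project onto the top-`rank` singular directions
    # (best low-rank approximation by Eckart-Young), and fold back
    shape = noisy_tensor.shape
    unfolded = noisy_tensor.reshape(shape[0], -1)
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    denoised = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return denoised.reshape(shape)
```

On an exactly rank-1 tensor the estimate is exact, and on a noisy observation the truncation discards the noise energy living outside the kept singular directions.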
Proof of a Stable Fixed Point for Strongly Correlated Electron Matter ; We establish the HatsugaiKohmoto model as a stable quartic fixed point distinct from WilsonFisher by computing the betafunction in the presence of perturbing local interactions. In vicinity of the halffilled doped Mott state, the betafunction vanishes for all local interactions regardless of their sign. The only flow away from the HK model is through the superconducting channel which lifts the spin degeneracy as does any ordering tendency. The superconducting instability is identical to that established previously. A corollary of this work is that Hubbard repulsive interactions flow into the HK stable fixed point in the vicinity of halffilling. Consequently, although the HK model has alltoall interactions, nothing local destroys it. The consilience with Hubbard arises because both models break the Z2 symmetry on a Fermi surface, the HK model being the simplest to do so. Indeed, the simplicity of the HK model belies its robustness and generality.
Critical States Generators from Perturbed Flatbands ; Onedimensional allbandsflat lattices are networks with all bands being flat and highly degenerate. They can always be diagonalized by a finite sequence of local unitary transformations parameterized by a set of angles $\theta_i$. In our previous work [lee2023critical], we demonstrated that quasiperiodic perturbations of the onedimensional allbandsflat lattice with $\theta_i = \pi/4$ give rise to a criticaltoinsulator transition and fractality edges separating critical from localized states. In this study we consider the full range of angles $\theta_i$ available for the allbandsflat model and study the effect of the quasiperiodic perturbation. For weak perturbation, we derive an effective Hamiltonian and identify the sets of $\theta_i$ for which the effective model maps to extended or offdiagonal Harper models and hosts critical states. For all other values of the angles the spectrum is localized. Upon increasing the perturbation strength, the extended Harper model evolves into a system with energy dependent criticaltoinsulator transitions, which we dub fractality edges. The case where the effective model maps onto the offdiagonal Harper model features a criticaltoinsulator transition at a finite disorder strength.
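The Harper-type effective models referred to above are easy to diagonalize numerically, and extended versus localized states are distinguished by the inverse participation ratio. A minimal sketch of the standard (diagonal) Aubry-Andre/Harper chain, used here only to illustrate the diagnostic, not the paper's specific effective Hamiltonian:

```python
import numpy as np

def harper_hamiltonian(n, t=1.0, lam=0.5, beta=(np.sqrt(5) - 1) / 2):
    # 1D tight-binding chain with a quasiperiodic on-site potential
    # lam*cos(2*pi*beta*i); for this model lam < 2t gives extended states
    # and lam > 2t gives localized ones
    h = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = -t
    for i in range(n):
        h[i, i] = lam * np.cos(2 * np.pi * beta * i)
    return h

def ipr(state):
    # inverse participation ratio: ~1/n for an extended state,
    # O(1) for a localized one
    p = np.abs(state) ** 2
    return np.sum(p ** 2) / np.sum(p) ** 2
```

Scanning the IPR across the spectrum as the perturbation strength grows is how an energy dependent boundary between critical and localized states, a fractality edge, shows up numerically.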