Dissenting Explanations: Leveraging Disagreement to Reduce Model Overreliance ; While explainability is a desirable characteristic of increasingly complex black-box models, modern explanation methods have been shown to be inconsistent and contradictory. The semantics of explanations is not always fully understood: to what extent do explanations explain a decision, and to what extent do they merely advocate for one? Can we help humans gain insights from explanations accompanying correct predictions, without overrelying on incorrect predictions advocated for by explanations? With this perspective in mind, we introduce the notion of dissenting explanations: conflicting predictions with accompanying explanations. We first explore the advantage of dissenting explanations in the setting of model multiplicity, where multiple models with similar performance may produce different predictions. In such cases, dissenting explanations can be obtained by invoking the explanations of disagreeing models. Through a pilot study, we demonstrate that dissenting explanations reduce overreliance on model predictions without reducing overall accuracy. Motivated by the utility of dissenting explanations, we present both global and local methods for their generation.
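The model-multiplicity recipe lends itself to a compact sketch. The following Python example is my own construction, not the authors' code: a pool of similarly accurate models is built by bootstrap refitting, and a dissenting explanation for a test point is taken from any pool member that disagrees with the primary model, using the per-feature contributions of a linear model as the explanation.

```python
# Bootstrap a pool of similarly accurate models, then surface a dissenting
# explanation for a point where some pool member disagrees with the primary model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
pool = []
for _ in range(20):                              # near-equivalent models (a Rashomon set)
    idx = rng.choice(len(X_tr), len(X_tr), replace=True)
    pool.append(LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx]))

primary, x = pool[0], X_te[:1]
pred = primary.predict(x)[0]
for m in pool[1:]:
    if m.predict(x)[0] != pred:                  # a comparable model that disagrees
        print("primary prediction:", pred, "| dissenter prediction:", m.predict(x)[0])
        print("supporting explanation:", np.round(primary.coef_[0] * x[0], 2))
        print("dissenting explanation:", np.round(m.coef_[0] * x[0], 2))
        break
```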
FedCME: Client Matching and Classifier Exchanging to Handle Data Heterogeneity in Federated Learning ; Data heterogeneity across clients is one of the key challenges in Federated Learning (FL): it may slow down the convergence of the global model and even weaken its performance. Most existing approaches tackle the heterogeneity by constraining local model updates with reference to global information provided by the server, which can alleviate the performance degradation of the aggregated global model. Different from existing methods, we focus on the information exchange between clients, which can also enhance the effectiveness of local training and lead to a high-performance global model. Concretely, we propose a novel FL framework named FedCME, based on client matching and classifier exchanging. In FedCME, clients with large differences in data distribution are matched in pairs, and each pair of clients exchanges classifiers at an intermediate moment of local training. Since the local data determine the direction of local model training, our method can correct the update direction of classifiers and effectively alleviate local update divergence. Besides, we propose feature alignment to enhance the training of the feature extractor. Experimental results demonstrate that FedCME outperforms FedAvg, FedProx, MOON and FedRS on popular federated learning benchmarks, including FMNIST and CIFAR10, when data are heterogeneous.
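A minimal sketch of the two named mechanisms, client matching and classifier exchanging, might look as follows. All details (greedy L1 matching, the exchange point, the feature/classifier split) are assumptions for illustration, not the paper's exact procedure.

```python
# Toy versions of the two mechanisms: greedy pairing by label-distribution
# distance, and swapping classifier heads between matched clients.
import itertools
import numpy as np
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
        self.classifier = nn.Linear(128, 10)
    def forward(self, x):
        return self.classifier(self.feature(x))

def match_clients(label_hists):
    """Greedily pair clients, largest L1 distance between label histograms first."""
    pairs, free = [], set(range(len(label_hists)))
    ranked = sorted(((np.abs(label_hists[i] - label_hists[j]).sum(), i, j)
                     for i, j in itertools.combinations(range(len(label_hists)), 2)),
                    reverse=True)
    for _, i, j in ranked:
        if i in free and j in free:
            pairs.append((i, j))
            free -= {i, j}
    return pairs

def exchange_classifiers(a, b):
    sa = {k: v.clone() for k, v in a.classifier.state_dict().items()}
    sb = {k: v.clone() for k, v in b.classifier.state_dict().items()}
    a.classifier.load_state_dict(sb)
    b.classifier.load_state_dict(sa)

clients = [Net() for _ in range(4)]
hists = np.random.default_rng(0).dirichlet(np.ones(10) * 0.3, size=4)  # heterogeneous labels
for i, j in match_clients(hists):
    # ...each pair trains locally up to an intermediate moment, then swaps heads...
    exchange_classifiers(clients[i], clients[j])
```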
Chang models over derived models with supercompact measures ; Based on earlier work of the third author, we construct a Chang-type model with supercompact measures extending a derived model of a given hod mouse with a regular cardinal $\delta$ that is both a limit of Woodin cardinals and a limit of $\delta$-strong cardinals. The existence of such a hod mouse is consistent relative to a Woodin cardinal that is a limit of Woodin cardinals. We argue that our Chang-type model satisfies $\mathsf{AD}^+ + \mathsf{AD}_{\mathbb{R}} + {}$"$\Theta$ is regular"${} + {}$"$\omega_1$ is $\delta_\infty$-supercompact" for some regular cardinal $\delta_\infty > \Theta$. This complements Woodin's generalized Chang model, which satisfies $\mathsf{AD}^+ + \mathsf{AD}_{\mathbb{R}} + {}$"$\omega_1$ is supercompact", assuming a proper class of Woodin cardinals that are limits of Woodin cardinals.
Reverse Knowledge Distillation: Training a Large Model using a Small One for Retinal Image Matching on Limited Data ; Retinal image matching plays a crucial role in monitoring disease progression and treatment response. However, datasets with matched keypoints between temporally separated pairs of images are not available in abundance to train transformer-based models. We propose a novel approach based on reverse knowledge distillation to train large models with limited data while preventing overfitting. First, we propose architectural modifications to a CNN-based semi-supervised method called SuperRetina that improve its results on a publicly available dataset. Then, we train a computationally heavier model based on a vision transformer encoder using the lighter CNN-based model, which is counterintuitive in the field of knowledge-distillation research, where training lighter models based on heavier ones is the norm. Surprisingly, such reverse knowledge distillation improves generalization even further. Our experiments suggest that high-dimensional fitting in representation space may prevent overfitting, unlike training directly to match the final output. We also provide a public dataset with annotations for retinal image keypoint detection and matching to help the research community develop algorithms for retinal image applications.
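The core training signal, a heavier student fitted to a lighter teacher's representations, can be sketched in a few lines of PyTorch. This is a hedged illustration: the student below is a stand-in MLP rather than an actual vision transformer, and the representation loss (MSE) is an assumption.

```python
# Reverse distillation sketch: the heavier student matches the lighter teacher
# in representation space rather than in output space.
import torch
import torch.nn as nn

cnn_teacher = nn.Sequential(                     # light model, assumed already trained
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
student = nn.Sequential(                         # heavier model to be trained (MLP stand-in for a ViT)
    nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU(), nn.Linear(512, 64))

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
images = torch.randn(8, 3, 32, 32)               # stand-in batch of retinal patches

with torch.no_grad():
    target = cnn_teacher(images)                 # frozen teacher representations
loss = nn.functional.mse_loss(student(images), target)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```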
On the ergodicity of a three-factor CIR model ; This work studies a three-factor model, referred to as the CIR3 model, in which both the trend and the volatility are stochastic and correlated. For this model we prove that a pathwise unique global strong solution exists. We present a generalization of the Feller condition ensuring that each factor remains positive up to a Markov time determined by the preceding factors. The main results of the paper concern the Wasserstein ergodicity of the model, which cannot be obtained by means of the standard Dobrushin theorem, but requires more sophisticated arguments involving topological aspects of Wasserstein spaces and Kolmogorov equations for measures. The same strategy can then be used to prove the Wasserstein ergodicity of the well-known three-factor Chen model.
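For readers unfamiliar with CIR dynamics, the snippet below simulates a single CIR factor with a full-truncation Euler scheme and checks the classical one-factor Feller condition $2\kappa\theta \geq \sigma^2$. It is context only: the CIR3 model couples three such factors, and the paper's generalized Feller condition is not reproduced here.

```python
# Simulate one CIR factor dx = kappa*(theta - x)dt + sigma*sqrt(x)dW
# with a full-truncation Euler scheme, and check the classical Feller condition.
import numpy as np

def simulate_cir(x0, kappa, theta, sigma, T=1.0, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        xp = max(x[k], 0.0)                      # full truncation keeps drift/diffusion well-defined
        x[k + 1] = x[k] + kappa * (theta - xp) * dt \
                   + sigma * np.sqrt(xp * dt) * rng.standard_normal()
    return x

kappa, theta, sigma = 2.0, 0.04, 0.3
print("Feller condition holds:", 2 * kappa * theta >= sigma ** 2)
path = simulate_cir(0.04, kappa, theta, sigma)
print("min of simulated path:", path.min())
```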
Scalable solution to crossed random effects model with random slopes ; The crossed random effects model is widely used, finding applications in various fields such as longitudinal studies, e-commerce, and recommender systems, among others. However, these models encounter scalability challenges, as the computational time for standard algorithms grows superlinearly with the number $N$ of observations in the data set, commonly $\Omega(N^{3/2})$ or worse. Recent work has developed scalable methods for crossed random effects in linear models and some generalized linear models, but those works only allow for random intercepts. In this paper we devise scalable algorithms for models that include random slopes. This problem brings a substantial difficulty: estimating the random effect covariance matrices in a scalable way. We address that issue by using a variational EM algorithm. In simulations, we see that the proposed method is faster than standard methods. It is also more efficient than ordinary least squares, which additionally suffers from greatly underestimating the sampling uncertainty in parameter estimates. We illustrate the new method on a large dataset (five million observations) from the online retailer Stitch Fix.
DEPHN: Different Expression Parallel Heterogeneous Network using virtual gradient optimization for Multi-task Learning ; Recommendation algorithms based on multi-task learning (MTL) are the major method for Internet operators to understand users and predict their behaviors in the multi-behavior scenario of a platform. Task correlation is an important consideration in MTL; traditional models use shared-bottom architectures and gating experts to realize shared representation learning and information differentiation. However, the relationships between real-world tasks are often more complex than existing methods can handle properly when sharing information. In this paper, we propose a Different Expression Parallel Heterogeneous Network (DEPHN) to model multiple tasks simultaneously. DEPHN constructs the experts at the bottom of the model using different feature interaction methods to improve the generalization ability of the shared information flow. To improve the model's ability to differentiate between task information flows, DEPHN uses feature explicit mapping and virtual gradient coefficients for expert gating during the training process, and adaptively adjusts the learning intensity of the gated units by considering the difference of gating values and task correlation. Extensive experiments on artificial and real-world datasets demonstrate that our proposed method can capture task correlation in complex situations and achieve better performance than baseline models. (Accepted at IJCNN 2023.)
Analyzing Chain-of-Thought Prompting in Large Language Models via Gradient-based Feature Attributions ; Chain-of-thought (CoT) prompting has been shown to empirically improve the accuracy of large language models (LLMs) on various question answering tasks. While understanding why CoT prompting is effective is crucial to ensuring that this phenomenon is a consequence of desired model behavior, little work has addressed this; nonetheless, such an understanding is a critical prerequisite for responsible model deployment. We address this question by leveraging gradient-based feature attribution methods, which produce saliency scores that capture the influence of input tokens on model output. Specifically, we probe several open-source LLMs to investigate whether CoT prompting affects the relative importances they assign to particular input tokens. Our results indicate that while CoT prompting does not increase the magnitude of saliency scores attributed to semantically relevant tokens in the prompt, compared to standard few-shot prompting, it increases the robustness of saliency scores to question perturbations and variations in model output.
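A minimal version of the attribution step described above: saliency per input token is the norm of the gradient of the top next-token score with respect to that token's embedding. The model choice (gpt2) is a stand-in for whichever open-source LLMs were probed.

```python
# Token-level gradient saliency: norm of d(score)/d(embedding) per input token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")      # stand-in for the probed LLMs
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("Roger has 5 balls. He buys 2 more. How many", return_tensors="pt").input_ids
emb = model.get_input_embeddings()(ids).detach().requires_grad_(True)
logits = model(inputs_embeds=emb).logits
logits[0, -1].max().backward()                   # score of the most likely next token

saliency = emb.grad[0].norm(dim=-1)              # one score per input token
for t, s in zip(tok.convert_ids_to_tokens(ids[0].tolist()), saliency.tolist()):
    print(f"{t!r:>12}  {s:.4f}")
```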
Domain preserving and strongly converging explicit scheme for the stochastic SIS epidemic model ; In this article, we construct a numerical method for a stochastic version of the Susceptible-Infected-Susceptible (SIS) epidemic model, expressed by a suitable stochastic differential equation (SDE), by applying the semi-discrete method to a suitably transformed process. We prove the strong convergence of the proposed method, with order 1, and examine its stability properties. Since SDEs generally lack analytical solutions, numerical techniques are commonly employed. Hence, the research will seek numerical solutions for existing stochastic models by constructing suitable numerical schemes and comparing them with other schemes, with the objective of achieving a qualitative and efficient approach to solving the equations. Additionally, for models that have not yet been formulated stochastically using SDEs, the research will formulate them appropriately, conduct theoretical analysis of the model properties, and subsequently solve the corresponding SDEs.
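For context, the snippet below integrates the stochastic SIS SDE of Gray et al. (an assumption about the underlying model) with a naive Euler-Maruyama scheme. Unlike the paper's semi-discrete scheme on a transformed process, this naive scheme does not guarantee that $I_t$ stays in the domain $(0, N)$, which is precisely the motivation for the construction above.

```python
# Naive Euler-Maruyama for the stochastic SIS model
#   dI = I(beta*N - mu - gamma - beta*I)dt + sigma*I*(N - I)dW,
# shown only as a baseline; it need not preserve the domain (0, N).
import numpy as np

def sis_euler(I0, N, beta, mu, gamma, sigma, T=10.0, n=10_000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n
    I = np.empty(n + 1)
    I[0] = I0
    for k in range(n):
        drift = I[k] * (beta * N - mu - gamma - beta * I[k])
        diff = sigma * I[k] * (N - I[k])
        I[k + 1] = I[k] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    return I

path = sis_euler(I0=10, N=100, beta=0.005, mu=0.1, gamma=0.2, sigma=0.001)
print("stayed in (0, N):", bool((path > 0).all() and (path < 100).all()))
```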
MiDaS v3.1 - A Model Zoo for Robust Monocular Relative Depth Estimation ; We release MiDaS v3.1 for monocular depth estimation, offering a variety of new models based on different encoder backbones. This release is motivated by the success of transformers in computer vision, with a large variety of pretrained vision transformers now available. We explore how using the most promising vision transformers as image encoders impacts depth estimation quality and runtime of the MiDaS architecture. Our investigation also includes recent convolutional approaches that achieve comparable quality to vision transformers in image classification tasks. While the previous release, MiDaS v3.0, solely leverages the vanilla vision transformer ViT, MiDaS v3.1 offers additional models based on BEiT, Swin, SwinV2, Next-ViT and LeViT. These models offer different performance-runtime trade-offs. The best model improves the depth estimation quality by 28% while efficient models enable downstream tasks requiring high frame rates. We also describe the general process for integrating new backbones. A video summarizing the work can be found at https://youtu.be/UjaeNNFf9sE and the code is available at https://github.com/isl-org/MiDaS.
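The release is consumable through torch.hub; a typical invocation is sketched below. The entry-point name "DPT_BEiT_L_512" follows the repository's hubconf for the v3.1 backbones, and the transform attribute is guarded with a fallback since attribute names vary across versions.

```python
# Loading a MiDaS v3.1 backbone via torch.hub and running one image.
import cv2
import torch

model_type = "DPT_BEiT_L_512"                    # a v3.1 addition; use "MiDaS_small" for speed
midas = torch.hub.load("intel-isl/MiDaS", model_type)
midas.eval()

transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = getattr(transforms, "beit512_transform", transforms.dpt_transform)

img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img))           # relative (inverse) depth map
print(prediction.shape)
```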
Models of reference production: How do they withstand the test of time? ; In recent years, many NLP studies have focused solely on performance improvement. In this work, we focus on the linguistic and scientific aspects of NLP. We use the task of generating referring expressions in context (REG-in-context) as a case study and start our analysis from GREC, a comprehensive set of shared tasks in English that addressed this topic over a decade ago. We ask what the performance of models would be if we assessed them (1) on more realistic datasets and (2) using more advanced methods. We test the models using different evaluation metrics and feature selection experiments. We conclude that GREC can no longer be regarded as offering a reliable assessment of models' ability to mimic human reference production, because the results are highly impacted by the choice of corpus and evaluation metrics. Our results also suggest that pretrained language models are less dependent on the choice of corpus than classic Machine Learning models, and therefore make more robust class predictions.
Stratified Principal Component Analysis ; This paper investigates a general family of covariance models that stratifies the space of covariance matrices by eigenvalue multiplicity. This family, coined Stratified Principal Component Analysis (SPCA), includes in particular Probabilistic PCA (PPCA) models, where the noise component is assumed to be isotropic. We provide explicit maximum likelihood estimates and a geometric characterization relying on flag manifolds. A key outcome of this analysis is that PPCA's parsimony with respect to the full covariance model is due to the eigenvalue-equality constraint in the noise space and the subsequent inference of a multidimensional eigenspace. The sequential nature of flag manifolds enables this constraint to be extended to the signal space, yielding even more parsimonious models. Moreover, the stratification and the induced partial order on SPCA yield efficient model selection heuristics. Experiments on simulated and real datasets substantiate the interest of equalising adjacent sample eigenvalues when the gaps are small and the number of samples is limited. They notably demonstrate that SPCA models achieve a better complexity/goodness-of-fit trade-off than PPCA.
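The eigenvalue-equalisation idea can be illustrated directly. The snippet below is my paraphrase, not the authors' code: it projects the sample spectrum onto a stratum by averaging blocks of adjacent eigenvalues whose gaps fall below a threshold, then rebuilds the covariance.

```python
# Equalise adjacent sample eigenvalues with small gaps, then reconstruct.
import numpy as np

def stratified_covariance(S, gap_tol=0.05):
    vals, vecs = np.linalg.eigh(S)               # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    out = vals.copy()
    i = 0
    while i < len(vals):
        j = i
        while j + 1 < len(vals) and vals[j] - vals[j + 1] < gap_tol:
            j += 1
        out[i:j + 1] = vals[i:j + 1].mean()      # replace each block by its mean
        i = j + 1
    return vecs @ np.diag(out) @ vecs.T

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))                 # few samples: noisy eigenvalues
S = np.cov(X, rowvar=False)
print(np.round(np.linalg.eigvalsh(stratified_covariance(S))[::-1], 3))
```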
Does Full Waveform Inversion Benefit from Big Data? ; This paper investigates the impact of big data on deep learning models for full waveform inversion (FWI). While it is well known that big data can boost the performance of deep learning models in many tasks, its effectiveness has not been validated for FWI. To address this gap, we present an empirical study of how deep learning models in FWI behave when trained on OpenFWI, a collection of large-scale, multi-structural datasets published recently. In particular, we train and evaluate the FWI models on a combination of 10 2D subsets of OpenFWI that contain 470K data pairs in total. Our experiments demonstrate that larger datasets lead to better performance and generalization of deep learning models for FWI. We further demonstrate that model capacity needs to scale with data size for optimal improvement.
'What are you referring to?' Evaluating the Ability of Multi-Modal Dialogue Models to Process Clarificational Exchanges ; Referential ambiguities arise in dialogue when a referring expression does not uniquely identify the intended referent for the addressee. Addressees usually detect such ambiguities immediately and work with the speaker to repair them using meta-communicative Clarificational Exchanges (CEs): a Clarification Request (CR) and a response. Here, we argue that the ability to generate and respond to CRs imposes specific constraints on the architecture and objective functions of multi-modal, visually grounded dialogue models. We use the SIMMC 2.0 dataset to evaluate the ability of different state-of-the-art model architectures to process CEs, with a metric that probes the contextual updates that arise from them in the model. We find that language-based models are able to encode simple multi-modal semantic information and process some CEs, excelling with those related to the dialogue history, whilst multi-modal models can use additional learning objectives to obtain disentangled object representations, which become crucial to handle complex referential ambiguities across modalities overall.
Quantitative modeling and simulation of biochemical processes in the human body ; We present a whole-body model of human metabolism that utilizes a system of organs and blood vessels to simulate enzymatic reactions. The model focuses on key organs, including the brain, heart and lungs, liver, gut, and kidney, as well as muscle and adipose tissue. The model equations are formulated using stoichiometry and Michaelis-Menten kinetics to describe the enzymatic reactions. We demonstrate how the model can be used to simulate the effects of prolonged fasting and intermittent fasting on selected metabolite concentrations and glucose flux. Furthermore, by simulating intermittent fasting, the effects on carbohydrate, protein, and lipid storage are examined. We propose this method as a simple and intuitive approach for modeling human metabolism, which is general, systematic, and easy to incorporate. This could have potential applications in PK/PD drug development and in understanding metabolic disorders.
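As a minimal illustration of the named ingredients, the snippet below integrates a single Michaelis-Menten reaction inside a stoichiometric ODE with SciPy; parameter values are illustrative, not the paper's.

```python
# One Michaelis-Menten reaction S -> P inside a stoichiometric ODE system.
import numpy as np
from scipy.integrate import solve_ivp

Vmax, Km = 1.0, 0.5                              # illustrative enzyme parameters

def rhs(t, y):
    s, p = y                                     # substrate, product concentrations
    v = Vmax * s / (Km + s)                      # Michaelis-Menten rate law
    return [-v, +v]                              # stoichiometry of S -> P

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
print("substrate at t=10:", round(float(sol.y[0, -1]), 4))
```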
LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs ; We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components. By adopting a hierarchical approach to reasoning, LLMs can provide comprehensive model-level summaries without ever requiring the entire model to fit in context. This approach enables LLMs to apply their extensive background knowledge to automate common tasks in data science, such as detecting anomalies that contradict prior knowledge, describing potential reasons for the anomalies, and suggesting repairs that would remove the anomalies. We use multiple examples in healthcare to demonstrate the utility of these new capabilities of LLMs, with particular emphasis on Generalized Additive Models (GAMs). Finally, we present the package TalkToEBM as an open-source LLM-GAM interface.
CausalOps: Towards an Industrial Lifecycle for Causal Probabilistic Graphical Models ; Causal probabilistic graph-based models have gained widespread utility, enabling the modeling of cause-and-effect relationships across diverse domains. With their rising adoption in new areas such as automotive system safety and machine learning, the need for an integrated lifecycle framework, akin to DevOps and MLOps, has emerged. Currently, a process reference for organizations interested in employing causal engineering is missing. To address this gap and foster widespread industrial adoption, we propose CausalOps, a novel lifecycle framework for causal model development and application. By defining key entities, dependencies, and intermediate artifacts generated during causal engineering, we establish a consistent vocabulary and workflow model. This work contextualizes causal model usage across different stages and stakeholders, outlining a holistic view of creating and maintaining them. CausalOps aims to drive the adoption of causal methods in practical applications within interested organizations and the causality community.
Relational hyperevent models for the coevolution of coauthoring and citation networks ; Interest in the network analysis of bibliographic data has increased significantly in recent years. Yet appropriate statistical models for examining the full dynamics of scientific citation networks, connecting authors to the papers they write and papers to the other papers they cite, are not available. Very few studies have examined how the social network between coauthors and the citation network among papers shape one another and coevolve. As a consequence, our understanding of scientific citation networks remains incomplete. In this paper we extend recently derived relational hyperevent models (RHEM) to the analysis of scientific networks, providing a general framework to model the multiple dependencies involved in the relations linking multiple authors to the papers they write and papers to the multiple references they cite. We demonstrate the empirical value of our model in an analysis of publicly available data on a scientific network comprising millions of authors and papers, and assess the relative strength of various effects explaining scientific production. We outline the implications of the model for the evaluation of scientific research.
Local Large Language Models for Complex Structured Medical Tasks ; This paper introduces an approach that combines the language reasoning capabilities of large language models (LLMs) with the benefits of local training to tackle complex, domain-specific tasks. Specifically, the authors demonstrate their approach by extracting structured condition codes from pathology reports. The proposed approach utilizes local LLMs, which can be fine-tuned to respond to specific generative instructions and provide structured outputs. The authors collected a dataset of over 150k uncurated surgical pathology reports containing gross descriptions, final diagnoses, and condition codes. They trained different model architectures, including LLaMA, BERT, and Longformer, and evaluated their performance. The results show that the LLaMA-based models significantly outperform BERT-style models across all evaluated metrics, even with extremely reduced precision. The LLaMA models performed especially well with large datasets, demonstrating their ability to handle complex, multi-label tasks. Overall, this work presents an effective approach for utilizing LLMs to perform domain-specific tasks using accessible hardware, with potential applications in the medical domain, where complex data extraction and classification are required.
A Multidimensional Analysis of Social Biases in Vision Transformers ; The embedding spaces of image models have been shown to encode a range of social biases such as racism and sexism. Here, we investigate specific factors that contribute to the emergence of these biases in Vision Transformers (ViTs). To this end, we measure the impact of training data, model architecture, and training objectives on social biases in the learned representations of ViTs. Our findings indicate that counterfactual augmentation training using diffusion-based image editing can mitigate biases, but does not eliminate them. Moreover, we find that larger models are less biased than smaller models, and that models trained using discriminative objectives are less biased than those trained using generative objectives. In addition, we observe inconsistencies in the learned social biases: to our surprise, ViTs can exhibit opposite biases when trained on the same data set using different self-supervised objectives. Our findings give insights into the factors that contribute to the emergence of social biases and suggest that we could achieve substantial fairness improvements based on model design choices.
Efficient Sentiment Analysis: A Resource-Aware Evaluation of Feature Extraction Techniques, Ensembling, and Deep Learning Models ; While reaching for NLP systems that maximize accuracy, other important metrics of system performance are often overlooked. Prior models are easily forgotten despite their possible suitability in settings where large computing resources are unavailable or relatively more costly. In this paper, we perform a broad comparative evaluation of document-level sentiment analysis models with a focus on resource costs that are important for the feasibility of model deployment and general climate consciousness. Our experiments consider different feature extraction techniques, the effect of ensembling, task-specific deep learning modeling, and domain-independent large language models (LLMs). We find that while a fine-tuned LLM achieves the best accuracy, some alternate configurations provide huge (up to 24,283x) resource savings for a marginal (1%) loss in accuracy. Furthermore, we find that for smaller datasets, the differences in accuracy shrink while the difference in resource consumption grows further.
A State-Space Perspective on Modelling and Inference for Online Skill Rating ; This paper offers a comprehensive review of the main methodologies used for skill rating in competitive sports. We advocate for a state-space model perspective, wherein players' skills are represented as time-varying and match results serve as the sole observed quantities. The state-space model perspective facilitates the decoupling of modeling and inference, enabling a more focused approach that highlights model assumptions, while also fostering the development of general-purpose inference tools. We explore the essential steps involved in constructing a state-space model for skill rating before turning to a discussion on the three stages of inference: filtering, smoothing, and parameter estimation. Throughout, we examine the computational challenges of scaling up to high-dimensional scenarios involving numerous players and matches, highlighting approximations and reductions used to address these challenges effectively. We provide concise summaries of popular methods documented in the literature, along with their inferential paradigms, and introduce new approaches to skill rating inference based on sequential Monte Carlo and finite state-spaces. We close with numerical experiments demonstrating a practical workflow on real data across different sports.
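A toy instance of the advocated perspective (not any specific published rater): each player's skill is a Gaussian state with random-walk dynamics, and each match outcome triggers a moment-matched filtering update under a probit win likelihood.

```python
# Gaussian skill states with random-walk dynamics and a probit win likelihood;
# the update uses standard truncated-Gaussian moment matching.
import numpy as np
from scipy.stats import norm

mu = {"A": 0.0, "B": 0.0}                        # skill means
var = {"A": 1.0, "B": 1.0}                       # skill variances
tau2, beta2 = 0.01, 1.0                          # drift and performance noise

def update(winner, loser):
    for p in (winner, loser):                    # predict step: skills drift over time
        var[p] += tau2
    d = mu[winner] - mu[loser]
    s2 = var[winner] + var[loser] + 2 * beta2
    t = d / np.sqrt(s2)
    v = norm.pdf(t) / norm.cdf(t)                # moment-matching quantities
    w = v * (v + t)
    for p, sign in ((winner, +1), (loser, -1)):
        mu[p] += sign * (var[p] / np.sqrt(s2)) * v
        var[p] *= 1 - (var[p] / s2) * w

for result in [("A", "B"), ("A", "B"), ("B", "A")]:
    update(*result)
print(mu, var)
```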
Sea level Projections with Machine Learning using Altimetry and Climate Model ensembles ; Satellite altimeter observations retrieved since 1993 show that the global mean sea level is rising at an unprecedented rate (3.4 mm/year). With almost three decades of observations, we can now investigate the contributions of anthropogenic climate-change signals, such as greenhouse gases, aerosols, and biomass burning, to this rising sea level. We use machine learning (ML) to investigate future patterns of sea level change. To understand the extent of contributions from the climate-change signals, and to help forecast sea level change in the future, we turn to climate model simulations. This work presents a machine learning framework that exploits both satellite observations and climate model simulations to generate sea level rise projections at a 2-degree spatial resolution, 30 years into the future. We train fully connected neural networks (FCNNs) to predict altimeter values through a nonlinear fusion of the climate model hindcasts for 1993-2019. The learned FCNNs are then applied to future climate model projections to predict future sea level patterns. We propose segmenting our spatial dataset into meaningful clusters and show that clustering helps to improve the predictions of our ML model.
Language models as master equation solvers ; Master equations are of fundamental importance in modeling stochastic dynamical systems. However, solving master equations is challenging due to the exponential increase in the number of possible states or trajectories with the dimension of the state space. In this study, we propose repurposing language models as a machine learning approach to solve master equations. We design a prompt-based neural network to map rate parameters, initial conditions, and time values directly to the state joint probability distribution that exactly matches the input contexts. In this way, we approximate the solution of the master equation in its most general form. We train the network using the policy gradient algorithm within the reinforcement learning framework, with feedback rewards provided by a set of variational autoregressive models. By applying this approach to representative examples, we observe high accuracy for both multi-module and high-dimensional systems. The trained network also exhibits extrapolation ability, extending its predictability to unseen data. Our findings establish the connection between language models and master equations, highlighting the possibility of using a single pretrained large model to solve any master equation.
ConDistFL: Conditional Distillation for Federated Learning from Partially Annotated Data ; Developing a generalized segmentation model capable of simultaneously delineating multiple organs and diseases is highly desirable. Federated learning (FL) is a key technology enabling the collaborative development of a model without exchanging training data. However, the limited access to fully annotated training data poses a major challenge to training generalizable models. We propose ConDistFL, a framework to solve this problem by combining FL with knowledge distillation. With an adequately designed conditional probability representation, local models can extract the knowledge of unlabeled organs and tumors from the global model using partially annotated data. We validate our framework on four distinct partially annotated abdominal CT datasets from the MSD and KiTS19 challenges. The experimental results show that the proposed framework significantly outperforms FedAvg and FedOpt baselines. Moreover, the performance on an external test dataset demonstrates superior generalizability compared to models trained on each dataset separately. Our ablation study suggests that ConDistFL can perform well without frequent aggregation, reducing the communication cost of FL. Our implementation will be available at https://github.com/NVIDIA/NVFlare/tree/dev/research/condistfl.
Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning ; The main premise of federated learning is that machine learning model updates are computed locally, in particular to preserve user data privacy, as those data never leave the perimeter of their device. This mechanism supposes that the general model, once aggregated, is broadcast to collaborating and non-malicious nodes. However, without proper defenses, compromised clients can easily probe the model inside their local memory in search of adversarial examples. For instance, considering image-based applications, adversarial examples consist of images perturbed imperceptibly to the human eye but misclassified by the local model, which can later be presented to a victim node's counterpart model to replicate the attack. To mitigate such malicious probing, we introduce Pelta, a novel shielding mechanism leveraging trusted hardware. By harnessing the capabilities of Trusted Execution Environments (TEEs), Pelta masks part of the back-propagation chain rule, otherwise typically exploited by attackers to design malicious samples. We evaluate Pelta on a state-of-the-art ensemble model and demonstrate its effectiveness against the Self-Attention Gradient adversarial Attack.
Deep Plug-and-Play Prior for Massive MIMO Systems ; Scalability is a major concern in implementing deep learning (DL) based methods in wireless communication systems. Given various communication tasks, applying one DL model per task is costly in both model training and model storage. In this paper, we propose a novel deep plug-and-play prior method for three communication tasks in the downlink of massive multiple-input multiple-output (MIMO) systems: channel estimation, antenna extrapolation, and channel state information (CSI) feedback. The proposed method employs a common DL model for all three communication tasks, which greatly reduces the overhead of model training and storage. Unlike general multi-task learning, the DL model of the proposed method does not require further fine-tuning for specific communication tasks; it is plug-and-play. Extensive experiments are conducted on the DeepMIMO dataset to demonstrate the convergence, performance, and storage overhead of the proposed method on the three communication tasks.
3D Modeling of a Guitar Using a Computer Tomography Scan ; This paper describes the development of a detailed 3D geometric model of an acoustic guitar. Modeling an instrument is a sophisticated task considering the individual parts and their complex shapes. The geometry of the parts visible from the outside can be measured using appropriate tools, but it is very difficult to measure the details of internal parts like bracing, heels, and other features by hand through the sound hole. Otherwise, it would be necessary to disassemble the guitar to measure the precise position and dimensions of the parts inside it, and reassembling the guitar could result in improper functioning. To avoid damaging the instrument by disassembling it, or taking inaccurate measurements through the sound hole, a computer tomography (CT) scan of the guitar body was performed. Using this method, cross-sectional images of the guitar body in all three dimensions were extracted with 1 mm spacing between adjacent images. In total, approximately 2000 images were generated and used in developing the geometric model of the guitar. The 3D model will be further used to develop a vibroacoustic simulation model of the guitar.
Self-Alignment with Instruction Backtranslation ; We present a scalable method to build a high-quality instruction-following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high-quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard that do not rely on distillation data, demonstrating highly effective self-alignment.
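Schematically, the two phases might be organized as below. Function bodies and prompts are placeholders for any object exposing a generate(str) -> str method; the actual prompt templates and curation rubric are the paper's.

```python
# Schematic of the self-augmentation / self-curation loop (placeholder prompts).
def self_augment(seed_model, web_documents):
    """Generate a candidate instruction for each human-written document."""
    return [(seed_model.generate(f"Write an instruction answered by:\n{d}"), d)
            for d in web_documents]

def self_curate(seed_model, candidates, threshold=4):
    """Keep only pairs the current model itself rates as high quality."""
    kept = []
    for instruction, output in candidates:
        score = int(seed_model.generate(
            f"Rate 1-5 how well the output answers the instruction.\n"
            f"Instruction: {instruction}\nOutput: {output}\nScore:"))
        if score >= threshold:
            kept.append((instruction, output))
    return kept

# Iterate: finetune on the kept pairs, then curate again with the better model.
```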
ModelScope Text-to-Video Technical Report ; This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model can adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), comprising 1.7 billion parameters in total, of which 0.5 billion are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary.
DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning ; Neural networks are prone to learning easy solutions from superficial statistics in the data, a phenomenon known as shortcut learning, which impairs the generalization and robustness of models. We propose a data augmentation strategy, named DFM-X, that leverages knowledge about frequency shortcuts, encoded in Dominant Frequency Maps (DFMs) computed for image classification models. We randomly select X training images of certain classes for augmentation, and process them by retaining the frequencies included in the DFMs of other classes. This strategy compels the models to leverage a broader range of frequencies for classification, rather than relying on specific frequency sets. Thus, the models learn deeper, task-related semantics compared to their counterparts trained with standard setups. Unlike other commonly used augmentation techniques, which focus on increasing the visual variation of the training data, our method exploits the original data more efficiently, by distilling prior knowledge about the destructive learning behavior of models from data. Our experimental results demonstrate that DFM-X improves robustness against common corruptions and adversarial attacks. It can be seamlessly integrated with other augmentation techniques to further enhance the robustness of models.
Data-Driven Allocation of Preventive Care With Application to Diabetes Mellitus Type II ; Problem Definition: Increasing costs of healthcare highlight the importance of effective disease prevention. However, decision models for allocating preventive care are lacking. Methodology/Results: In this paper, we develop a data-driven decision model for determining a cost-effective allocation of preventive treatments to patients at risk. Specifically, we combine counterfactual inference, machine learning, and optimization techniques to build a scalable decision model that can exploit high-dimensional medical data, such as the data found in modern electronic health records. Our decision model is evaluated on electronic health records from 89,191 prediabetic patients. We compare the allocation of preventive treatments (metformin) prescribed by our data-driven decision model with that of current practice. We find that if our approach were applied to the U.S. population, it could yield annual savings of $1.1 billion. Finally, we analyze the cost-effectiveness under varying budget levels. Managerial Implications: Our work supports decision-making in health management, with the goal of achieving effective disease prevention at lower costs. Importantly, our decision model is generic and can thus be used for the effective allocation of preventive care for other preventable diseases.
Nonnegative matrix factorization for coherent set identification by direct low rank maximum likelihood estimation ; We analyze connections between two low-rank modeling approaches from the last decade for treating dynamical data. The first one is the coherence problem (or coherent set approach), where groups of states are sought that evolve under the action of a stochastic matrix in a way maximally distinguishable from other groups. The second one is a low-rank factorization approach for stochastic matrices, called Direct Bayesian Model Reduction (DBMR), which estimates the low-rank factors directly from observed data. We show that DBMR results in a low-rank model that is a projection of the full model, and exploit this insight to infer bounds on a quantitative measure of coherence within the reduced model. Both approaches can be formulated as optimization problems, and we also prove a bound between their respective objectives. On a broader scope, this work relates the two classical loss functions of nonnegative matrix factorization, namely the Frobenius norm and the generalized Kullback-Leibler divergence, and suggests new links between likelihood-based and projection-based estimation of probabilistic models.
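The two loss functions named in the last sentence are both exposed by scikit-learn's NMF, which makes the contrast easy to reproduce on a toy nonnegative matrix:

```python
# Toy comparison of the two classical NMF losses via scikit-learn.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
P = rng.random((100, 40))
P /= P.sum(axis=1, keepdims=True)                # rows sum to 1, like a stochastic matrix

for loss in ("frobenius", "kullback-leibler"):
    nmf = NMF(n_components=5, beta_loss=loss, solver="mu",
              init="random", max_iter=500, random_state=0)
    nmf.fit(P)
    print(f"{loss:>18s}: reconstruction error = {nmf.reconstruction_err_:.4f}")
```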
Spontaneous Supersymmetry Breaking in Inhomogeneous Supersymmetric Field Theories and BPS Vacua ; We study spontaneous supersymmetry breaking in inhomogeneous extensions of $\mathcal{N}=1$ supersymmetric field theory models in 4 dimensions. The $\mathcal{N}=1$ Abelian Higgs model with an inhomogeneous mass parameter and FI coefficient that depend on spatial coordinates, as well as the O'Raifeartaigh model with all its parameters dependent on spatial coordinates, are studied in detail. In the presence of inhomogeneous parameters, half of the supersymmetry can be preserved by adding appropriate inhomogeneous deformations to the original Lagrangians. The inhomogeneous deformations either break the R-symmetry or produce a model that lacks spontaneously broken R-symmetry. We argue that these cannot be spontaneous supersymmetry breaking models, according to the Nelson-Seiberg argument. We comment on this issue in the context of a generic $\mathcal{N}=1$ supersymmetric model as well.
Accelerated materials language processing enabled by GPT ; Materials language processing (MLP) is one of the key facilitators of materials science research, as it enables the extraction of structured information from massive materials science literature. Prior works suggested high-performance MLP models for text classification, named entity recognition (NER), and extractive question answering (QA), which require complex model architecture, exhaustive fine-tuning, and a large number of human-labelled datasets. In this study, we develop generative pretrained transformer (GPT)-enabled pipelines where the complex architectures of prior MLP models are replaced with strategic designs of prompt engineering. First, we develop a GPT-enabled document classification method for screening relevant documents, achieving comparable accuracy and reliability to prior models with only a small dataset. Second, for the NER task, we design entity-centric prompts, and few-shot learning with them improves performance on most entities in three open datasets. Finally, we develop a GPT-enabled extractive QA model, which provides improved performance and shows the possibility of automatically correcting annotations. While our findings confirm the potential of GPT-enabled MLP models, as well as their value in terms of reliability and practicability, our scientific methods and systematic approach are applicable to any materials science domain, accelerating the information extraction of scientific literature.
DUAW: Data-free Universal Adversarial Watermark against Stable Diffusion Customization ; Stable Diffusion (SD) customization approaches enable users to personalize SD model outputs, greatly enhancing the flexibility and diversity of AI art. However, they also allow individuals to plagiarize specific styles or subjects from copyrighted images, which raises significant concerns about potential copyright infringement. To address this issue, we propose an invisible data-free universal adversarial watermark (DUAW), aiming to protect a myriad of copyrighted images from different customization approaches across various versions of SD models. First, DUAW is designed to disrupt the variational autoencoder during SD customization. Second, DUAW operates in a data-free context, where it is trained on synthetic images produced by a large language model (LLM) and a pretrained SD model. This approach circumvents the necessity of directly handling copyrighted images, thereby preserving their confidentiality. Once crafted, DUAW can be imperceptibly integrated into massive numbers of copyrighted images, serving as a protective measure by inducing significant distortions in the images generated by customized SD models. Experimental results demonstrate that DUAW can effectively distort the outputs of fine-tuned SD models, rendering the distortions discernible to both human observers and a simple classifier.
Multi-GradSpeech: Towards Diffusion-based Multi-Speaker Text-to-speech Using Consistent Diffusion Models ; Despite imperfect score-matching causing a drift between the training and sampling distributions of diffusion models, recent advances in diffusion-based acoustic models have revolutionized data-sufficient single-speaker Text-to-Speech (TTS) approaches, with Grad-TTS being a prime example. However, the sampling drift problem leads these approaches to struggle in multi-speaker scenarios in practice, due to a more complex target data distribution compared to single-speaker scenarios. In this paper, we present Multi-GradSpeech, a multi-speaker diffusion-based acoustic model which introduces the Consistent Diffusion Model (CDM) as a generative modeling approach. We enforce the consistency property of the CDM during the training process to alleviate the sampling drift problem at the inference stage, resulting in significant improvements in multi-speaker TTS performance. Our experimental results corroborate that our proposed approach can improve the performance of different speakers involved in multi-speaker TTS compared to Grad-TTS, even outperforming the fine-tuning approach. Audio samples are available at https://welkinyang.github.io/multigradspeech
Enhancing Adversarial Attacks: The Similar Target Method ; Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applications and raising security concerns. An intriguing property of adversarial examples is their strong transferability. Several methods have been proposed to enhance transferability, including ensemble attacks, which have demonstrated their efficacy. However, prior approaches simply average logits, probabilities, or losses for model ensembling, lacking a comprehensive analysis of how and why model ensembling significantly improves transferability. In this paper, we propose a similar-target attack method named Similar Target (ST). By promoting cosine similarity between the gradients of each model, our method regularizes the optimization direction to simultaneously attack all surrogate models. This strategy is proven to enhance generalization ability. Experimental results on ImageNet validate the effectiveness of our approach in improving adversarial transferability. Our method outperforms state-of-the-art attackers on 18 discriminative classifiers and adversarially trained models.
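One plausible reading of the gradient-similarity regularizer, written as a single attack step (an interpretation of the description above, not the authors' code): per-model input gradients are taken with create_graph=True so the pairwise cosine-similarity term can itself be differentiated.

```python
# An interpretation of the ST step: maximize the ensemble loss plus a term that
# promotes cosine similarity between per-model input gradients (second-order).
import torch
import torch.nn as nn
import torch.nn.functional as F

def st_attack_step(models, x, y, eps_step=2/255, lam=0.1):
    x = x.clone().detach().requires_grad_(True)
    grads = [torch.autograd.grad(F.cross_entropy(m(x), y), x, create_graph=True)[0]
             .flatten(1) for m in models]
    sim = sum(F.cosine_similarity(grads[i], grads[j], dim=1).mean()
              for i in range(len(grads)) for j in range(i + 1, len(grads)))
    loss = sum(F.cross_entropy(m(x), y) for m in models) + lam * sim
    g, = torch.autograd.grad(loss, x)            # differentiates through the gradients
    return (x + eps_step * g.sign()).detach()    # one ascent step on the perturbation

surrogates = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = st_attack_step(surrogates, x, y)
print(float((x_adv - x).abs().max()))
```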
Toward Extending Concentric Tube Robot Kinematics for Large Clearance and Impulse Curvature ; Concentric Tube Robots (CTRs) have been proposed to operate within unstructured environments for minimally invasive surgeries. In this letter, we consider the operation scenario where the tubes travel inside channels with a large clearance or large curvature, such as aortas or industrial pipes. Accurate kinematic modeling of CTRs is required for the development of advanced control and sensing algorithms. To this end, we extend the conventional CTR kinematics model to a more general case with large tube-to-tube clearance and large centerline curvature. Numerical simulations and experimental validations are conducted to compare our model with the conventional CTR kinematic model. In the physical experiments, our proposed model achieved a tip position error of 1.53 mm in the 2D planar case and 4.36 mm in the 3D case, outperforming the state-of-the-art model by 71% and 66%, respectively.
Using language models in the implicit automated assessment of mathematical short answer items ; We propose a new way to assess certain short constructed responses to mathematics items. Our approach uses a pipeline that identifies the key values specified by the student in their response. This allows us to determine the correctness of the response, as well as to identify any misconceptions. The information from the value identification pipeline can then be used to provide feedback to the teacher and student. The value identification pipeline consists of two fine-tuned language models. The first model determines whether a value is implicit in the student response. The second model identifies where in the response the key value is specified. We consider both a generic model that can be used for any prompt and value, as well as models that are specific to each prompt and value. The value identification pipeline is a more accurate and informative way to assess short constructed responses than traditional rubric-based scoring. It can be used to provide more targeted feedback to students, which can help them improve their understanding of mathematics.
Constrained cosmological model in $f(Q,T)$ gravity with nonlinear nonmetricity ; The $f(Q,T)$ cosmological model has emerged as a promising framework for understanding various aspects of cosmic evolution. In this study, we focused on obtaining constraints on the free parameters of the nonlinear form of the nonmetricity in $f(Q,T)$ gravity using the Hubble, Pantheon, and BAO datasets. To determine the best-fit values for the model parameters and the equation of state (EoS) parameter, we employed an MCMC analysis. By examining the error-bar plots, we observed that both the model curve and the $\Lambda$CDM curve successfully passed through the range obtained from the datasets. Additionally, we studied the statefinder diagnostics and energy conditions to gain insights into the properties of the model. Furthermore, we conducted an analysis using the $O_m(z)$ diagnostic, which provides a null test for the validity of the $\Lambda$CDM model.
Overcoming General Knowledge Loss with Selective Parameter Finetuning ; Foundation models encompass an extensive knowledge base and offer remarkable transferability. However, this knowledge becomes outdated or insufficient over time. The challenge lies in updating foundation models to accommodate novel information while retaining their original abilities. In this paper, we present a novel approach to achieving continual model updates by effecting localized modifications to a small subset of parameters. Guided by insights gleaned from prior analyses of foundational models, we first localize a specific layer for model refinement and then introduce an importance scoring mechanism designed to update only the most crucial weights. Our method is exhaustively evaluated on foundational vision-language models, measuring its efficacy in both learning new information and preserving pre-established knowledge across a diverse spectrum of continual learning tasks, including Aircraft, Birdsnap, CIFAR-100, CUB, Cars, and GTSRB. The results show that our method improves on existing continual learning methods by 0.5%-10% on average, and reduces the loss of pretrained knowledge from around 5% to 0.97%. Comprehensive ablation studies substantiate our method's design, shedding light on the contributions of each component to controllably learning new knowledge and mitigating the forgetting of pretrained knowledge.
Tag-Based Annotation for Avatar Face Creation ; Currently, digital avatars can be created manually using human images as reference. Systems such as Bitmoji are excellent producers of detailed avatar designs, with hundreds of choices for customization. A supervised learning model could be trained to generate avatars automatically, but the hundreds of possible options make it difficult to secure non-noisy data to train such a model. As a solution, we train a model to produce avatars from human images using tag-based annotations. This method provides better annotator agreement, leading to less noisy data and higher-quality model predictions. Our contribution is an application of tag-based annotation to train a model for avatar face creation. We design tags for 3 different facial features offered by Bitmoji, and train a model using tag-based annotation to predict the nose.
On the radial growth of ballistic aggregation and other aggregation models ; For a class of aggregation models on the integer lattice $\mathbb{Z}^d$, $d \geq 2$, in which clusters are formed by particles arriving one after the other and sticking irreversibly where they first hit the cluster, including the classical model of diffusion-limited aggregation (DLA), we study the growth of the clusters. We observe that a method of Kesten, used to obtain an almost sure upper bound on the radial growth in the DLA model, generalizes to a large class of such models. We use it in particular to prove such a bound for the so-called ballistic model, in which the arriving particles travel along straight lines. Our bound implies that the fractal dimension of ballistic aggregation clusters in $\mathbb{Z}^2$ is 2, which proves a long-standing conjecture in the physics literature.
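A crude simulation of the ballistic model conveys the geometry (toy code, for intuition only): each particle rides a straight lattice line in a random axis direction and sticks just before the first occupied site it would hit; lateral sticking is ignored for brevity.

```python
# Toy ballistic aggregation on Z^2: particles travel along straight lattice
# lines and attach head-on at the first occupied site they would collide with.
import numpy as np

def ballistic_cluster(n_particles=500, L=201, seed=0):
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L), dtype=bool)
    c = L // 2
    occ[c, c] = True                             # seed particle at the origin
    for _ in range(n_particles):
        axis, sign = rng.integers(2), rng.choice([-1, 1])
        lane = rng.integers(L)                   # which row/column the particle rides
        line = occ[lane, :] if axis == 0 else occ[:, lane]
        idx = np.flatnonzero(line)
        if idx.size == 0:
            continue                             # this line misses the cluster entirely
        hit = idx.min() - 1 if sign == 1 else idx.max() + 1
        if 0 <= hit < L:
            if axis == 0:
                occ[lane, hit] = True
            else:
                occ[hit, lane] = True
    return occ

occ = ballistic_cluster()
cols = np.flatnonzero(occ.any(axis=0))
print("occupied sites:", int(occ.sum()), "| horizontal extent:", int(cols.max() - cols.min()))
```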
Prompting Visual-Language Models for Dynamic Facial Expression Recognition ; This paper presents a novel visual-language model called DFER-CLIP, which is based on the CLIP model and designed for in-the-wild Dynamic Facial Expression Recognition (DFER). Specifically, the proposed DFER-CLIP consists of a visual part and a textual part. For the visual part, based on the CLIP image encoder, a temporal model consisting of several Transformer encoders is introduced for extracting temporal facial expression features, and the final feature embedding is obtained as a learnable class token. For the textual part, we use as inputs textual descriptions of the facial behaviour related to the classes (facial expressions) that we are interested in recognising; those descriptions are generated using large language models, like ChatGPT. This, in contrast to works that use only the class names, more accurately captures the relationship between them. Alongside the textual description, we introduce a learnable token which helps the model learn relevant context information for each expression during training. Extensive experiments demonstrate the effectiveness of the proposed method and show that our DFER-CLIP achieves state-of-the-art results compared with the current supervised DFER methods on the DFEW, FERV39k, and MAFW benchmarks. Code is publicly available at https://github.com/zengqunzhao/DFER-CLIP.
Interstellar radiation as a Maxwell field: improved numerical scheme and application to the spectral energy density ; The existing models of the interstellar radiation field (ISRF) do not produce a Maxwell field. Here, the recent model of the ISRF as a Maxwell field is improved by considering the different frequencies separately at the fitting stage. Using this improved procedure: (i) It is checked in detail that the model does predict extremely high values of the spectral energy density (SED) on the axis of a galaxy, which however decrease very rapidly when $\rho$, the distance to the axis, is increased from zero. (ii) The difference between the SED values at $\rho = 1\,$kpc or $8\,$kpc, as predicted either by this model or by a recent radiation transfer model, is reduced significantly. (iii) The slower decrease of the SED with increasing altitude $z$, as compared with the radiation transfer model, is confirmed. We also calculate the evolutions of the SED at large $\rho$. We interpret these evolutions by determining asymptotic expansions of the SED at large $z$, and also at large $\rho$.
The simpliciality of higher-order networks ; Higher-order networks are widely used to describe complex systems in which interactions can involve more than two entities at once. In this paper, we focus on inclusion within higher-order networks, referring to situations where specific entities participate in an interaction, and subsets of those entities also interact with each other. Traditional modeling approaches to higher-order networks tend to either not consider inclusion at all (e.g., hypergraph models) or explicitly assume perfect and complete inclusion (e.g., simplicial complex models). To allow for a more nuanced assessment of inclusion in higher-order networks, we introduce the concept of simpliciality and several corresponding measures. Contrary to current modeling practice, we show that empirically observed systems rarely lie at either end of the simpliciality spectrum. In addition, we show that generative models fitted to these datasets struggle to capture their inclusion structure. These findings suggest new modeling directions for the field of higher-order network science.
STEC: See-Through Transformer-based Encoder for CTR Prediction ; Click-Through Rate (CTR) prediction holds a pivotal place in online advertising and recommender systems, since CTR prediction performance directly influences the overall satisfaction of the users and the revenue generated by companies. Even so, CTR prediction is still an active area of research, since it involves accurately modelling the preferences of users based on sparse and high-dimensional features, where the higher-order interactions of multiple features can lead to different outcomes. Most CTR prediction models have relied on a single fusion and interaction learning strategy. The few CTR prediction models that have utilized multiple interaction modelling strategies have treated each interaction as self-contained. In this paper, we propose a novel model named STEC that reaps the benefits of multiple interaction learning approaches in a single unified architecture. Additionally, our model introduces residual connections from different orders of interaction, which boost performance by allowing lower-level interactions to directly affect the predictions. Through extensive experiments on four real-world datasets, we demonstrate that STEC outperforms existing state-of-the-art approaches for CTR prediction thanks to its greater expressive capabilities.
Shape of my heart: Cardiac models through learned signed distance functions ; The efficient construction of anatomical models is one of the major challenges of patient-specific in-silico models of the human heart. Current methods frequently rely on linear statistical models, allowing no advanced topological changes, or require medical image segmentation followed by a meshing pipeline, which strongly depends on image resolution, quality, and modality. These approaches are therefore limited in their transferability to other imaging domains. In this work, the cardiac shape is reconstructed by means of three-dimensional deep signed distance functions with Lipschitz regularity. For this purpose, the shapes of cardiac MRI reconstructions are learned from public databases to model the spatial relation of multiple chambers in Cartesian space. We demonstrate that this approach is also capable of reconstructing anatomical models from partial data, such as point clouds from a single ventricle, or from modalities different from the trained MRI, such as electroanatomical mapping, and, in addition, allows us to generate new anatomical shapes by randomly sampling latent vectors.
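The building block, a coordinate network returning a signed distance with a crude Lipschitz control, can be sketched as follows. Spectral normalization per layer is one way to enforce a Lipschitz bound; the paper's exact regularization may differ.

```python
# Coordinate network returning a signed distance, with per-layer spectral
# normalization as a simple Lipschitz control (Softplus is 1-Lipschitz).
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

class SDFNet(nn.Module):
    def __init__(self, width=128, depth=4):
        super().__init__()
        layers, d_in = [], 3
        for _ in range(depth):
            layers += [spectral_norm(nn.Linear(d_in, width)), nn.Softplus(beta=100)]
            d_in = width
        layers.append(spectral_norm(nn.Linear(d_in, 1)))
        self.net = nn.Sequential(*layers)

    def forward(self, x):                        # x: (N, 3) points -> signed distance (N, 1)
        return self.net(x)

sdf = SDFNet()
pts = (torch.rand(1024, 3) * 2 - 1).requires_grad_(True)
g, = torch.autograd.grad(sdf(pts).sum(), pts)
# A true distance field has unit gradient norm; the spectral bound keeps it <= 1.
print(g.norm(dim=-1).mean().item())
```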
Bethe ansatz inside CalogeroSutherland models ; We study the trigonometric quantum spinCalogeroSutherland model, and the HaldaneShastry spin chain as a special case, using a Betheansatz analysis. We harness the model's Yangian symmetry to import the standard tools of integrability for Heisenberg spin chains into the world of integrable longrange models with spins. From the transfer matrix with a diagonal twist we construct Heisenbergstyle symmetries (Bethe algebra) that refine the usual hierarchy of commuting Hamiltonians (quantum determinant) of the spinCalogeroSutherland model. We compute the first few of these new conserved charges explicitly, and diagonalise them by Bethe ansatz inside each irreducible Yangian representation. This yields a new eigenbasis for the spinCalogeroSutherland model that generalises the Yangian GelfandTsetlin basis of Takemura and Uglov. The Betheansatz analysis involves nongeneric values of the inhomogeneities. Our review of the inhomogeneous Heisenberg XXX chain, with special attention to how the Bethe ansatz works in the presence of fusion, may be of independent interest.
A Naturalness motivated Top Yukawa Model for the Composite Higgs ; The top quark leads to the dominant quantum correction to the Higgs quadratic term, which is usually canceled by top partners in traditional symmetrybased models. However, the absence of light top partners starts challenging the Naturalness of these models. In this paper, we study a model based on a composite Higgs with the top Yukawa coupling originating from dim-6 four-fermion operators. The low cutoff scale of the top quark loop required by the Naturalness principle can be realized with a light gauge boson $E_\mu$ which connects the hyperfermions and top quarks. A scalarless dynamical model with a weakly coupled extended $SU(4)_{EC}$ group is presented. The model features a light $E_\mu$ gauge boson and a third-generation-philic $Z'_E$ boson, which leads to a rich phenomenology, especially in top quark physics.
Vibration spectra of benzenelike models with Hooke's law interactions ; The harmonic oscillations of a springball model of benzenelike nanosystems with Hooke's law interactions between nearest, second, and third neighbors are explored. We show that in cylindrical coordinates the dynamics of this cyclic hexagonal system is described by Lagrange equations similar to those of the onedimensional twocomponent crystal model. We demonstrate that the vibration frequencies of the hexagonal model lie on the branches of the dispersion law of the associated lattice model, and their positions are determined by the cyclic Born-von Karman condition. The hexagonal model is generalized to one describing the benzene molecule and the fully deuterated and halogenated benzenes. The effect of hybridization of vibration modes and the pushing apart of spectral branches in the crossover situation is revealed. The full discrete frequency spectrum and the normal modes of oscillation, together with their explicit dependence on all the elastic interaction constants, are found exactly.
On the Planning, Search, and Memorization Capabilities of Large Language Models ; The rapid advancement of large language models, such as the Generative Pretrained Transformer GPT series, has had significant implications across various disciplines. In this study, we investigate the potential of the stateoftheart large language model GPT4 for planning tasks. We explore its effectiveness in multiple planning subfields, highlighting both its strengths and limitations. Through a comprehensive examination, we identify areas where large language models excel in solving planning problems and reveal the constraints that limit their applicability. Our empirical analysis focuses on GPT4's performance in planning domain extraction, graph search path planning, and adversarial planning. We then propose a way of finetuning a domainspecific large language model to improve its Chain of Thought CoT capabilities for the abovementioned tasks. The results provide valuable insights into the potential applications of large language models in the planning domain and pave the way for future research to overcome their limitations and expand their capabilities.
Charged Anisotropic Tolman IV Solution in MatterGeometry Coupled Theory ; This paper discusses the interior distribution of several anisotropic star models coupled with an electromagnetic field in the context of $f(\mathcal{R},\mathcal{T},\mathcal{Q})$ gravity, where $\mathcal{Q}=\mathcal{R}_{\beta\xi}\mathcal{T}^{\beta\xi}$. In this regard, a standard model of this modified gravity is taken as $\mathcal{R}+\nu_3\mathcal{R}_{\beta\xi}\mathcal{T}^{\beta\xi}$, where $\nu_3$ symbolizes an arbitrary coupling constant. We assume a charged spherically symmetric metric that represents the interior geometry of compact quark stars and develop the corresponding modified field equations. These equations are then solved with the help of the metric potentials of the Tolman IV spacetime and a linear bag model equation of state. We consider the experimental data, i.e., radii and masses, of different quark models such as SMC X-4, SAX J1808.4-3658, Her X-1 and 4U 1820-30 to analyze how the charge and modified corrections affect their physical characteristics. The viability and stability of the resulting model are also checked for the considered star candidates with two different values of $\nu_3$. We conclude that only two models, Her X-1 and 4U 1820-30, show stable behavior in this modified framework for both values of the coupling constant.
Beyond $N=\infty$ in Large $N$ Conformal Vector Models at Finite Temperature ; We investigate finitetemperature observables in threedimensional large $N$ critical vector models taking into account the effects suppressed by $1/N$. Such subleading contributions are captured by the fluctuations of the HubbardStratonovich auxiliary field which need to be handled with care due to a subtle divergence structure which we clarify. The examples we consider include the scalar $O(N)$ model, the GrossNeveu model, the NambuJonaLasinio model and massless ChernSimons Quantum Electrodynamics. We present explicit results for the free energy density to the subleading order, which also captures the onepoint function of the stressenergy tensor, and include the dependence on a chemical potential. We further provide a formula from diagrammatics for the onepoint functions of general singletrace higherspin currents. We observe that in most cases considered, these subleading effects lift the apparent degeneracies between observables in different models at infinite $N$, while in special cases the discrepancies only start to appear at the nexttosubleading order.
NeuralHiddenCRF A Robust WeaklySupervised Sequence Labeler ; We propose a neuralized undirected graphical model called NeuralHiddenCRF to solve the weaklysupervised sequence labeling problem. Under the umbrella of probabilistic undirected graph theory, the proposed NeuralHiddenCRF embedded with a hidden CRF layer models the variables of word sequence, latent ground truth sequence, and weak label sequence with the global perspective that undirected graphical models particularly enjoy. In NeuralHiddenCRF, we can capitalize on the powerful language model BERT or other deep models to provide rich contextual semantic knowledge to the latent ground truth sequence, and use the hidden CRF layer to capture the internal label dependencies. NeuralHiddenCRF is conceptually simple and empirically powerful. It obtains new stateoftheart results on one crowdsourcing benchmark and three weaksupervision benchmarks, including outperforming the recent advanced model CHMM by 2.80 F1 points and 2.23 F1 points in average generalization and inference performance, respectively.
Noisy DemkovKunike model ; The DemkovKunike (DK) model, in which the Rabi coupling and the onsite detuning depend on time as $J\,\mathrm{sech}(t/T)$ and $\Delta_0+\Delta_1\tanh(t/T)$ respectively, provides one of the most general forms of an exactly solvable twostate quantum model. Thus it offers a paradigm for studying the coherent manipulation of the quantum state of a qubit. However, the exploration of the noisy DK model is still lacking. Here, we study the DK model with $J \rightarrow J_{\mathrm{noisy}}(t)$ in the presence of colored Markovian noise sources, as exemplified by telegraph noise and Gaussian noise. We analytically obtain exact solutions for the survival probability $Q^{\mathrm{DK}}_{\mathrm{noisy}}$ of finding the system remaining in the initial state. For fast telegraph noise, surprisingly, we find parameter regimes where $Q^{\mathrm{DK}}_{\mathrm{noisy}}$ is suppressed rather than enhanced by the noise, which can be understood through the quantum Zeno effect. For slow Gaussian noise, we find the noise always leads to an enhanced $Q^{\mathrm{DK}}_{\mathrm{noisy}}$, due to the absorption of noise quanta across the gap. Our work complements studies of the noisy LandauZener model. It also offers a new perspective for the control of twolevel quantum systems.
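For reference, the two-state Hamiltonian behind the pulse shapes quoted above can be written in a standard form (sign and factor-of-two conventions vary between references; this is a sketch, not necessarily the paper's own notation):

```latex
H(t) = \frac{\Delta(t)}{2}\,\sigma_z + J(t)\,\sigma_x ,
\qquad
J(t) = J\,\mathrm{sech}\!\left(\tfrac{t}{T}\right),
\quad
\Delta(t) = \Delta_0 + \Delta_1 \tanh\!\left(\tfrac{t}{T}\right),
```

with $\sigma_{x,z}$ the Pauli matrices; the noisy variant replaces $J$ by $J_{\mathrm{noisy}}(t)$.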
Benchmarking Procedural Language Understanding for LowResource Languages A Case Study on Turkish ; Understanding procedural natural language e.g., stepbystep instructions is a crucial step for execution and planning. However, while there are ample corpora and downstream tasks available in English, the field lacks such resources for most languages. To address this gap, we conduct a case study on Turkish procedural texts. We first expand the number of tutorials in Turkish wikiHow from 2,000 to 52,000 using automated translation tools, where the translation quality and loyalty to the original meaning are validated by a team of experts on a random set. Then, we generate several downstream tasks on the corpus, such as linking actions, goal inference, and summarization. To tackle these tasks, we implement strong baseline models via finetuning large languagespecific models such as TRBART and BERTurk, as well as multilingual models such as mBART, mT5, and XLM. We find that languagespecific models consistently outperform their multilingual counterparts by a significant margin across most procedural language understanding PLU tasks. We release our corpus, downstream tasks and the baseline models at https://github.com/GGLAB-KU/turkishplu.
When Geoscience Meets Foundation Models Towards General Geoscience Artificial Intelligence System ; Geoscience foundation models represent a revolutionary approach in the field of Earth sciences by integrating massive crossdisciplinary data to simulate and understand the Earth systems dynamics. As a datacentric artificial intelligence AI paradigm, they uncover insights from petabytes of structured and unstructured data. Flexible task specification, diverse inputs and outputs and multimodal knowledge representation enable comprehensive analysis infeasible with individual data sources. Critically, the scalability and generalizability of geoscience models allow for tackling diverse prediction, simulation, and decision challenges related to Earth systems interactions. Collaboration between domain experts and computer scientists leads to innovations in these invaluable tools for understanding the past, present, and future of our planet. However, challenges remain in validation and verification, scale, interpretability, knowledge representation, and social bias. Going forward, enhancing model integration, resolution, accuracy, and equity through crossdisciplinary teamwork is key. Despite current limitations, geoscience foundation models show promise for providing critical insights into pressing issues including climate change, natural hazards, and sustainability through their ability to probe scenarios and quantify uncertainties. Their continued evolution toward integrated, datadriven modeling holds paradigmshifting potential for Earth science.
Outlieraware Inlier Modeling and Multiscale Scoring for Anomalous Sound Detection via Multitask Learning ; This paper proposes an approach for anomalous sound detection that incorporates outlier exposure and inlier modeling within a unified framework via multitask learning. While outlier exposurebased methods can extract features efficiently, they are not robust. Inlier modeling is good at generating robust features, but the features are not very effective. Recently, serial approaches have been proposed to combine these two methods, but they still require a separate training step for normal data modeling. To overcome these limitations, we use multitask learning to train a conformerbased encoder for outlieraware inlier modeling. Moreover, our approach provides multiscale scores for detecting anomalies. Experimental results on the MIMII and DCASE 2020 task 2 datasets show that our approach outperforms stateoftheart singlemodel systems and achieves comparable results with topranked multisystem ensembles.
Characterizing MRO in atomistic models of vitreous SiO2 generated using abinitio molecular dynamics ; Vitreous silica is the most versatile material for scientific and commercial applications. Although largescale atomistic models of vitreousSiO2 vSiO2 having mediumrange order MRO have been successfully developed by meltquench through classical molecular dynamics, the MRO is not well studied for the smallerscale models developed by meltquench using abinitio molecular dynamics AIMD. In this study, we obtain atomistic models of vSiO2 by performing meltquench simulation using AIMD. The final structure is compared with the experimental data and some recent atomistic models, on the basis of the structural properties. Since AIMD allows for the estimation of electronic structure, a detailed study of electronic properties is also done. It shows the presence of defect states mainly due to dangling bonds in the bandgap region of electronic density of states, whereas the edgeshared type of defective structures in the glassy models are found to contribute mainly in the valence band. In addition, Oxygen and Silicon vacancies as well as bridging Oxygen type of defects were created and their contributions to the bandgap were studied.
Incorporating Classbased Language Model for Named Entity Recognition in Factorized Neural Transducer ; In spite of the excellent strides made by endtoend E2E models in speech recognition in recent years, named entity recognition is still challenging but critical for semantic understanding. In order to enhance the ability to recognize named entities in E2E models, previous studies mainly focus on various rulebased or attentionbased contextual biasing algorithms. However, their performance might be sensitive to the biasing weight or degraded by excessive attention to the named entity list, along with a risk of false triggering. Inspired by the success of the classbased language model LM in named entity recognition in conventional hybrid systems and the effective decoupling of acoustic and linguistic information in the factorized neural Transducer FNT, we propose a novel E2E model to incorporate classbased LMs into FNT, which is referred to as CFNT. In CFNT, the language model score of named entities can be associated with the name class instead of its surface form. The experimental results show that our proposed CFNT achieves significant error reduction for named entities without hurting performance in general word recognition.
Foundation Model Assisted Automatic Speech Emotion Recognition Transcribing, Annotating, and Augmenting ; Significant advances are being made in speech emotion recognition SER using deep learning models. Nonetheless, training SER systems remains challenging, requiring both time and costly resources. Like many other machine learning tasks, acquiring datasets for SER requires substantial data annotation efforts, including transcription and labeling. These annotation processes present challenges when attempting to scale up conventional SER systems. Recent developments in foundational models have had a tremendous impact, giving rise to applications such as ChatGPT. These models have enhanced humancomputer interactions including bringing unique possibilities for streamlining data collection in fields like SER. In this research, we explore the use of foundational models to assist in automating SER from transcription and annotation to augmentation. Our study demonstrates that these models can generate transcriptions to enhance the performance of SER systems that rely solely on speech data. Furthermore, we note that annotating emotions from transcribed speech remains a challenging task. However, combining outputs from multiple LLMs enhances the quality of annotations. Lastly, our findings suggest the feasibility of augmenting existing speech emotion datasets by annotating unlabeled speech samples.
Automated MultiDrugs Administration During Total Intravenous Anesthesia Using MultiModel Predictive Control ; In this paper, a multimodel predictive control approach is used to automate the coadministration of propofol and remifentanil from bispectral index measurement during general anesthesia. To handle the parameter uncertainties in the nonlinear output function, multiple Extended Kalman Filters are used to estimate the state of the system in parallel. The best model is chosen using a modelmatching criterion and used in a nonlinear MPC to compute the next drug rates. The method is compared with a conventional nonlinear MPC approach and a PID from the literature. The robustness of the controller is evaluated using MonteCarlo simulations on a wide population introducing uncertainties in the models. Both simulation setup and controller codes are accessible in open source for further use. Our preliminary results show the potential interest in using a multimodel method to handle parameter uncertainties.
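A minimal, runnable sketch of the estimation-and-control loop described above, with scalar stand-in dynamics instead of the paper's PK/PD models (all names, dynamics, and tuning values here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

class CandidateFilter:
    """One Kalman filter per hypothesised patient-parameter set."""
    def __init__(self, gain):
        self.gain = gain        # hypothesised drug sensitivity
        self.x = 0.0            # estimated effect-site state
        self.P = 1.0            # estimate variance
        self.window = []        # recent squared innovations

    def step(self, u, y, q=0.01, r=0.1):
        # Predict with this candidate's dynamics, then correct with the measurement.
        self.x += self.gain * u
        self.P += q
        innov = y - self.x
        k = self.P / (self.P + r)
        self.x += k * innov
        self.P *= (1.0 - k)
        self.window = (self.window + [innov ** 2])[-20:]   # sliding window

def control_step(filters, u_prev, y_meas, target=0.5):
    for f in filters:
        f.step(u_prev, y_meas)
    # Model-matching criterion: smallest windowed squared innovation.
    best = min(filters, key=lambda f: sum(f.window))
    # One-step stand-in for the nonlinear MPC: drive the best model to the target.
    return max(0.0, (target - best.x) / best.gain), best

# Usage: three hypothesised sensitivities, simulated true gain of 0.15.
filters = [CandidateFilter(g) for g in (0.05, 0.15, 0.30)]
u, x_true = 0.0, 0.0
for _ in range(50):
    x_true += 0.15 * u
    y = x_true + np.random.normal(0, 0.05)
    u, best = control_step(filters, u, y)
print("selected gain:", best.gain)
```

The design point is that only the best-matching filter feeds the controller at each step, so parameter uncertainty is handled by selection rather than by a single conservative model.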
Recovering from PrivacyPreserving Masking with Large Language Models ; Model adaptation is crucial to handle the discrepancy between proxy training data and actual users data received. To effectively perform adaptation, textual data of users is typically stored on servers or their local devices, where downstream natural language processing NLP models can be directly trained using such indomain data. However, this might raise privacy and security concerns due to the extra risks of exposing user information to adversaries. Replacing identifying information in textual data with a generic marker has been recently explored. In this work, we leverage large language models LLMs to suggest substitutes of masked tokens and have their effectiveness evaluated on downstream language modeling tasks. Specifically, we propose multiple pretrained and finetuned LLMbased approaches and perform empirical studies on various datasets for the comparison of these methods. Experimental results show that models trained on the obfuscation corpora are able to achieve comparable performance with the ones trained on the original data without privacypreserving token masking.
A supersymmetric SYK model with a curious low energy behavior ; We consider $\mathcal{N}=2,4$ supersymmetric SYK models that have a peculiar low energy behavior, with the entropy going like $S = S_0 + \mathrm{const}\cdot T^{a}$, where $a \neq 1$. The large $N$ equations for these models are a generalization of equations that have been previously studied as an unjustified truncation of the planar diagrams describing the BFSS matrix quantum mechanics or other related matrix models. Here we reanalyze these equations in order to better understand the low energy physics of these models. We find that the scalar fields develop large expectation values which explore the low energy valleys in the potential. The low energy physics is dominated by quadratic fluctuations around these values. These models were previously conjectured to have a spin glass phase. We did not find any evidence for this phase by using the usual diagnostics, such as searching for replica symmetry breaking solutions.
Reformulating Sequential Recommendation Learning Dynamic User Interest with Contentenriched Language Modeling ; Recommender systems are essential for online applications, and sequential recommendation has enjoyed significant prevalence due to its expressive ability to capture dynamic user interests. However, previous sequential modeling methods still have limitations in capturing contextual information. The primary reason for this issue is that language models often lack an understanding of domainspecific knowledge and itemrelated textual content. To address this issue, we adopt a new sequential recommendation paradigm and propose LANCER, which leverages the semantic understanding capabilities of pretrained language models to generate personalized recommendations. Our approach bridges the gap between language models and recommender systems, resulting in more humanlike recommendations. We demonstrate the effectiveness of our approach through experiments on several benchmark datasets, showing promising results and providing valuable insights into the influence of our model on sequential recommendation tasks. Furthermore, our experimental codes are publicly available.
Language as the Medium Multimodal Video Classification through text only ; Despite an exciting new wave of multimodal machine learning models, current approaches still struggle to interpret the complex contextual relationships between the different modalities present in videos. Going beyond existing methods that emphasize simple activities or objects, we propose a new modelagnostic approach for generating detailed textual descriptions that capture multimodal video information. Our method leverages the extensive knowledge learnt by large language models, such as GPT3.5 or Llama2, to reason about textual descriptions of the visual and aural modalities, obtained from BLIP2, Whisper and ImageBind. Without needing additional finetuning of videotext models or datasets, we demonstrate that available LLMs have the ability to use these multimodal textual descriptions as proxies for "sight" or "hearing" and perform zeroshot multimodal classification of videos incontext. Our evaluations on popular action recognition benchmarks, such as UCF101 or Kinetics, show these contextrich descriptions can be successfully used in video understanding tasks. This method points towards a promising new research direction in multimodal classification, demonstrating how an interplay between textual, visual and auditory machine learning models can enable more holistic video understanding.
PGDiff Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance ; Exploiting pretrained diffusion models for restoration has recently become a favored alternative to the traditional taskspecific training approach. Previous works have achieved noteworthy success by limiting the solution space using explicit degradation models. However, these methods often fall short when faced with complex degradations as they generally cannot be precisely modeled. In this paper, we propose PGDiff by introducing partial guidance, a fresh perspective that is more adaptable to realworld degradations compared to existing works. Rather than specifically defining the degradation process, our approach models the desired properties, such as image structure and color statistics of highquality images, and applies this guidance during the reverse diffusion process. These properties are readily available and make no assumptions about the degradation process. When combined with a diffusion prior, this partial guidance can deliver appealing results across a range of restoration tasks. Additionally, PGDiff can be extended to handle composite tasks by consolidating multiple highquality image properties, achieved by integrating the guidance from respective tasks. Experimental results demonstrate that our method not only outperforms existing diffusionpriorbased approaches but also competes favorably with taskspecific models.
Making Small Language Models Better Multitask Learners with MixtureofTaskAdapters ; Recently, Large Language Models LLMs have achieved amazing zeroshot learning performance over a variety of Natural Language Processing NLP tasks, especially for text generative tasks. Yet, the large size of LLMs often leads to the high computational cost of model training and online deployment. In our work, we present ALTER, a system that effectively builds the multitAsk Learners with mixTureoftaskadaptERs upon small language models with 1B parameters to address multiple NLP tasks simultaneously, capturing the commonalities and differences between tasks, in order to support domainspecific applications. Specifically, in ALTER, we propose the MixtureofTaskAdapters MTA module as an extension to the transformer architecture for the underlying model to capture the intratask and intertask knowledge. A twostage training method is further proposed to optimize the collaboration between adapters at a small computational cost. Experimental results over a mixture of NLP tasks show that our proposed MTA architecture and the twostage training method achieve good performance. Based on ALTER, we have also produced MTAequipped language models for various domains.
A comparative study on maximum mass and radius of compact star from Heintzmann geometry and TOV approach ; In this article a class of anisotropic compact stars is analysed in Heintzmann geometry. We have introduced the pressure anisotropy parameter $\alpha$ and solved the Einstein field equations to obtain a stellar model. We have considered the $g_{tt}$ component as proposed by Heintzmann and, by solving the Einstein field equations, the $g_{rr}$ component is evaluated in the presence of pressure anisotropy. It is noted that for an isotropic star ($\alpha=0$), the maximum mass lies within the range $1.87-3.04~M_\odot$ for radii ranging between $8-13$ km. For anisotropic compact stars the maximum mass increases with $\alpha$ and lies within the range $1.99-3.23~M_\odot$ for anisotropy parameter $\alpha=0.5$. The physical viability of the model is examined by applying it to study the properties of a few known compact objects. It is noted that all the stability conditions are fulfilled in the proposed model. It is interesting to note that the maximum mass calculated from our model and from solving the TOV equation are approximately the same, and the predicted radii of a few newly observed pulsars and of the companion stars of the GW events GW 190814 and GW 170817 from our model comply with the radius values estimated from observation.
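For context, the TOV comparison mentioned above typically rests on the hydrostatic equilibrium equation for an anisotropic fluid (a standard form in units $G=c=1$, with radial pressure $p_r$ and tangential pressure $p_t$; not necessarily the paper's exact parametrisation):

```latex
\frac{dp_r}{dr}
  = -\,\frac{(\rho + p_r)\,\bigl(m(r) + 4\pi r^{3} p_r\bigr)}{r\,\bigl(r - 2m(r)\bigr)}
    \;+\; \frac{2\,(p_t - p_r)}{r},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\rho .
```

The anisotropy term $2(p_t - p_r)/r$ vanishes for $\alpha=0$, recovering the isotropic TOV equation.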
Examining the Limitations of Computational Rumor Detection Models Trained on Static Datasets ; A crucial aspect of a rumor detection model is its ability to generalize, particularly its ability to detect emerging, previously unknown rumors. Past research has indicated that contentbased i.e., using solely source posts as input rumor detection models tend to perform less effectively on unseen rumors. At the same time, the potential of contextbased models remains largely untapped. The main contribution of this paper is in the indepth evaluation of the performance gap between content and contextbased models specifically on detecting new, unseen rumors. Our empirical findings demonstrate that contextbased models are still overly dependent on the information derived from the rumors' source post and tend to overlook the significant role that contextual information can play. We also study the effect of data split strategies on classifier performance. Based on our experimental results, the paper also offers practical suggestions on how to minimize the effects of temporal concept drift in static datasets during the training of rumor detection methods.
A Spectral Theory of Neural Prediction and Alignment ; The representations of neural networks are often compared to those of biological systems by performing regression between the neural network responses and those measured from biological systems. Many different stateoftheart deep neural networks yield similar neural predictions, but it remains unclear how to differentiate among models that perform equally well at predicting neural responses. To gain insight into this, we use a recent theoretical framework that relates the generalization error from regression to the spectral bias of the model activations and the alignment of the neural responses onto the learnable subspace of the model. We extend this theory to the case of regression between model activations and neural responses, and define geometrical properties describing the error embedding geometry. We test a large number of deep neural networks that predict visual cortical activity and show that there are multiple types of geometries that result in low neural prediction error as measured via regression. The work demonstrates that carefully decomposing representational metrics can provide interpretability of how models are capturing neural activity and points the way towards improved models of neural activity.
Massive Endtoend Models for Short Search Queries ; In this work, we investigate two popular endtoend automatic speech recognition ASR models, namely Connectionist Temporal Classification CTC and RNNTransducer RNNT, for offline recognition of voice search queries, with up to 2B model parameters. The encoders of our models use the neural architecture of Google's universal speech model USM, with additional funnel pooling layers to significantly reduce the frame rate and speed up training and inference. We perform extensive studies on vocabulary size, time reduction strategy, and its generalization performance on longform test sets. Despite the speculation that, as the model size increases, CTC can be as good as RNNT which builds label dependency into the prediction, we observe that a 900M RNNT clearly outperforms a 1.8B CTC and is more tolerant to severe time reduction, although the WER gap can be largely removed by LM shallow fusion.
Orderpreserving Consistency Regularization for Domain Adaptation and Generalization ; Deep learning models fail on crossdomain challenges if the model is oversensitive to domainspecific attributes, e.g., lighting, background, camera angle, etc. To alleviate this problem, data augmentation coupled with consistency regularization is commonly adopted to make the model less sensitive to domainspecific attributes. Consistency regularization enforces the model to output the same representation or prediction for two views of one image. These constraints, however, are either too strict or not orderpreserving for the classification probabilities. In this work, we propose Orderpreserving Consistency Regularization OCR for crossdomain tasks. The orderpreserving property of the prediction makes the model robust to taskirrelevant transformations. As a result, the model becomes less sensitive to the domainspecific attributes. Comprehensive experiments show that our method achieves clear advantages on five different crossdomain tasks.
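A consistency term with the order-preserving property could look like the following sketch (our illustrative reading of the idea; the paper's OCR loss may be defined differently):

```python
import torch

def order_preserving_loss(logits_a, logits_b, margin=0.0):
    """Penalise class-pair rankings that flip between two views of an image."""
    # Pairwise differences p_i - p_j for every class pair, per sample: (B, C, C).
    diff_a = logits_a.unsqueeze(2) - logits_a.unsqueeze(1)
    diff_b = logits_b.unsqueeze(2) - logits_b.unsqueeze(1)
    # A pair is violated when the sign of the difference disagrees across views.
    violation = torch.relu(margin - diff_a * torch.sign(diff_b.detach()))
    return violation.mean()

# Usage with logits from two augmented views of the same batch:
logits_a = torch.randn(8, 10, requires_grad=True)
logits_b = torch.randn(8, 10)
loss = order_preserving_loss(logits_a, logits_b)
loss.backward()
```

Unlike an exact-match constraint (e.g. an L2 or KL penalty between the two predictions), this only asks that the ranking of class probabilities is preserved, which is the weaker, order-preserving requirement the abstract contrasts against.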
Does the most sinfully decadent cake ever taste good Answering YesNo Questions from Figurative Contexts ; Figurative language is commonplace in natural language, and while making communication memorable and creative, can be difficult to understand. In this work, we investigate the robustness of Question Answering QA models on figurative text. Yesno questions, in particular, are a useful probe of figurative language understanding capabilities of large language models. We propose FigurativeQA, a set of 1000 yesno questions with figurative and nonfigurative contexts, extracted from the domains of restaurant and product reviews. We show that stateoftheart BERTbased QA models exhibit an average performance drop of up to 15 points when answering questions from figurative contexts, as compared to nonfigurative ones. While models like GPT3 and ChatGPT are better at handling figurative texts, we show that further performance gains can be achieved by automatically simplifying the figurative contexts into their nonfigurative literal counterparts. We find that the best overall model is ChatGPT with chainofthought prompting to generate nonfigurative contexts. Our work provides a promising direction for building more robust QA models with figurative language understanding capabilities.
Towards GeneralPurpose TextInstructionGuided Voice Conversion ; This paper introduces a novel voice conversion VC model, guided by text instructions such as "articulate slowly with a deep tone" or "speak in a cheerful boyish voice". Unlike traditional methods that rely on reference utterances to determine the attributes of the converted speech, our model adds versatility and specificity to voice conversion. The proposed VC model is a neural codec language model which processes a sequence of discrete codes, resulting in the code sequence of converted speech. It utilizes text instructions as style prompts to modify the prosody and emotional information of the given speech. In contrast to previous approaches, which often rely on employing separate encoders like prosody and content encoders to handle different aspects of the source speech, our model handles various information of speech in an endtoend manner. Experiments have demonstrated the impressive capabilities of our model in comprehending instructions and delivering reasonable results.
Graph Neural Prompting with Large Language Models ; Large Language Models LLMs have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. In addition, how to leverage the pretrained LLMs and avoid training a customized model from scratch remains an open question. In this work, we propose Graph Neural Prompting GNP, a novel plugandplay method to assist pretrained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a crossmodality pooling module, a domain projector, and a selfsupervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings.
MedEdit Model Editing for Medical Question Answering with External Knowledge Bases ; Large Language Models LLMs, although powerful in general domains, often perform poorly on domainspecific tasks like medical question answering QA. Moreover, they tend to function as blackboxes, making it challenging to modify their behavior. Addressing this, our study delves into model editing utilizing incontext learning, aiming to improve LLM responses without the need for finetuning or retraining. Specifically, we propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then we incorporate them into the query prompt for the LLM. Focusing on medical QA using the MedQASMILE dataset, we evaluate the impact of different retrieval models and the number of facts provided to the LLM. Notably, our edited Vicuna model exhibited an accuracy improvement from 44.46% to 48.54%. This work underscores the potential of model editing to enhance LLM performance, offering a practical approach to mitigate the challenges of blackbox LLMs.
SemiPersistent Scheduling in NR Sidelink Mode 2 MAC Packet Reception Ratio Model and Validation ; 5G NR Sidelink SL has demonstrated the promising capability for infrastructureless cellular coverage. Understanding the fundamentals of the NR SL channel access mechanism, SemiPersistent Scheduling SPS, which is specified by the 3rd Generation Partnership Project 3GPP, is a necessity to enhance the NR SL Packet Reception Ratio PRR. However, most existing works fail to account for the new SPS features introduced in NR SL, which might be outofdate for comprehensively describing the NR SL PRR. The existing models ignore the relationships between SPS parameters and therefore do not provide sufficient insights into the PRR of SPS. This work proposes a novel SPS PRR model incorporating MAC collisions based on new features in NR SL. We extend our model by loosening several simplifying assumptions made in our initial modeling. The extended models illustrate how the PRR is affected by various SPS parameters. The computed results are validated via simulations using the network simulator ns3, which provides important guidelines for future NR SL enhancement work.
DataInf Efficiently Estimating Data Influence in LoRAtuned LLMs and Diffusion Models ; Quantifying the impact of training data points is crucial for understanding the outputs of machine learning models and for improving the transparency of the AI pipeline. The influence function is a principled and popular data attribution method, but its computational cost often makes it challenging to use. This issue becomes more pronounced in the setting of large language models and texttoimage models. In this work, we propose DataInf, an efficient influence approximation method that is practical for largescale generative AI models. Leveraging an easytocompute closedform expression, DataInf outperforms existing influence computation algorithms in terms of computational and memory efficiency. Our theoretical analysis shows that DataInf is particularly wellsuited for parameterefficient finetuning techniques such as LoRA. Through systematic empirical evaluations, we show that DataInf accurately approximates influence scores and is orders of magnitude faster than existing methods. In applications to RoBERTa-large, Llama-2-13B-chat, and stable-diffusion-v1.5 models, DataInf effectively identifies the most influential finetuning examples better than other approximate influence scores. Moreover, it can help to identify which data points are mislabeled.
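For orientation, the quantity that influence-function methods approximate is the classical influence of a training point on the test loss (Koh and Liang's formulation); per the abstract, DataInf's contribution is an easy-to-compute closed-form approximation of the inverse-Hessian product below, rather than a new definition of influence:

```latex
\mathcal{I}(z_i, z_{\mathrm{test}})
  = -\,\nabla_\theta \ell(z_{\mathrm{test}}, \hat\theta)^{\top}
      H_{\hat\theta}^{-1}\,
      \nabla_\theta \ell(z_i, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{j=1}^{n} \nabla_\theta^{2}\,\ell(z_j, \hat\theta).
```

The $H^{-1}$ term is what makes exact influence computation infeasible at LLM scale, which is why closed-form approximations matter.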
Unsupervised Roofline Extraction from True Orthophotos for LoD2 Building Model Reconstruction ; This paper discusses the reconstruction of LoD2 building models from 2D and 3D data for largescale urban environments. Traditional methods involve the use of LiDAR point clouds, but due to high costs and long intervals associated with acquiring such data for rapidly developing areas, researchers have started exploring the use of point clouds generated from oblique aerial images. However, using such point clouds for traditional plane detectionbased methods can result in significant errors and introduce noise into the reconstructed building models. To address this, this paper presents a method for extracting rooflines from true orthophotos using line detection for the reconstruction of building models at the LoD2 level. The approach is able to extract relatively complete rooflines without the need for prelabeled training data or pretrained models. These lines can directly be used in the LoD2 building model reconstruction process. The method is superior to existing plane detectionbased methods and stateoftheart deep learning methods in terms of the accuracy and completeness of the reconstructed building. Our source code is available at https://github.com/tudelft3d/Roofline-extraction-from-orthophotos.
Language Models Represent Space and Time ; The capabilities of large language models LLMs have sparked debate over whether such systems just learn an enormous collection of superficial statistics or a coherent model of the data generating process (a world model). We find evidence for the latter by analyzing the learned representations of three spatial datasets (world, US, NYC places) and three temporal datasets (historical figures, artworks, news headlines) in the Llama2 family of models. We discover that LLMs learn linear representations of space and time across multiple scales. These representations are robust to prompting variations and unified across different entity types e.g. cities and landmarks. In addition, we identify individual "space neurons" and "time neurons" that reliably encode spatial and temporal coordinates. Our analysis demonstrates that modern LLMs acquire structured knowledge about fundamental dimensions such as space and time, supporting the view that they learn not merely superficial statistics, but literal world models.
ResidualTransformer Residual Lowrank Learning with Weightsharing for Transformer Layers ; Memory constraint of alwayson devices is one of the major concerns when deploying speech processing models on these devices. While larger models trained with sufficiently large amount of data generally perform better, making them fit in the device memory is a demanding challenge. In this paper, we aim to reduce model size by reparameterizing model weights across Transformer encoder layers and assuming a special weight composition and structure. More specifically, inspired by ResNet and the more recent LoRA work, we propose an approach named ResidualTransformer, where each weight matrix in a Transformer layer comprises 1 a shared fullrank component with its adjacent layers, and 2 a unique lowrank component to itself. The lowrank matrices only account for a small amount of model size increase. In addition, we add diagonal weight matrices to improve modeling capacity of the lowrank matrices. Experiments of our 10khour speech recognition and speech translation tasks show that the Transformer encoder size can be reduced by 3X with very slight performance degradation.
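The weight composition described above can be sketched as follows (shapes, sharing granularity, and initialisation are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class ResidualLinear(nn.Module):
    """A linear layer whose weight is: shared full-rank + per-layer low-rank + diagonal."""
    def __init__(self, shared_weight, dim, rank=8):
        super().__init__()
        self.shared = shared_weight                 # full-rank component, shared across layers
        self.A = nn.Parameter(torch.zeros(dim, rank))
        self.B = nn.Parameter(torch.randn(rank, dim) * 0.01)
        self.diag = nn.Parameter(torch.zeros(dim))  # diagonal term to add modeling capacity

    def forward(self, x):
        w = self.shared + self.A @ self.B + torch.diag(self.diag)
        return x @ w.t()

# One shared full-rank matrix reused by several adjacent layers; only the
# small low-rank and diagonal parts are unique per layer.
dim = 256
shared = nn.Parameter(torch.randn(dim, dim) * 0.02)
layers = nn.ModuleList(ResidualLinear(shared, dim) for _ in range(4))
x = torch.randn(2, dim)
for layer in layers:
    x = layer(x)
```

Since the low-rank factors contribute only `2 * dim * rank + dim` parameters per layer while the full-rank matrix is amortised across layers, the total size shrinks roughly with the number of layers sharing each full-rank component.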
EndtoEnd Training of a Neural HMM with Label and Transition Probabilities ; We investigate a novel modeling approach for endtoend neural network training using hidden Markov models HMM where the transition probabilities between hidden states are modeled and learned explicitly. Most contemporary sequencetosequence models allow for fromscratch training by summing over all possible label segmentations in a given topology. In our approach there are explicit, learnable probabilities for transitions between segments as opposed to a blank label that implicitly encodes duration statistics. We implement a GPUbased forwardbackward algorithm that enables the simultaneous training of label and transition probabilities. We investigate recognition results and additionally Viterbi alignments of our models. We find that while the transition model training does not improve recognition performance, it has a positive impact on the alignment quality. The generated alignments are shown to be viable targets in stateoftheart Viterbi trainings.
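A toy version of a differentiable forward pass with explicit, learnable transition probabilities might look like this (topology and dimensions are illustrative; the paper uses a GPU-based forward-backward implementation over HMM topologies):

```python
import torch

def forward_log_prob(log_emit, log_trans, log_init):
    """log_emit: (T, S) frame emission scores; log_trans: (S, S); log_init: (S,)."""
    alpha = log_init + log_emit[0]
    for t in range(1, log_emit.shape[0]):
        # Sum over predecessor states in log space, then add this frame's emission.
        alpha = torch.logsumexp(alpha.unsqueeze(1) + log_trans, dim=0) + log_emit[t]
    return torch.logsumexp(alpha, dim=0)   # total sequence log-likelihood

T, S = 20, 5
emit_scores = torch.randn(T, S, requires_grad=True)    # would come from the network
trans_scores = torch.randn(S, S, requires_grad=True)   # explicit transition model
log_emit = torch.log_softmax(emit_scores, dim=-1)
log_trans = torch.log_softmax(trans_scores, dim=-1)
log_init = torch.log_softmax(torch.zeros(S), dim=-1)

loss = -forward_log_prob(log_emit, log_trans, log_init)
loss.backward()   # gradients flow to both emission and transition scores
```

The contrast with blank-based models is visible here: duration statistics live in `log_trans` as trainable parameters instead of being implicitly encoded by a blank label.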
Black Hole Based Tests of General Relativity ; General relativity has passed all solar system experiments and neutron star based tests, such as binary pulsar observations, with flying colors. A more exotic arena for testing general relativity is in systems that contain one or more black holes. Black holes are the most compact objects in the universe, providing probes of the strongestpossible gravitational fields. We are motivated to study strongfield gravity since many theories give large deviations from general relativity only at large field strengths, while recovering the weakfield behavior. In this article, we review how one can probe general relativity and various alternative theories of gravity by using electromagnetic waves from a black hole with an accretion disk, and gravitational waves from black hole binaries. We first review modelindependent ways of testing gravity with electromagnetic and gravitational waves from a black hole system. We then focus on selected examples of theories that extend general relativity in rather simple ways. Some important characteristics of general relativity include, but are not limited to: (i) only tensor gravitational degrees of freedom; (ii) the graviton is massless; (iii) no quadratic or higher curvatures in the action; and (iv) the theory is 4-dimensional. Altering a characteristic leads to a different extension of general relativity: (i) scalartensor theories, (ii) massive gravity theories, (iii) quadratic gravity, and (iv) theories with large extra dimensions. Within each theory, we describe black hole solutions, their properties, and current and projected constraints on each theory using black holebased tests of gravity. We close this review by listing some of the open problems in modelindependent tests and within each specific theory.
Phase Retrieval Under a Generative Prior ; The phase retrieval problem asks to recover a natural signal $y_0 \in \mathbb{R}^n$ from $m$ quadratic observations, where $m$ is to be minimized. As is common in many imaging problems, natural signals are considered sparse with respect to a known basis, and the generic sparsity prior is enforced via $\ell_1$ regularization. While successful in the realm of linear inverse problems, such $\ell_1$ methods have encountered possibly fundamental limitations, as no computationally efficient algorithm for phase retrieval of a $k$-sparse signal has been proven to succeed with fewer than $O(k^2 \log n)$ generic measurements, exceeding the theoretical optimum of $O(k \log n)$. In this paper, we propose a novel framework for phase retrieval by (1) modeling natural signals as being in the range of a deep generative neural network $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$ and (2) enforcing this prior directly by optimizing an empirical risk objective over the domain of the generator. Our formulation has provably favorable global geometry for gradient methods, as soon as $m = O(k d^2 \log n)$, where $d$ is the depth of the network. Specifically, when suitable deterministic conditions on the generator and measurement matrix are met, we construct a descent direction for any point outside of a small neighborhood around the unique global minimizer and its negative multiple, and show that such conditions hold with high probability under Gaussian ensembles of multilayer fullyconnected generator networks and measurement matrices. This formulation for structured phase retrieval thus has two advantages over sparsity based methods: (1) deep generative priors can more tightly represent natural signals and (2) information theoretically optimal sample complexity. We corroborate these results with experiments showing that exploiting generative models in phase retrieval tasks outperforms sparse phase retrieval methods.
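A common form of the empirical risk optimised over the generator's latent space is the following (a sketch; the paper's exact objective and measurement conventions, e.g. magnitudes versus squared magnitudes, may differ):

```latex
\min_{x \in \mathbb{R}^{k}} \; f(x)
  = \frac{1}{2}\,\bigl\| \,|A\,G(x)| - |A\,y_0|\, \bigr\|_2^{2},
```

where $A \in \mathbb{R}^{m \times n}$ is the measurement matrix and $|\cdot|$ acts entrywise. Note the optimisation variable is the $k$-dimensional latent code, not the $n$-dimensional signal, which is where the sample-complexity advantage comes from.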
On Relativistic Generalization of Perelman's Wentropy and Statistical Thermodynamic Description of Gravitational Fields ; Using double 2+2 and 3+1 nonholonomic fibrations on Lorentz manifolds, we extend the concept of Wentropy for gravitational fields in the general relativity, GR, theory. Such F and Wfunctionals were introduced in the Ricci flow theory of three dimensional, 3d, Riemannian metrics by G. Perelman, arXiv:math.DG/0211159. Nonrelativistic 3d Ricci flows are characterized by associated statistical thermodynamical values determined by Wentropy. Generalizations for geometric flows of 4d pseudoRiemannian metrics are considered for models with local thermodynamical equilibrium and separation of dissipative and nondissipative processes in relativistic hydrodynamics. The approach is elaborated in the framework of classical field theories (relativistic continuum and hydrodynamic models) without an underlying kinetic description, which will be elaborated in other works. The 3+1 splitting allows us to provide a general relativistic definition of gravitational entropy in the LyapunovPerelman sense. It increases monotonically as structure forms in the Universe. We can formulate a thermodynamic description of exact solutions in GR depending, in general, on all spacetime coordinates. A corresponding 2+2 splitting with nonholonomic deformation of linear connection and frame structures is necessary for generating in very general form various classes of exact solutions of the Einstein and general relativistic geometric flow equations. Finally, we speculate on physical macrostates and microstate interpretations of the Wentropy in GR, geometric flow theories and possible connections to string theory (a second unsolved problem also contained in Perelman's works) in Polyakov's approach.
Text2FaceGAN Face Generation from Fine Grained Textual Descriptions ; Powerful generative adversarial networks GAN have been developed to automatically synthesize realistic images from text. However, most existing tasks are limited to generating simple images such as flowers from captions. In this work, we extend this problem to the less addressed domain of face generation from finegrained textual descriptions of face, e.g., "A person has curly hair, oval face, and mustache". We are motivated by the potential of automated face generation to impact and assist critical tasks such as criminal face reconstruction. Since current datasets for the task are either very small or do not contain captions, we generate captions for images in the CelebA dataset by creating an algorithm to automatically convert a list of attributes to a set of captions. We then model the highly multimodal problem of text to face generation as learning the conditional distribution of faces conditioned on text in the same latent space. We utilize the current stateoftheart GAN (DCGAN) with GANCLS loss for learning conditional multimodality. The presence of more finegrained details and variable length of the captions makes the problem easier for a user but more difficult to handle compared to the other texttoimage tasks. We flipped the labels for real and fake images and added noise in the discriminator. Generated images for diverse textual descriptions show promising results. In the end, we show how the widely used Inception score is not a good metric to evaluate the performance of generative models used for synthesizing faces from text.
FFusionCGAN An endtoend fusion method for fewfocus images using conditional GAN in cytopathological digital slides ; Multifocus image fusion technologies compress different focus depth images into an image in which most objects are in focus. However, although existing image fusion techniques, including traditional algorithms and deep learningbased algorithms, can generate highquality fused images, they need multiple images with different focus depths in the same field of view. This criterion may not be met in some cases where time efficiency is required or the hardware is insufficient. The problem is especially prominent in largesize whole slide images. This paper focused on the multifocus image fusion of cytopathological digital slide images, and proposed a novel method for generating fused images from singlefocus or fewfocus images based on conditional generative adversarial network GAN. Through the adversarial learning of the generator and discriminator, the method is capable of generating fused images with clear textures and large depth of field. Combined with the characteristics of cytopathological images, this paper designs a new generator architecture combining UNet and DenseBlock, which can effectively improve the network's receptive field and comprehensively encode image features. Meanwhile, this paper develops a semantic segmentation network that identifies the blurred regions in cytopathological images. By integrating the network into the generative model, the quality of the generated fused images is effectively improved. Our method can generate fused images from only singlefocus or fewfocus images, thereby avoiding the problem of collecting multiple images of different focus depths with increased time and hardware costs. Furthermore, our model is designed to learn the direct mapping of input source images to fused images without the need to manually design complex activity level measurements and fusion rules as in traditional methods.
PerceptionGAN Realworld Image Construction from Provided Text through Perceptual Understanding ; Generating an image from a provided descriptive text is quite a challenging task because of the difficulty in incorporating perceptual information object shapes, colors, and their interactions along with providing high relevancy related to the provided text. Current methods first generate an initial lowresolution image, which typically has irregular object shapes, colors, and interaction between objects. This initial image is then improved by conditioning on the text. However, these methods mainly address the problem of using text representation efficiently in the refinement of the initially generated image, while the success of this refinement process depends heavily on the quality of the initially generated image, as pointed out in the DMGAN paper. Hence, we propose a method to provide good initialized images by incorporating perceptual understanding in the discriminator module. We improve the perceptual information at the first stage itself, which results in significant improvement in the final generated image. In this paper, we have applied our approach to the novel StackGAN architecture. We then show that the perceptual information included in the initial image is improved while modeling image distribution at multiple stages. Finally, we generated realistic multicolored images conditioned by text. These images have good quality along with containing improved basic perceptual information. More importantly, the proposed method can be integrated into the pipeline of other stateoftheart textbasedimagegeneration models to generate initial lowresolution images. We also worked on improving the refinement process in StackGAN by augmenting the third stage of the generatordiscriminator pair in the StackGAN architecture. Our experimental analysis and comparison with the stateoftheart on a large but sparse dataset MS COCO further validate the usefulness of our proposed approach.
When Do Extended PhysicsInformed Neural Networks XPINNs Improve Generalization ; Physicsinformed neural networks PINNs have become a popular choice for solving highdimensional partial differential equations PDEs due to their excellent approximation power and generalization ability. Recently, Extended PINNs XPINNs based on domain decomposition methods have attracted considerable attention due to their effectiveness in modeling multiscale and multiphysics problems and their parallelization. However, theoretical understanding on their convergence and generalization properties remains unexplored. In this study, we take an initial step towards understanding how and when XPINNs outperform PINNs. Specifically, for general multilayer PINNs and XPINNs, we first provide a prior generalization bound via the complexity of the target functions in the PDE problem, and a posterior generalization bound via the posterior matrix norms of the networks after optimization. Moreover, based on our bounds, we analyze the conditions under which XPINNs improve generalization. Concretely, our theory shows that the key building block of XPINN, namely the domain decomposition, introduces a tradeoff for generalization. On the one hand, XPINNs decompose the complex PDE solution into several simple parts, which decreases the complexity needed to learn each part and boosts generalization. On the other hand, decomposition leads to less training data being available in each subdomain, and hence such model is typically prone to overfitting and may become less generalizable. Empirically, we choose five PDEs to show when XPINNs perform better than, similar to, or worse than PINNs, hence demonstrating and justifying our new theory.
Improving the quality of generative models through Smirnov transformation ; Solving the convergence issues of Generative Adversarial Networks GANs is one of the most outstanding problems in generative models. In this work, we propose a novel activation function to be used as output of the generator agent. This activation function is based on the Smirnov probabilistic transformation and it is specifically designed to improve the quality of the generated data. In sharp contrast with previous works, our activation function provides a more general approach that deals not only with the replication of categorical variables but with any type of data distribution continuous or discrete. Moreover, our activation function is derivable and therefore, it can be seamlessly integrated in the backpropagation computations during the GAN training processes. To validate this approach, we evaluate our proposal against two different data sets a an artificially rendered data set containing a mixture of discrete and continuous variables, and b a real data set of flowbased network traffic data containing both normal connections and cryptomining attacks. To evaluate the fidelity of the generated data, we analyze both their results in terms of quality measures of statistical nature and also regarding the use of these synthetic data to feed a nested machine learningbased classifier. The experimental results evince a clear outperformance of the GAN network tuned with this new activation function with respect to both a naive meanbased generator and a standard GAN. The quality of the data is so high that the generated data can fully substitute real data for training the nested classifier without a fall in the obtained accuracy. This result encourages the use of GANs to produce highquality synthetic data that are applicable in scenarios in which data privacy must be guaranteed.
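One way to realise such an output activation is an inverse probability-integral transform, sketched below (a speculative reading of the construction; the paper's actual activation function and its differentiable form may differ):

```python
import torch

def smirnov_activation(z, sorted_real_samples):
    """Map raw generator outputs through an approximate inverse empirical CDF.

    z: raw pre-activations; sorted_real_samples: 1-D sorted reference data
    whose marginal distribution the generated variable should follow.
    """
    u = torch.sigmoid(z)                      # squash to a uniform-like variable in (0, 1)
    n = sorted_real_samples.numel()
    # Fractional index into the empirical quantile function, with linear
    # interpolation so that gradients pass through u (and hence through z).
    idx = u * (n - 1)
    lo = idx.floor().long().clamp(0, n - 2)
    frac = idx - lo.float()
    return sorted_real_samples[lo] * (1 - frac) + sorted_real_samples[lo + 1] * frac

# Usage: match a bimodal target marginal (continuous or discrete reference data
# both work, since only the sorted sample values enter the transform).
real = torch.sort(torch.cat([torch.randn(500) - 3, torch.randn(500) + 3])).values
fake = smirnov_activation(torch.randn(256), real)
```

Because the interpolation is piecewise linear in `u`, the transform is differentiable almost everywhere and can sit at the end of the generator during ordinary backpropagation, which is the property the abstract emphasises.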
On generating parametrised structural data using conditional generative adversarial networks ; A powerful approach, and one of the most common in structural health monitoring (SHM), is to use data-driven models to make predictions and inferences about structures and their condition. Such methods rely almost exclusively on the quality of the data. Within the SHM discipline, data do not always suffice to build models with satisfactory accuracy for given tasks. Even worse, data regarding the behaviour of a structure under different environmental conditions may be missing from one's dataset entirely. In the current work, with a view to confronting such issues, artificial data are generated using a variation of the generative adversarial network (GAN) algorithm, namely the conditional GAN, or cGAN. The algorithm is used not only to generate artificial data, but also to learn transformations of manifolds according to some known parameters. Assuming that the structure's response is represented by points on a manifold, part of the space will be formed by variations in the external conditions affecting the structure. This idea proves efficient in SHM, as it can be exploited to generate structural data for specific values of environmental coefficients. The scheme is applied here to a simulated structure which operates under varying temperature and humidity conditions. The cGAN is trained on data for some discrete values of the temperature within some range, and is able to generate data for every temperature in this range with satisfactory accuracy. The novelty, compared to classic regression in similar problems, is that the cGAN allows unknown environmental parameters to affect the structure and can generate whole manifolds of data for every value of the known parameters, while the unknown ones vary within the generated manifolds.
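A minimal sketch of the conditioning mechanism, assuming PyTorch (dimensions and layer sizes are illustrative): the generator, and symmetrically the discriminator, receives the environmental parameter as an extra input, which is what lets the trained cGAN interpolate to temperatures absent from the training set:

```python
import torch

class Generator(torch.nn.Module):
    def __init__(self, z_dim=16, cond_dim=1, out_dim=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim + cond_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, out_dim))

    def forward(self, z, cond):
        # concatenate latent noise with the known environmental parameter
        return self.net(torch.cat([z, cond], dim=1))

G = Generator()
z = torch.randn(32, 16)
temp = torch.full((32, 1), 21.5)   # request responses at 21.5 degrees C
fake_response = G(z, temp)         # structural response for that condition
```

The latent vector z plays the role of the unknown environmental parameters, so sampling z while holding the condition fixed traces out the manifold of responses at that temperature.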
Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction ; Many automated test generation techniques have been developed to aid developers with writing tests. To facilitate full automation, most existing techniques aim to either increase coverage or generate exploratory inputs. However, existing test generation techniques largely fall short of achieving more semantic objectives, such as generating tests to reproduce a given bug report. Reproducing bugs is nonetheless important, as our empirical study shows that the number of tests added in open source repositories due to issues was about 28% of the corresponding project test suite size. Meanwhile, due to the difficulty of transforming the expected program semantics in bug reports into test oracles, existing failure reproduction techniques tend to deal exclusively with program crashes, a small subset of all bug reports. To automate test generation from general bug reports, we propose LIBRO, a framework that uses Large Language Models (LLMs), which have been shown to be capable of performing code-related tasks. Since LLMs themselves cannot execute the target buggy code, we focus on post-processing steps that help us discern when LLMs are effective, and rank the produced tests according to their validity. Our evaluation of LIBRO shows that, on the widely studied Defects4J benchmark, LIBRO can generate failure-reproducing test cases for 33% of all studied cases (251 out of 750), while suggesting a bug-reproducing test in first place for 149 bugs. To mitigate data contamination, we also evaluate LIBRO against 31 bug reports submitted after the collection of the LLM training data terminated: LIBRO produces bug-reproducing tests for 32% of the studied bug reports. Overall, our results show that LIBRO has the potential to significantly enhance developer efficiency by automatically generating tests from bug reports.
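A hedged Python sketch of the overall loop in the spirit of LIBRO (the helpers `llm` and `run_test` are placeholders, and ranking by exact-duplicate count is a crude stand-in for the paper's post-processing, which clusters similar tests):

```python
def reproduce_bug(bug_report, llm, run_test, n_samples=50):
    """Prompt an LLM with a bug report, keep tests that compile and fail
    on the buggy version, and rank survivors by LLM agreement."""
    prompt = (
        "Write a JUnit test that reproduces the bug described below.\n"
        f"Title: {bug_report['title']}\n"
        f"Body: {bug_report['body']}\n")
    candidates = [llm(prompt) for _ in range(n_samples)]
    # A candidate "reproduces" the bug if it compiles and fails on the
    # buggy code; since the LLM cannot execute code itself, this check
    # is where the execution-based post-processing does its work.
    valid = [t for t in candidates if run_test(t) == "compiles_and_fails"]
    # Proxy for validity: tests the LLM produced repeatedly rank higher.
    return sorted(valid, key=candidates.count, reverse=True)
```

The key design point is that all semantic judgment is delegated to cheap execution signals (compilation, failure) rather than to the model's own claims about its output.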
Exploring the Advantages of Quantum Generative Adversarial Networks in Generative Chemistry ; De novo design of drugs with desired biological activities is crucial for developing novel therapeutics for patients. The drug development process is time- and resource-consuming, and it has a low probability of success. Recent advances in machine learning and deep learning technology have reduced the time and cost of the discovery process and, therefore, improved pharmaceutical research and development. In this paper, we explore the combination of two rapidly developing fields with lead candidate discovery in the drug development process. First, artificial intelligence has already been demonstrated to successfully accelerate conventional drug design approaches. Second, quantum computing has demonstrated promising potential in different applications, such as quantum chemistry, combinatorial optimization, and machine learning. This manuscript explores hybrid quantum-classical generative adversarial networks (GANs) for small-molecule discovery. We substituted each element of a GAN with a variational quantum circuit (VQC) and demonstrated quantum advantages in small-molecule drug discovery. Utilizing a VQC in the noise generator of a GAN to generate small molecules achieves better physicochemical properties and performance on the goal-directed benchmark than the classical counterpart. Moreover, we demonstrate the potential of a VQC with only tens of learnable parameters in the generator of a GAN to generate small molecules. We also demonstrate the quantum advantage of a VQC in the discriminator of a GAN. In this hybrid model, the number of learnable parameters is significantly smaller than in the classical ones, and it can still generate valid molecules. The hybrid model with only tens of training parameters in the quantum discriminator outperforms the MLP-based one in terms of both generated molecule properties and the achieved KL divergence.
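To fix ideas, here is a toy numpy simulation of the kind of few-parameter VQC referred to above, with RY rotation layers entangled by a CNOT and Pauli-Z expectations read out as noise features (a generic textbook ansatz, not the paper's circuit):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target (basis |00>,|01>,|10>,|11>)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def vqc(params):
    # params: shape (layers, 2), one RY angle per qubit per layer
    state = np.zeros(4); state[0] = 1.0            # start in |00>
    for layer in params:
        state = np.kron(ry(layer[0]), ry(layer[1])) @ state
        state = CNOT @ state
    probs = state ** 2                             # amplitudes are real here
    z0 = probs @ np.array([1, 1, -1, -1])          # <Z> on qubit 0
    z1 = probs @ np.array([1, -1, 1, -1])          # <Z> on qubit 1
    return np.array([z0, z1])                      # latent noise features

# 6 parameters here; stacking more layers/qubits reaches the "tens of
# learnable parameters" regime discussed in the abstract.
noise = vqc(np.random.uniform(0, np.pi, size=(3, 2)))
```

In the hybrid GAN these expectation values replace the classical Gaussian noise fed to the generator, and the angles are trained jointly with the rest of the network.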
Formalism of general boundary conditions for continuum models ; Continuum models are particularly appealing for theoretical studies of bound states, due to the simplicity of their bulk Hamiltonians. The main challenge on this path is a systematic description of the boundary, which comes down to determining proper boundary conditions (BCs). BCs are a consequence of the fundamental principle of quantum mechanics, norm conservation of the wave function, which leads to the conservation of the probability current at the boundary. The notion of general BCs arises as the family of all possible BCs that satisfy the current-conservation principle. Ahari, Ortiz, and Seradjeh formulated a systematic procedure for deriving the general BCs from the current-conservation principle for the 1D Hamiltonian of the most general form. The procedure is based on the diagonalization of the current and leads to the universal "standardized" form of the general BCs, parameterized in a non-redundant, one-to-one way by unitary matrices. In this work, we substantiate, elucidate, and expand this formalism of general boundary conditions for continuum models, addressing in detail a number of important physical and mathematical points. We provide a detailed derivation of the general BCs from the current-conservation principle and establish the conditions under which they are admissible, in the sense that they describe a well-defined boundary; this is directly related to a subtle but crucial distinction between self-adjoint (hermitian) and merely symmetric operators. We provide a natural physical interpretation of the structure of the general BCs as a scattering process and an essential mathematical justification that the formalism is well-defined for Hamiltonians of momentum order higher than linear. We discuss the physical meaning of the general BCs and outline application schemes of the formalism, in particular for the study of bound states in topological systems.
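As a concrete one-channel illustration of this unitary parameterization (the standard self-adjoint-extension result for a free particle on the half-line x >= 0, stated in generic notation rather than the paper's):

```latex
\[
  j(0) \;=\; \frac{\hbar}{m}\,\mathrm{Im}\!\left[\psi^*(0)\,\psi'(0)\right] = 0
  \quad\Longleftrightarrow\quad
  |\psi_-| = |\psi_+|,
  \qquad
  \psi_\pm \;\equiv\; \psi(0) \pm i\,\ell\,\psi'(0),
\]
\[
  \text{general BC:}\qquad \psi_- \;=\; U\,\psi_+,
  \qquad U \in \mathrm{U}(1).
\]
```

Here \(\ell\) is an arbitrary length scale: \(U = -1\) recovers the Dirichlet BC \(\psi(0) = 0\), \(U = +1\) the Neumann BC \(\psi'(0) = 0\), and other phases Robin BCs. For N current-carrying channels, U is promoted to an N x N unitary matrix, which is the standardized form referred to above.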
Generating High-Quality Emotion Arcs For Low-Resource Languages Using Emotion Lexicons ; Automatically generated emotion arcs, which capture how an individual or a population feels over time, are widely used in industry and research. However, there is little work on evaluating the generated arcs even in English, where emotion resources are available, and no work on generating or evaluating emotion arcs for low-resource languages. Work on generating emotion arcs in low-resource languages, such as those indigenous to Africa, the Americas, and Australia, is stymied by the lack of emotion-labeled resources and large language models for those languages. Work on evaluating emotion arcs for any language is scarce because of the difficulty of establishing the true gold emotion arc. Our work, for the first time, systematically and quantitatively evaluates automatically generated emotion arcs. We also compare two common ways of generating emotion arcs: Machine-Learning (ML) models and Lexicon-Only (LexO) methods. By running experiments on 42 diverse datasets in 9 languages, we show that despite being markedly poor at instance-level emotion classification, LexO methods are highly accurate at generating emotion arcs when aggregating information from hundreds of instances. Predicted arcs have correlations ranging from 0.94 to 0.99 with the gold arcs for various emotions. We also show that for languages with no emotion lexicons, automatic translations of English emotion lexicons can be used to generate high-quality emotion arcs, with correlations above 0.9 with the gold emotion arcs in all six indigenous African languages explored. This opens up avenues for work on emotions in numerous languages from around the world, which is crucial not only for commerce, public policy, and health research in service of speakers of those languages, but also for drawing meaningful conclusions in emotion-pertinent research using information from around the world, thereby avoiding a western-centric bias in research.
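A minimal sketch of the LexO idea (the toy lexicon entries and window size are illustrative): score each instance by the mean lexicon value of its words, then smooth over a rolling window of instances to obtain the arc, so that per-instance errors average out:

```python
import numpy as np

lexicon = {"joy": 0.8, "happy": 0.9, "sad": -0.7, "grief": -0.9}  # toy entries

def instance_score(text):
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return float(np.mean(hits)) if hits else 0.0

def emotion_arc(texts, window=100):
    scores = np.array([instance_score(t) for t in texts])
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="valid")  # rolling mean

texts = ["happy joy"] * 300 + ["sad grief"] * 300
arc = emotion_arc(texts)  # drifts from positive to negative over time
```

The aggregation step is why noisy instance-level predictions can still yield arcs with correlations above 0.9: errors that are roughly symmetric within a window cancel in the mean.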
Learning Profitable NFT Image Diffusions via Multiple Visual-Policy Guided Reinforcement Learning ; We study the task of generating profitable Non-Fungible Token (NFT) images from user-input texts. Recent advances in diffusion models have shown great potential for image generation. However, existing works can fall short in generating visually pleasing and highly profitable NFT images, mainly due to the lack of (1) plentiful and fine-grained visual attribute prompts for an NFT image, and (2) effective optimization metrics for generating high-quality NFT images. To solve these challenges, we propose a diffusion-based generation framework with multiple visual policies as rewards (i.e., DiffusionMVP) for NFT images. The proposed framework consists of a large language model (LLM), a diffusion-based image generator, and a series of visual rewards by design. First, the LLM enhances a basic human input, such as "panda", by generating more comprehensive NFT-style prompts that include specific visual attributes, such as "panda with Ninja style and green background". Second, the diffusion-based image generator is fine-tuned using a large-scale NFT dataset to capture fine-grained image styles and accessory compositions of popular NFT elements. Third, we further propose to utilize multiple visual policies as optimization goals, including visual rarity levels, visual aesthetic scores, and CLIP-based text-image relevance. This design ensures that our proposed DiffusionMVP is capable of minting NFT images with high visual quality and market value. To facilitate this research, we have collected the largest publicly available NFT image dataset to date, consisting of 1.5 million high-quality images with corresponding texts and market values. Extensive experiments, including objective evaluations and user studies, demonstrate that our framework can generate NFT images showing more visually engaging elements and higher market value, compared with SOTA approaches.
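A hedged sketch of how multiple visual policies might be folded into a single scalar reward for fine-tuning (the scorer interfaces and weights are assumptions, not the paper's exact design):

```python
def nft_reward(image, prompt, rarity_model, aesthetic_model, clip_model,
               w=(0.4, 0.3, 0.3)):
    """Combine the three visual policies named in the abstract into one
    scalar reward; each scorer is assumed to return a value in [0, 1]."""
    r_rarity = rarity_model(image)          # predicted visual rarity level
    r_aesthetic = aesthetic_model(image)    # visual aesthetic score
    r_clip = clip_model(image, prompt)      # CLIP text-image relevance
    return w[0] * r_rarity + w[1] * r_aesthetic + w[2] * r_clip

# toy scorers standing in for the real reward models
rarity = lambda img: 0.7
aesthetic = lambda img: 0.5
clip_rel = lambda img, txt: 0.9
print(nft_reward("img", "panda with Ninja style", rarity, aesthetic, clip_rel))
```

In the RL fine-tuning loop, this scalar would be the signal that the diffusion generator is optimized against, trading off rarity, aesthetics, and prompt fidelity according to the weights.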
Graph Contrastive Learning with Generative Adversarial Network ; Graph Neural Networks (GNNs) have demonstrated promising results in exploiting node representations for many downstream tasks through supervised end-to-end training. To deal with the widespread label-scarcity issue in real-world applications, Graph Contrastive Learning (GCL) is leveraged to train GNNs with limited or even no labels by maximizing the mutual information between nodes in augmented views generated from the original graph. However, most existing literature leaves the distribution of graphs unconsidered in view generation and thereby ignores unseen edges, which our experiments empirically show can improve GCL's performance. To this end, we propose to incorporate graph generative adversarial networks (GANs) to learn the distribution of views for GCL, in order to (i) automatically capture the characteristics of graphs for augmentation, and (ii) jointly train the graph GAN model and the GCL model. Specifically, we present GACN, a novel Generative Adversarial Contrastive learning Network for graph representation learning. GACN develops a view generator and a view discriminator to generate augmented views automatically in an adversarial style. Then, GACN leverages these views to train a GNN encoder with two carefully designed self-supervised learning losses: the graph contrastive loss and the Bayesian personalized ranking (BPR) loss. Furthermore, we design an optimization framework to train all GACN modules jointly. Extensive experiments on seven real-world datasets show that GACN is able to generate high-quality augmented views for GCL and is superior to twelve state-of-the-art baseline methods. Notably, our proposed GACN surprisingly discovers that the views generated in data augmentation finally conform to the well-known preferential attachment rule in online networks.
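A minimal sketch of the graph contrastive loss component, assuming an InfoNCE-style formulation over paired node embeddings from the two generated views (PyTorch; the BPR loss and the adversarial view generator/discriminator are omitted here):

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(z1, z2, tau=0.5):
    # z1, z2: (N, d) embeddings of the same N nodes under view 1 and view 2;
    # node i in view 1 should match node i in view 2 and repel all others.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                    # (N, N) cosine similarities
    labels = torch.arange(z1.size(0))          # positives on the diagonal
    return F.cross_entropy(sim, labels)

z1, z2 = torch.randn(64, 128), torch.randn(64, 128)  # toy encoder outputs
loss = graph_contrastive_loss(z1, z2)
```

Because the views themselves come from a learned generator rather than hand-crafted edge dropping, the distribution of augmentations, including plausible unseen edges, is shaped adversarially during the same joint training.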