Training Quantum Boltzmann Machines with the Variational Quantum Eigensolver ; The quantum Boltzmann machine (QBM) is a generative machine learning model for both classical data and quantum states. The training of the QBM consists of minimizing the relative entropy from the model to the target state. This requires QBM expectation values which are computationally intractable for large models in general. It is therefore important to develop heuristic training methods that work well in practice. In this work, we study a heuristic method characterized by a nested loop: the inner loop trains the $\beta$-variational quantum eigensolver ($\beta$-VQE) by Liu et al. (2021) to approximate the QBM expectation values; the outer loop trains the QBM to minimize the relative entropy to the target. We show that low-rank representations obtained by $\beta$-VQE provide an efficient way to learn low-rank target states, such as classical data and low-temperature quantum tomography. We test the method on both classical and quantum target data with numerical simulations of up to 10 qubits. For the cases considered here, the obtained QBMs can model the target to high fidelity. The approach offers a valuable route towards variationally training QBMs on near-term quantum devices.
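A minimal sketch of the nested-loop heuristic described above, assuming the standard relative-entropy gradient for QBMs (the derivative with respect to the coefficient of the $i$-th Hamiltonian term is the difference of model and target expectations of that term); the `estimate_expectations` callable is a placeholder for the inner $\beta$-VQE routine, not the authors' code:

```python
import numpy as np

# Sketch of the nested-loop training heuristic. The inner loop (here the
# `estimate_expectations` callable, standing in for beta-VQE) approximates
# the intractable QBM expectation values <H_i>_model; the outer loop does
# gradient descent on the relative entropy, whose gradient per term is
# <H_i>_model - <H_i>_target.
def train_qbm(target_expectations, estimate_expectations, lr=0.1, outer_steps=100):
    theta = np.zeros_like(target_expectations)   # QBM Hamiltonian coefficients
    for _ in range(outer_steps):
        model_expectations = estimate_expectations(theta)  # inner beta-VQE loop
        grad = model_expectations - target_expectations    # relative-entropy gradient
        theta -= lr * grad
    return theta
```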
Modulating human brain responses via optimal natural image selection and synthetic image generation ; Understanding how human brains interpret and process information is important. Here, we investigated the selectivity and inter-individual differences in human brain responses to images via functional MRI. In our first experiment, we found that images predicted to achieve maximal activations using a group-level encoding model evoke higher responses than images predicted to achieve average activations, and the activation gain is positively associated with the encoding model accuracy. Furthermore, aTL-faces and FBA-1 had higher activation in response to maximal synthetic images compared to maximal natural images. In our second experiment, we found that synthetic images derived using a personalized encoding model elicited higher responses compared to synthetic images from group-level or other subjects' encoding models. The finding of aTL-faces favoring synthetic images over natural images was also replicated. Our results indicate the possibility of using data-driven and generative approaches to modulate macro-scale brain region responses and probe inter-individual differences in and functional specialization of the human visual system.
Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock Recommendation via Split Variational Adversarial Training ; In the stock market, a successful investment requires a good balance between profits and risks. Recently, stock recommendation has been widely studied in quantitative investment to select stocks with higher return ratios for investors. Despite the success in making profits, most existing recommendation approaches are still weak in risk control, which may lead to intolerable paper losses in practical stock investing. To effectively reduce risks, we draw inspiration from adversarial perturbations and propose a novel Split Variational Adversarial Training (SVAT) framework for risk-aware stock recommendation. Essentially, SVAT encourages the model to be sensitive to adversarial perturbations of risky stock examples and enhances the model's risk awareness by learning from perturbations. To generate representative adversarial examples as risk indicators, we devise a variational perturbation generator to model diverse risk factors. Particularly, the variational architecture enables our method to provide a rough risk quantification for investors, showing an additional advantage of interpretability. Experiments on three real-world stock market datasets show that SVAT effectively reduces the volatility of the stock recommendation model and outperforms state-of-the-art baseline methods by more than 30% in terms of risk-adjusted profits.
LaMP: When Large Language Models Meet Personalization ; This paper highlights the importance of personalization in the current state of natural language understanding and generation and introduces the LaMP benchmark, a novel benchmark for training and evaluating language models for producing personalized outputs. LaMP offers a comprehensive evaluation framework with diverse language tasks and multiple entries for each user profile. It consists of seven personalized tasks, spanning three classification and four text generation tasks. We also propose a retrieval augmentation approach that retrieves personalized items from user profiles to construct personalized prompts for large language models. Our baseline zero-shot and fine-tuned model results indicate that LMs utilizing profile augmentation outperform their counterparts that do not factor in profile information.
Equation discovery of organized cloud fields: Stochastic generator and dynamical insights ; The emergence of organized multiscale patterns resulting from convection is ubiquitous, observed throughout different cloud types. The reproduction of such patterns by general circulation models remains a challenge due to the complex nature of clouds, characterized by processes interacting over a wide range of spatiotemporal scales. New advances in data-driven modeling techniques have shown a lot of promise for discovering dynamical equations from partial observations of complex systems. This study presents such a discovery from high-resolution satellite datasets of continental cloud fields. The model consists of stochastic differential equations able to simulate with high fidelity the spatiotemporal coherence and variability of the cloud patterns, such as the characteristic lifetime of individual clouds or global organizational features governed by convective inertia-gravity waves. This feat is achieved through the model's lagged effects, associated with convection recirculation times, and hidden variables parameterizing the unobserved processes and variables.
Halo Formation from Yukawa Forces in the Very Early Universe ; If long-range attractive forces exist and are stronger than gravity, then cosmic halo formation can begin in the radiation-dominated era. We study a simple realization of this effect in a system where dark matter fermions have Yukawa interactions mediated by scalar particles, analogous to the Higgs boson in the standard model. We develop a self-consistent description of the system, including exact background dynamics of the scalar field and precise modelling of the fermion density fluctuations. For the latter, we provide accurate approximations for the linear growth as well as quantitative modelling of the nonlinear evolution using N-body simulations. We find that halo formation occurs exponentially fast and on scales substantially larger than simple estimates predict. The final fate of these halos remains uncertain, but could be annihilation, dark stars, primordial black holes, or even the existence of galaxy-sized halos at matter-radiation equality. More generally, our results demonstrate the importance of mapping scalar-mediated interactions onto structure formation outcomes and constraints for beyond-the-standard-model theories.
Hopfield model with planted patterns: a teacher-student self-supervised learning model ; While Hopfield networks are known as paradigmatic models for memory storage and retrieval, modern artificial intelligence systems mainly stand on the machine learning paradigm. We show that it is possible to formulate a teacher-student self-supervised learning problem with Boltzmann machines in terms of a suitable generalization of the Hopfield model with structured patterns, where the spin variables are the machine weights and patterns correspond to the training set's examples. We analyze the learning performance by studying the phase diagram in terms of the training set size, the dataset noise and the inference temperature (i.e., the weight regularization). With a small but informative dataset the machine can learn by memorization. With a noisy dataset, an extensive number of examples above a critical threshold is needed. In this regime the memory storage limit of the system becomes an opportunity for the occurrence of a learning regime in which the system can generalize.
Multifield inflation with large scalar fluctuations: non-Gaussianity and perturbativity ; Recently, multifield inflation models that can produce large scalar fluctuations on small scales have drawn a lot of attention, primarily because they could lead to primordial black hole production and generation of large second-order gravitational waves. In this work, we focus on models where the scalar fields responsible for inflation live on a hyperbolic field space. In this case, geometrical destabilisation and non-geodesic motion are responsible for the peak in the scalar power spectrum. We present new results for scalar non-Gaussianity and discuss its dependence on the model's parameters. On scales around the peak, we typically find that the non-Gaussianity is large and close to local in form. We validate our results by employing two different numerical techniques, utilising the transport approach, based on full cosmological perturbation theory, and the $\delta N$ formalism, based on the separate universe approximation. We discuss implications of our results for the perturbativity of the underlying theory, focusing in particular on versions of these models with potentially relevant phenomenology at interferometer scales.
Low-Rank Structured MMSE Channel Estimation with Mixtures of Factor Analyzers ; This work proposes a generative modeling-aided channel estimator based on mixtures of factor analyzers (MFA). In an offline step, the parameters of the generative model are inferred via an expectation-maximization (EM) algorithm in order to learn the underlying channel distribution of a whole communication scenario inside a base station (BS) cell. Thereby, the wireless channels are effectively modeled on a piecewise linear subspace, which is achieved by the low-rank structure of the learned covariances of the MFA. This suits the low-rank structure of wireless channels at high frequencies and additionally saves parameters and prevents overfitting. Afterwards, the trained MFA model is used online to perform channel estimation with a closed-form solution of the estimator which asymptotically converges to the minimum mean square error (MMSE) estimator. Numerical results based on real-world measurements demonstrate the great potential of the proposed approach for channel estimation.
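For readers unfamiliar with MFA-based estimation, a sketch of the standard ingredients (notation assumed here, not quoted from the paper): the channel prior is a mixture of factor analyzers with low-rank-plus-diagonal covariances, and the conditional mean for a noisy observation $\mathbf{y} = \mathbf{h} + \mathbf{n}$, $\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$, is a closed-form, responsibility-weighted sum of per-component LMMSE filters:

```latex
\[
p(\mathbf{h}) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\left(\mathbf{h};\,
  \boldsymbol{\mu}_k,\, \mathbf{C}_k\right), \qquad
\mathbf{C}_k = \mathbf{W}_k \mathbf{W}_k^{\mathsf{H}} + \boldsymbol{\Psi}_k,
\]
\[
\hat{\mathbf{h}}(\mathbf{y}) = \sum_{k=1}^{K} p(k \mid \mathbf{y})
  \left[ \boldsymbol{\mu}_k + \mathbf{C}_k
  \left( \mathbf{C}_k + \sigma^2 \mathbf{I} \right)^{-1}
  (\mathbf{y} - \boldsymbol{\mu}_k) \right].
\]
```

The low-rank factor loadings $\mathbf{W}_k$ and diagonal $\boldsymbol{\Psi}_k$ give the piecewise linear subspace structure mentioned in the abstract.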
Unsupervised Discovery of 3D Hierarchical Structure with Generative Diffusion Features ; Inspired by recent findings that generative diffusion models learn semantically meaningful representations, we use them to discover the intrinsic hierarchical structure in biomedical 3D images using unsupervised segmentation. We show that features of diffusion models from different stages of a U-Net-based ladder-like architecture capture different hierarchy levels in 3D biomedical images. We design three losses to train a predictive unsupervised segmentation network that encourages the decomposition of 3D volumes into meaningful nested subvolumes that represent a hierarchy. First, we pretrain 3D diffusion models and use the consistency of their features across subvolumes. Second, we use the visual consistency between subvolumes. Third, we use the invariance to photometric augmentations as a regularizer. Our models achieve better performance than prior unsupervised structure discovery approaches on challenging biologically-inspired synthetic datasets and on a real-world brain tumor MRI dataset.
Learning Locally Editable Virtual Humans ; In this paper, we propose a novel hybrid representation and end-to-end trainable network architecture to model fully editable and customizable neural avatars. At the core of our work lies a representation that combines the modeling power of neural fields with the ease of use and inherent 3D consistency of skinned meshes. To this end, we construct a trainable feature codebook to store local geometry and texture features on the vertices of a deformable body model, thus exploiting its consistent topology under articulation. This representation is then employed in a generative auto-decoder architecture that admits fitting to unseen scans and sampling of realistic avatars with varied appearances and geometries. Furthermore, our representation allows local editing by swapping local features between 3D assets. To verify our method for avatar creation and editing, we contribute a new high-quality dataset, dubbed CustomHumans, for training and evaluation. Our experiments quantitatively and qualitatively show that our method generates diverse detailed avatars and achieves better model fitting performance compared to state-of-the-art methods. Our code and dataset are available at https://custom-humans.github.io.
Synthetic Data for Face Recognition: Current State and Future Prospects ; Over the past years, deep learning capabilities and the availability of large-scale training datasets advanced rapidly, leading to breakthroughs in face recognition accuracy. However, these technologies are foreseen to face a major challenge in the next years due to the legal and ethical concerns about using authentic biometric data in AI model training and evaluation, along with the increasing use of data-hungry state-of-the-art deep learning models. With the recent advances in deep generative models and their success in generating realistic and high-resolution synthetic image data, privacy-friendly synthetic data has been recently proposed as an alternative to privacy-sensitive authentic data to overcome the challenges of using authentic data in face recognition development. This work aims at providing a clear and structured picture of the use-case taxonomy of synthetic face data in face recognition, along with the recent emerging advances of face recognition models developed on the basis of synthetic data. We also discuss the challenges facing the use of synthetic data in face recognition development and several future prospects of synthetic data in the domain of face recognition.
$\mathcal{H}_2$ optimal model reduction on general domains ; Optimal model reduction for large-scale linear dynamical systems is studied. In contrast to most existing works, the systems under consideration are not required to be stable, neither in discrete nor in continuous time. As a consequence, the underlying rational transfer functions are allowed to have poles in general domains in the complex plane. In particular, this covers the case of specific conservative partial differential equations such as the linear Schrödinger and the undamped linear wave equation with spectra on the imaginary axis. By an appropriate modification of the classical continuous-time Hardy space $\mathcal{H}_2$, a new $\mathcal{H}_2$-like optimal model reduction problem is introduced and first-order optimality conditions are derived. As in the classical $\mathcal{H}_2$ case, these conditions exhibit a rational Hermite interpolation structure, for which an iterative model reduction algorithm is proposed. Numerical examples demonstrate the effectiveness of the new method.
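As context for the generalization, the classical continuous-time setting (standard notation, assumed here): the $\mathcal{H}_2$ norm and the Meier-Luenberger Hermite interpolation conditions that a degree-$r$ optimal approximant $H_r$ with simple poles $\lambda_1,\dots,\lambda_r$ must satisfy are

```latex
\[
\|H\|_{\mathcal{H}_2}^2 = \frac{1}{2\pi} \int_{-\infty}^{\infty}
  \|H(i\omega)\|_F^2 \, d\omega,
\]
\[
H(-\overline{\lambda}_j) = H_r(-\overline{\lambda}_j), \qquad
H'(-\overline{\lambda}_j) = H_r'(-\overline{\lambda}_j), \qquad j = 1, \dots, r.
\]
```

The paper's modification replaces the imaginary axis by the boundary of a general pole-containing domain; the rational Hermite interpolation structure above is what carries over.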
Huatuo-26M, a Large-scale Chinese Medical QA Dataset ; In this paper, we release the largest ever medical Question Answering (QA) dataset, with 26 million QA pairs. We benchmark many existing approaches on our dataset in terms of both retrieval and generation. Experimental results show that existing models perform far below expectations and that the released dataset is still challenging in the pretrained language model era. Moreover, we also experimentally show the benefit of the proposed dataset in many aspects: (i) training models for other QA datasets in a zero-shot fashion; (ii) serving as external knowledge for retrieval-augmented generation (RAG); and (iii) improving existing pretrained language models by using the QA pairs as a pretraining corpus in a continued-training manner. We believe that this dataset will not only contribute to medical research but also benefit both patients and clinical doctors. See https://github.com/FreedomIntelligence/Huatuo-26M.
Response-conditioned Turn-taking Prediction ; Previous approaches to turn-taking and response generation in conversational systems have treated it as a two-stage process: first, the end of a turn is detected based on conversation history, then the system generates an appropriate response. Humans, however, do not take the turn just because it is likely, but also consider whether what they want to say fits the position. In this paper, we present a model, an extension of TurnGPT, that conditions the end-of-turn prediction on both conversation history and what the next speaker wants to say. We found that our model consistently outperforms the baseline model on a variety of metrics. The improvement is most prominent in two scenarios where turn predictions can be ambiguous solely from the conversation history: (1) when the current utterance contains a statement followed by a question; (2) when the end of the current utterance semantically matches the response. Treating turn-prediction and response-ranking as a one-stage process, our findings suggest that our model can be used as an incremental response ranker, which can be applied in various settings.
Regression in quotient metric spaces with a focus on elastic curves ; We propose regression models for curve-valued responses in two or more dimensions, where only the image but not the parametrization of the curves is of interest. Examples of such data are handwritten letters, movement paths or outlines of objects. In the square-root-velocity framework, a parametrization-invariant distance for curves is obtained as the quotient space metric with respect to the action of reparametrization, which is by isometries. With this special case in mind, we discuss the generalization of 'linear' regression to quotient metric spaces more generally, before illustrating the usefulness of our approach for curves modulo reparametrization. We address the issue of sparsely or irregularly sampled curves by using splines for modeling smooth conditional mean curves. We test this model in simulations and apply it to human hippocampal outlines, obtained from Magnetic Resonance Imaging scans. Here we model how the shape of the irregularly sampled hippocampus is related to age, Alzheimer's disease and sex.
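A sketch of the square-root-velocity (SRV) construction underlying this quotient metric (standard definitions, assumed here rather than quoted from the paper): a curve $c$ is mapped to its SRV transform, under which reparametrization acts by isometries, so a parametrization-invariant distance is the infimum over the reparametrization group $\Gamma$:

```latex
\[
q(t) = \frac{\dot{c}(t)}{\sqrt{\|\dot{c}(t)\|}}, \qquad
d\big([c_1], [c_2]\big) = \inf_{\gamma \in \Gamma}
  \big\| q_1 - (q_2 \circ \gamma)\,\sqrt{\dot{\gamma}} \big\|_{L^2}.
\]
```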
Masked Trajectory Models for Prediction, Representation, and Control ; We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns versatile networks that can take on different roles or capabilities, simply by choosing appropriate masks at inference time. For example, the same MTM network can be used as a forward dynamics model, inverse dynamics model, or even an offline RL agent. Through extensive experiments in several continuous control tasks, we show that the same MTM network (i.e., same weights) can match or outperform specialized networks trained for the aforementioned capabilities. Additionally, we find that state representations learned by MTM can significantly accelerate the learning speed of traditional RL algorithms. Finally, in offline RL benchmarks, we find that MTM is competitive with specialized offline RL algorithms, despite MTM being a generic self-supervised learning method without any explicit RL components. Code is available at https://github.com/facebookresearch/mtm
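A minimal sketch of the masked-reconstruction objective described above (shapes and the `model` signature are illustrative, not the released code):

```python
import torch

def mtm_loss(model, trajectory, mask_ratio_range=(0.15, 0.95)):
    # trajectory: (batch, seq_len, dim) sequence of state/action tokens.
    batch, seq_len, _ = trajectory.shape
    # Highly randomized masking: sample a fresh mask ratio for each batch.
    ratio = torch.empty(1).uniform_(*mask_ratio_range).item()
    mask = torch.rand(batch, seq_len) < ratio            # True = hidden token
    masked_input = trajectory.masked_fill(mask.unsqueeze(-1), 0.0)
    prediction = model(masked_input, mask)               # reconstruct full trajectory
    return ((prediction - trajectory) ** 2)[mask].mean() # loss on masked positions only

# At inference time the mask selects the capability: masking only future
# states yields a forward dynamics model; masking actions given states
# yields an inverse dynamics model, all with the same trained weights.
```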
Prospects for $B_c \to \tau\nu_\tau$ and $B \to \tau\nu_\tau$ at FCC-ee ; The prospects are presented for precise measurements of the branching ratios of the purely leptonic $B_c \to \tau\nu_\tau$ and $B \to \tau\nu_\tau$ decays at the Future Circular Collider (FCC). Common FCC software tools are employed in all steps of this study. This work is focused on the hadronic $\tau \to \pi\pi\pi\bar{\nu}_\tau$ decay in both $B_c \to \tau\nu_\tau$ and $B \to \tau\nu_\tau$ processes. Events are selected with two Boosted Decision Tree algorithms to optimise the separation between the two signal processes as well as the generic hadronic $Z$ decay backgrounds. The range of the expected precision for both signals is evaluated in different scenarios of non-ideal background modelling. The theoretical impacts of such measurements are discussed in both the Standard Model context, for measurements of CKM matrix elements, as well as New Physics cases, for interpretations in the generic Two-Higgs-doublet model and leptoquark models.
Are VAEs Bad at Reconstructing Molecular Graphs? ; Many contemporary generative models of molecules are variational autoencoders of molecular graphs. One term in their training loss pertains to reconstructing the input, yet reconstruction capabilities of state-of-the-art models have not yet been thoroughly compared on a large and chemically diverse dataset. In this work, we show that when several state-of-the-art generative models are evaluated under the same conditions, their reconstruction accuracy is surprisingly low, worse than what was previously reported on seemingly harder datasets. However, we show that improving reconstruction does not directly lead to better sampling or optimization performance. Failed reconstructions from the MoLeR model are usually similar to the inputs, assembling the same motifs in a different way, and possess similar chemical properties such as solubility. Finally, we show that the input molecule and its failed reconstruction are usually mapped by the different encoders to statistically distinguishable posterior distributions, hinting that posterior collapse may not fully explain why VAEs are bad at reconstructing molecular graphs.
Can Large Language Models Transform Computational Social Science? ; Large Language Models (LLMs) like ChatGPT are capable of successfully performing many language processing tasks zero-shot, without the need for training data. If this capacity also applies to the coding of social phenomena like persuasiveness and political ideology, then LLMs could effectively transform Computational Social Science (CSS). This work provides a road map for using LLMs as CSS tools. Towards this end, we contribute a set of prompting best practices and an extensive evaluation pipeline to measure the zero-shot performance of 13 language models on 24 representative CSS benchmarks. On taxonomic labeling tasks (classification), LLMs fail to outperform the best fine-tuned models but still achieve fair levels of agreement with humans. On free-form coding tasks (generation), LLMs produce explanations that often exceed the quality of crowdworkers' gold references. We conclude that today's LLMs can radically augment the CSS research pipeline in two ways: (1) serving as zero-shot data annotators on human annotation teams, and (2) bootstrapping challenging creative generation tasks (e.g., explaining the hidden meaning behind text). In summary, LLMs can significantly reduce costs and increase the efficiency of social science analysis in partnership with humans.
Numerical study of Weibel instability driven by anisotropic electron temperature in collisionless plasmas ; We numerically investigate the process of generating magnetic fields from the temperature anisotropy of electrons in collisionless, initially uniform plasmas. We use fully kinetic modeling and compare it against a hybrid modeling approach which treats ions kinetically and uses a ten-moment fluid model for electrons. The results of the one-to-one comparison show good agreement in terms of the maximal magnitude of the self-generated magnetic field and similar trends during the nonlinear stage of the instability. Additionally, we performed hybrid modelling of the instability without resolving electron spatial scales. In this case the results agree only qualitatively; however, this shows that the hydrodynamic approach can be used to some extent for the simulation of the Weibel instability in large-scale systems, including astrophysical environments and laser-produced plasmas.
Large Language Models in Sport Science & Medicine: Opportunities, Risks and Considerations ; This paper explores the potential opportunities, risks, and challenges associated with the use of large language models (LLMs) in sports science and medicine. LLMs are large neural networks with transformer-style architectures trained on vast amounts of textual data, and typically refined with human feedback. LLMs can perform a large range of natural language processing tasks. In sports science and medicine, LLMs have the potential to support and augment the knowledge of sports medicine practitioners, make recommendations for personalised training programs, and potentially distribute high-quality information to practitioners in developing countries. However, there are also potential risks associated with the use and development of LLMs, including biases in the dataset used to create the model, the risk of exposing confidential data, the risk of generating harmful output, and the need to align these models with human preferences through feedback. Further research is needed to fully understand the potential applications of LLMs in sports science and medicine and to ensure that their use is ethical and beneficial to athletes, clients, patients, practitioners, and the general public.
Prompt What You Need: Enhancing Segmentation in Rainy Scenes with Anchor-based Prompting ; Semantic segmentation in rainy scenes is a challenging task due to the complex environment, class distribution imbalance, and limited annotated data. To address these challenges, we propose a novel framework that utilizes semi-supervised learning and a pretrained segmentation foundation model to achieve superior performance. Specifically, our framework leverages the semi-supervised model as the basis for generating raw semantic segmentation results, while also serving as a guiding force to prompt the pretrained foundation model to compensate for knowledge gaps with entropy-based anchors. In addition, to minimize the impact of irrelevant segmentation masks generated by the pretrained foundation model, we also propose a mask filtering and fusion mechanism that optimizes raw semantic segmentation results based on the principle of minimum risk. The proposed framework achieves superior segmentation performance on the Rainy WCity dataset and was awarded first prize in the sub-track of STRAIN in the ICME 2023 Grand Challenges.
Towards Better Graph Representation Learning with Parameterized Decomposition & Filtering ; Proposing an effective and flexible matrix to represent a graph is a fundamental challenge that has been explored from multiple perspectives, e.g., filtering in Graph Fourier Transforms. In this work, we develop a novel and general framework which unifies many existing GNN models from the view of parameterized decomposition and filtering, and show how it helps to enhance the flexibility of GNNs while alleviating the smoothness and amplification issues of existing models. Essentially, we show that the extensively studied spectral graph convolutions with learnable polynomial filters are constrained variants of this formulation, and releasing these constraints enables our model to express the desired decomposition and filtering simultaneously. Based on this generalized framework, we develop models that are simple in implementation but achieve significant improvements and computational efficiency on a variety of graph learning tasks. Code is available at https://github.com/qslim/PDF.
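The constrained baseline referred to above, in standard notation (assumed here): a spectral graph convolution with a learnable degree-$K$ polynomial filter of the graph Laplacian $\mathbf{L} = \mathbf{U} \boldsymbol{\Lambda} \mathbf{U}^{\mathsf{T}}$ ties the decomposition (the fixed eigenvectors $\mathbf{U}$) to the filtering (the polynomial $g_\theta$):

```latex
\[
\mathbf{y} = g_\theta(\mathbf{L})\,\mathbf{x}
  = \mathbf{U}\, g_\theta(\boldsymbol{\Lambda})\, \mathbf{U}^{\mathsf{T}} \mathbf{x},
\qquad
g_\theta(\boldsymbol{\Lambda}) = \sum_{k=0}^{K} \theta_k \boldsymbol{\Lambda}^{k}.
\]
```

Releasing these constraints, as the paper proposes, amounts to parameterizing the decomposition and the filtering separately rather than through a single polynomial.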
An Empirical Study on the Robustness of the Segment Anything Model (SAM) ; The Segment Anything Model (SAM) is a foundation model for general image segmentation. Although it exhibits impressive performance predominantly on natural images, understanding its robustness against various image perturbations and domains is critical for real-world applications where such challenges frequently arise. In this study, we conduct a comprehensive robustness investigation of SAM under diverse real-world conditions. Our experiments encompass a wide range of image perturbations. Our experimental results demonstrate that SAM's performance generally declines under perturbed images, with varying degrees of vulnerability across different perturbations. By customizing prompting techniques and leveraging domain knowledge based on the unique characteristics of each dataset, the model's resilience to these perturbations can be enhanced, addressing dataset-specific challenges. This work sheds light on the limitations and strengths of SAM in real-world applications, promoting the development of more robust and versatile image segmentation solutions.
White dwarf cooling in $f(R,T)$ gravity ; In recent times, astounding observations of both over- and under-luminous type Ia supernovae have emerged. These peculiar observations hint not only at surpassing the Chandrasekhar limit but may also suggest potential modifications in the physical attributes of their progenitors, such as their cooling rate. This, in turn, can influence their temporal assessments and provide a compelling explanation for these intriguing observations. In this spirit, we investigate here the cooling process of white dwarfs in $f(R,T)$ gravity with the simplest model $f(R,T) = R + \lambda T$, where $\lambda$ is the model parameter. Our modelling suggests that the cooling timescale of white dwarfs exhibits an inverse relationship with the model parameter $\lambda$, which implies that for identical initial conditions, white dwarfs in $f(R,T)$ gravity cool faster. This further unveils that in the realm of $f(R,T)$ gravity, the energy release rate for white dwarfs increases as $\lambda$ increases. Furthermore, we also report that the luminosity of the white dwarfs depends on $\lambda$: an upswing in $\lambda$ leads to an amplification in the luminosity, and consequently a larger white dwarf in general relativity can exhibit comparable luminosity to a smaller white dwarf in $f(R,T)$ gravity.
PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering ; In this paper, we focus on the problem of Medical Visual Question Answering (MedVQA), which is crucial in efficiently interpreting medical images with vital clinic-relevant information. Firstly, we reframe the problem of MedVQA as a generation task that naturally follows the human-machine interaction, and we propose a generative model for medical visual understanding by aligning visual information from a pretrained vision encoder with a large language model. Secondly, we establish a scalable pipeline to construct a large-scale medical visual question-answering dataset, named PMC-VQA, which contains 227k VQA pairs of 149k images that cover various modalities or diseases. Thirdly, we pretrain our proposed model on PMC-VQA and then fine-tune it on multiple public benchmarks, e.g., VQA-RAD and SLAKE, outperforming existing work by a large margin. Additionally, we propose a test set that has undergone manual verification and is significantly more challenging; even the best models struggle to solve it.
Vaxformer: Antigenicity-controlled Transformer for Vaccine Design Against SARS-CoV-2 ; The SARS-CoV-2 pandemic has emphasised the importance of developing a universal vaccine that can protect against current and future variants of the virus. The present study proposes a novel conditional protein Language Model architecture, called Vaxformer, which is designed to produce natural-looking antigenicity-controlled SARS-CoV-2 spike proteins. We evaluate the generated protein sequences of the Vaxformer model using the DDGun protein stability measure, the netMHCpan antigenicity score, and a structure fidelity score with AlphaFold to gauge its viability for vaccine development. Our results show that Vaxformer outperforms the existing state-of-the-art Conditional Variational Autoencoder model in generating antigenicity-controlled SARS-CoV-2 spike proteins. These findings suggest promising opportunities for conditional Transformer models to expand our understanding of vaccine design and their role in mitigating global health challenges. The code used in this study is available at https://github.com/aryopg/vaxformer.
Curve Your Enthusiasm: Concurvity Regularization in Differentiable Generalized Additive Models ; Generalized Additive Models (GAMs) have recently experienced a resurgence in popularity due to their interpretability, which arises from expressing the target value as a sum of non-linear transformations of the features. Despite the current enthusiasm for GAMs, their susceptibility to concurvity, i.e., possibly non-linear dependencies between the features, has hitherto been largely overlooked. Here, we demonstrate how concurvity can severely impair the interpretability of GAMs and propose a remedy: a conceptually simple yet effective regularizer which penalizes pairwise correlations of the non-linearly transformed feature variables. This procedure is applicable to any differentiable additive model, such as Neural Additive Models or NeuralProphet, and enhances interpretability by eliminating ambiguities due to self-canceling feature contributions. We validate the effectiveness of our regularizer in experiments on synthetic as well as real-world datasets for time-series and tabular data. Our experiments show that concurvity in GAMs can be reduced without significantly compromising prediction quality, improving interpretability and reducing variance in the feature importances.
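A minimal sketch of such a regularizer, in the spirit of the abstract (function names are illustrative, not the authors' code): penalize the mean absolute pairwise correlation between the transformed feature contributions $f_i(x_i)$:

```python
import torch

def concurvity_penalty(contributions: torch.Tensor) -> torch.Tensor:
    # contributions: (batch, n_features) tensor whose column i holds f_i(x_i).
    z = contributions - contributions.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + 1e-8)            # standardize columns
    corr = (z.T @ z) / z.shape[0]                          # correlation matrix
    n = corr.shape[0]
    off_diag = corr - torch.eye(n, device=corr.device)     # drop the diagonal
    return off_diag.abs().sum() / (n * (n - 1))            # mean |corr|, i != j

# Training objective: task_loss + lam * concurvity_penalty(f(x)), trading a
# little predictive accuracy for decorrelated, interpretable additive terms.
```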
Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Textual Classification Models via Adversarial Perturbation ; Local Surrogate models have increased in popularity for use in explaining complex black-box models for diverse types of data, including text, tabular, and image. One particular algorithm, LIME, continues to see use within the field of machine learning due to its inherently interpretable explanations and model-agnostic behavior. But despite continued use, questions about the stability of LIME persist. Stability, a property where similar instances result in similar explanations, has been shown to be lacking in explanations generated for tabular and image data, both of which are continuous domains. Here we explore the stability of LIME's explanations generated on textual data and confirm the trend of instability shown in previous research for other data types.
The CLIP Model is Secretly an Image-to-Prompt Converter ; The Stable Diffusion model is a prominent text-to-image generation model that relies on a text prompt as its input, which is encoded using Contrastive Language-Image Pre-Training (CLIP). However, text prompts have limitations when it comes to incorporating implicit information from reference images. Existing methods have attempted to address this limitation by employing expensive training procedures involving millions of training samples for image-to-image generation. In contrast, this paper demonstrates that the CLIP model, as utilized in Stable Diffusion, inherently possesses the ability to instantaneously convert images into text prompts. Such an image-to-prompt conversion can be achieved by utilizing a linear projection matrix that is calculated in closed form. Moreover, the paper showcases that this capability can be further enhanced by either utilizing a small amount of similar-domain training data (approximately 100 images) or incorporating several online training steps (around 30 iterations) on the reference images. By leveraging these approaches, the proposed method offers a simple and flexible solution to bridge the gap between images and text prompts. This methodology can be applied to various tasks such as image variation and image editing, facilitating more effective and seamless interaction between images and textual prompts.
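The abstract does not spell out the construction, so as a hedged illustration of the core idea only: a linear map from CLIP image embeddings into the prompt-embedding space, computed in closed form, can be sketched as a ridge-regularized least-squares projection fitted on paired image/caption embeddings (the matrix in the paper may be derived differently):

```python
import numpy as np

def fit_image_to_prompt_projection(img_emb, txt_emb, reg=1e-4):
    # img_emb: (n, d_img) and txt_emb: (n, d_txt) CLIP embeddings of paired data.
    d = img_emb.shape[1]
    # Closed-form ridge solution: W = (X^T X + reg*I)^{-1} X^T Y.
    gram = img_emb.T @ img_emb + reg * np.eye(d)
    return np.linalg.solve(gram, img_emb.T @ txt_emb)   # (d_img, d_txt)

# Usage: prompt_like = image_embedding @ W, fed to the diffusion model's
# cross-attention in place of (or alongside) a text-prompt embedding.
```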
An Abstract Specification of VoxML as an Annotation Language ; VoxML is a modeling language used to map natural language expressions into real-time visualizations using commonsense semantic knowledge of objects and events. Its utility has been demonstrated in embodied simulation environments and in agent-object interactions in situated multimodal human-agent collaboration and communication. It introduces the notion of object affordance (both Gibsonian and Telic) from HRI and robotics, as well as the concept of habitat (an object's context of use) for interactions between a rational agent and an object. This paper aims to specify VoxML as an annotation language in general abstract terms. It then shows how it works on annotating linguistic data that express visually perceptible human-object interactions. The annotation structures thus generated will be interpreted against the enriched minimal model created by VoxML as a modeling language while supporting the modeling purposes of VoxML linguistically.
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints ; Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5% of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than the number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.
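The uptraining recipe initializes each grouped key-value head by mean-pooling the original key/value projection heads in its group; a sketch of that conversion (array shapes are illustrative):

```python
import numpy as np

def pool_kv_heads(kv_proj: np.ndarray, n_groups: int) -> np.ndarray:
    # kv_proj: (n_heads, head_dim, d_model) per-head key or value projections.
    n_heads = kv_proj.shape[0]
    assert n_heads % n_groups == 0
    grouped = kv_proj.reshape(n_groups, n_heads // n_groups, *kv_proj.shape[1:])
    return grouped.mean(axis=1)        # (n_groups, head_dim, d_model)

# n_groups = 1 recovers MQA; n_groups = n_heads recovers standard multi-head
# attention. Query heads are left untouched: each group of query heads simply
# shares one pooled key-value head during uptraining and inference.
```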
Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction ; Despite constituting 65% of all internet traffic in 2023, video content is underrepresented in generative AI research. Meanwhile, recent large language models (LLMs) have become increasingly integrated with capabilities in the visual modality. Integrating video with LLMs is a natural next step, so how can this gap be bridged? To advance video reasoning, we propose a new research direction of VideoCOT on video keyframes, which leverages the multimodal generative abilities of vision-language models to enhance video reasoning while reducing the computational complexity of processing hundreds or thousands of frames. We introduce VIP, an inference-time dataset that can be used to evaluate VideoCOT, containing (1) a variety of real-life videos with keyframes and corresponding unstructured and structured scene descriptions, and (2) two new video reasoning tasks: video infilling and scene prediction. We benchmark various vision-language models on VIP, demonstrating the potential to use vision-language models and LLMs to enhance video chain of thought reasoning.
To Copy Rather Than Memorize: A Vertical Learning Paradigm for Knowledge Graph Completion ; Embedding models have shown great power in the knowledge graph completion (KGC) task. By learning structural constraints for each training triple, these methods implicitly memorize intrinsic relation rules to infer missing links. However, this paper points out that multi-hop relation rules are hard to reliably memorize due to the inherent deficiencies of such an implicit memorization strategy, making embedding models underperform in predicting links between distant entity pairs. To alleviate this problem, we present the Vertical Learning Paradigm (VLP), which extends embedding models by allowing them to explicitly copy target information from related factual triples for more accurate prediction. Rather than solely relying on implicit memory, VLP directly provides additional cues to improve the generalization ability of embedding models, especially making distant link prediction significantly easier. Moreover, we also propose a novel relative-distance-based negative sampling technique (ReD) for more effective optimization. Experiments demonstrate the validity and generality of our proposals on two standard benchmarks. Our code is available at https://github.com/rui9812/VLP.
Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks ; We introduce Goat, a fine-tuned LLaMA model that significantly outperforms GPT-4 on a range of arithmetic tasks. Fine-tuned on a synthetically generated dataset, Goat achieves state-of-the-art performance on the BIG-bench arithmetic sub-task. In particular, the zero-shot Goat-7B matches or even surpasses the accuracy achieved by the few-shot PaLM-540B. Surprisingly, Goat can achieve near-perfect accuracy on large-number addition and subtraction through supervised fine-tuning only, which is almost impossible with previous pretrained language models, such as Bloom, OPT, GPT-NeoX, etc. We attribute Goat's exceptional performance to LLaMA's consistent tokenization of numbers. To tackle more challenging tasks like large-number multiplication and division, we propose an approach that classifies tasks based on their learnability, and subsequently decomposes unlearnable tasks, such as multi-digit multiplication and division, into a series of learnable tasks by leveraging basic arithmetic principles. We thoroughly examine the performance of our model, offering a comprehensive evaluation of the effectiveness of our proposed decomposition steps. Additionally, Goat-7B can be easily trained using LoRA on a 24GB VRAM GPU, facilitating reproducibility for other researchers. We release our model, dataset, and the Python script for dataset generation.
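As a hedged illustration of the decomposition idea (the paper's actual prompt format and step granularity may differ): a multi-digit product is rewritten as a chain of learnable steps, i.e., single-digit multiplications with shifts followed by additions:

```python
def decompose_multiplication(a: int, b: int):
    """Illustrative decomposition of a * b into learnable sub-steps."""
    steps, partials = [], []
    for power, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * 10 ** power        # 1-digit multiply + shift
        steps.append(f"{a} * {digit} * 10^{power} = {partial}")
        partials.append(partial)
    total = 0
    for p in partials:                           # chained additions
        total += p
        steps.append(f"running sum -> {total}")
    return steps, total

steps, result = decompose_multiplication(397, 24)
assert result == 397 * 24                        # 9528
```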
Evaluation of the MACE Force Field Architecture: from Medicinal Chemistry to Materials Science ; The MACE architecture represents the state of the art in the field of machine learning force fields for a variety of in-domain, extrapolation and low-data regime tasks. In this paper, we further evaluate MACE by fitting models for published benchmark datasets. We show that MACE generally outperforms alternatives for a wide range of systems, from amorphous carbon, universal materials modelling, and general small molecule organic chemistry to large molecules and liquid water. We demonstrate the capabilities of the model on tasks ranging from constrained geometry optimisation to molecular dynamics simulations and find excellent performance across all tested domains. We show that MACE is very data efficient, and can reproduce experimental molecular vibrational spectra when trained on as few as 50 randomly selected reference configurations. We further demonstrate that the strictly local atom-centered model is sufficient for such tasks even in the case of large molecules and weakly interacting molecular assemblies.
WebIE: Faithful and Robust Information Extraction on the Web ; Extracting structured and grounded fact triples from raw text is a fundamental task in Information Extraction (IE). Existing IE datasets are typically collected from Wikipedia articles, using hyperlinks to link entities to the Wikidata knowledge base. However, models trained only on Wikipedia have limitations when applied to web domains, which often contain noisy text or text that does not have any factual information. We present WebIE, the first large-scale, entity-linked closed IE dataset consisting of 1.6M sentences automatically collected from the English Common Crawl corpus. WebIE also includes negative examples, i.e., sentences without fact triples, to better reflect the data on the web. We annotate 21K triples from WebIE through crowdsourcing and introduce mWebIE, a translation of the annotated set into four other languages: French, Spanish, Portuguese, and Hindi. We evaluate the in-domain, out-of-domain, and zero-shot cross-lingual performance of generative IE models and find that models trained on WebIE show better generalisability. We also propose three training strategies that use entity linking as an auxiliary task. Our experiments show that adding Entity-Linking objectives improves the faithfulness of our generative IE models.
Learning Semantic Role Labeling from Compatible Label Sequences ; This paper addresses the question of how to efficiently learn from disjoint, compatible label sequences. We argue that the compatible structures between disjoint label sets help model learning and inference. We verify this hypothesis on the task of semantic role labeling (SRL), specifically, tagging a sentence with two role sequences: VerbNet arguments and PropBank arguments. Prior work has shown that cross-task interaction improves performance. However, the two tasks are still separately decoded, running the risk of generating structurally inconsistent label sequences as per lexicons like SEMLINK. To eliminate this issue, we first propose a simple and effective setup that jointly handles VerbNet and PropBank labels as one sequence. With this setup, we show that enforcing SEMLINK constraints during decoding consistently improves the overall F1. With special input constructions, our joint model infers VerbNet arguments from PropBank arguments with over 99% accuracy. We also propose a constrained marginal model that uses SEMLINK information during training to further benefit from the large amounts of PropBank-only data. Our models achieve state-of-the-art F1 scores on VerbNet and PropBank argument labeling on the CoNLL-05 dataset with strong out-of-domain generalization.
Building Transportation Foundation Model via Generative Graph Transformer ; Efficient traffic management is crucial for maintaining urban mobility, especially in densely populated areas where congestion, accidents, and delays can lead to frustrating and expensive commutes. However, existing prediction methods face challenges in terms of optimizing a single objective and understanding the complex composition of the transportation system. Moreover, they lack the ability to understand the macroscopic system and cannot efficiently utilize big data. In this paper, we propose a novel approach, the Transportation Foundation Model (TFM), which integrates the principles of traffic simulation into traffic prediction. TFM uses graph structures and dynamic graph generation algorithms to capture the participatory behavior and interaction of transportation system actors. This data-driven and model-free simulation method addresses the challenges faced by traditional systems in terms of structural complexity and model accuracy and provides a foundation for solving complex transportation problems with real data. The proposed approach shows promising results in accurately predicting traffic outcomes in an urban transportation setting.
Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations ; Large language models (LMs) have exhibited superior in-context learning (ICL) ability to adapt to target tasks by prompting with a few input-output demonstrations. Towards better ICL, different methods are proposed to select representative demonstrations from existing training corpora. However, such a setting is not aligned with real-world practices, as end-users usually query LMs without access to demonstration pools. Inspired by evidence suggesting LMs' zero-shot capabilities are underrated, and the role of demonstrations being primarily to expose models' intrinsic functionalities, we introduce Self-ICL, a simple framework for zero-shot ICL. Given a test input, Self-ICL first prompts the model to generate pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we construct pseudo-demonstrations from the pseudo-input-label pairs, and perform ICL for the test input. Evaluation on BIG-Bench Hard shows Self-ICL steadily surpasses zero-shot and zero-shot chain-of-thought baselines on head-to-head and all-task average performance. Our findings suggest the possibility of bootstrapping LMs' intrinsic capabilities towards better zero-shot performance.
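A minimal sketch of the three-step flow described above (prompt wording is illustrative; `llm` stands for any text-completion call):

```python
def self_icl(llm, instruction: str, test_input: str, k: int = 3) -> str:
    # Step 1: prompt the model itself for pseudo-inputs resembling the test input.
    pseudo_inputs = [
        llm(f"{instruction}\nWrite a new input similar to: {test_input}\nNew input:")
        for _ in range(k)
    ]
    # Step 2: predict pseudo-labels for the pseudo-inputs via zero-shot prompting.
    pseudo_labels = [llm(f"{instruction}\nInput: {p}\nAnswer:") for p in pseudo_inputs]
    # Step 3: assemble pseudo-demonstrations and answer the real test input with ICL.
    demos = "\n".join(f"Input: {p}\nAnswer: {l}"
                      for p, l in zip(pseudo_inputs, pseudo_labels))
    return llm(f"{instruction}\n{demos}\nInput: {test_input}\nAnswer:")
```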
Unpaired Image-to-Image Translation via Neural Schrödinger Bridge ; Diffusion models are a powerful class of generative models which simulate stochastic differential equations (SDEs) to generate data from noise. Although diffusion models have achieved remarkable progress in recent years, they have limitations in unpaired image-to-image translation tasks due to the Gaussian prior assumption. The Schrödinger Bridge (SB), which learns an SDE to translate between two arbitrary distributions, has risen as an attractive solution to this problem. However, none of the SB models so far have been successful at unpaired translation between high-resolution images. In this work, we propose the Unpaired Neural Schrödinger Bridge (UNSB), which combines SB with adversarial training and regularization to learn an SB between unpaired data. We demonstrate that UNSB is scalable, and that it successfully solves various unpaired image-to-image translation tasks. Code: https://github.com/cyclomon/UNSB
Promoting Generalization in Cross-Dataset Remote Photoplethysmography ; Remote Photoplethysmography (rPPG), or the remote monitoring of a subject's heart rate using a camera, has seen a shift from handcrafted techniques to deep learning models. While current solutions offer substantial performance gains, we show that these models tend to learn a bias towards pulse wave features inherent to the training dataset. We develop augmentations to mitigate this learned bias by expanding both the range and variability of heart rates that the model sees while training, resulting in improved model convergence when training and cross-dataset generalization at test time. Through a 3-way cross-dataset analysis we demonstrate a reduction in mean absolute error from over 13 beats per minute to below 3 beats per minute. We compare our method with other recent rPPG systems, finding similar performance under a variety of evaluation parameters.
Self-Evolution Learning for Discriminative Language Model Pretraining ; Masked language modeling, widely used in discriminative language model (e.g., BERT) pretraining, commonly adopts a random masking strategy. However, random masking does not consider the importance of different words to the sentence meaning, where some of them are more worth predicting. Therefore, various masking strategies (e.g., entity-level masking) have been proposed, but most of them require expensive prior knowledge and generally train from scratch without reusing existing model weights. In this paper, we present Self-Evolution learning (SE), a simple and effective token masking and learning method to fully and wisely exploit the knowledge from data. SE focuses on learning the informative yet underexplored tokens and adaptively regularizes the training by introducing a novel Token-specific Label Smoothing approach. Experiments on 10 tasks show that our SE brings consistent and significant improvements (1.43-2.12 average score gains) across different PLMs. In-depth analyses demonstrate that SE improves linguistic knowledge learning and generalization.
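A hedged sketch of what token-specific label smoothing could look like (notation assumed; the paper's exact formulation may differ): the one-hot target $\mathbf{y}_t$ for token $t$ is mixed with a reference distribution $\mathbf{p}_t$, e.g., the model's own prediction, using a per-token coefficient $\lambda_t$ that reflects how informative the token is:

```latex
\[
\tilde{\mathbf{y}}_t = (1 - \lambda_t)\,\mathbf{y}_t + \lambda_t\,\mathbf{p}_t,
\]
```

so that hard-to-learn, informative tokens receive softer, better-calibrated supervision than a single global smoothing constant would provide.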
Balancing Effect of Training Dataset Distribution of Multiple Styles for Multi-Style Text Transfer ; Text style transfer is an exciting task within the field of natural language generation that is often plagued by the need for high-quality paired datasets. Furthermore, training a model for multi-attribute text style transfer requires datasets with sufficient support across all combinations of the considered stylistic attributes, adding to the challenges of training a style transfer model. This paper explores the impact of training data input diversity on the quality of the generated text from the multi-style transfer model. We construct a pseudo-parallel dataset by devising heuristics to adjust the style distribution in the training samples. We balance our training dataset using marginal and joint distributions to train our style transfer models. We observe that a balanced dataset produces more effective control effects over multiple styles than an imbalanced or skewed one. Through quantitative analysis, we explore the impact of multiple style distributions in training data on the style-transferred output. These findings will better inform the design of style-transfer datasets.
Constraining the chameleon-photon coupling with atomic spectroscopy ; We compute bounds from atomic spectroscopy on chameleon fields that couple to the photon. Chameleons are a wide class of scalar field models that generically lead to screened fifth forces and a host of novel phenomenologies, particularly when the photon coupling is included. We account for perturbations to the atomic energy levels from both the scalar field fifth force and the scalar field's correction to the electric field. We also account for the electromagnetic interaction's contribution to the scalar charge of the proton, which enables a considerably wider class of models to be tested than without this effect. We find bounds that cover different areas of chameleon parameter space. Some regions are redundant with existing experiments, particularly $g-2$, confirming that those models are ruled out. Other regions were previously unconstrained, and a range of models spanning approximately four orders of magnitude in chameleon coupling parameters are excluded for the first time.
Identification in Some Discrete Choice Models: A Computational Approach ; This paper presents an algorithm that generates the conditional moment inequalities that characterize the identified set of the common parameter of various semiparametric panel multinomial choice models. I consider both static and dynamic models, and various weak stochastic restrictions on the distribution of observed and unobserved components of the models. For a broad class of such stochastic restrictions, the paper demonstrates that the inequalities characterizing the identified set of the common parameter can be obtained as solutions of multiple objective linear programs (MOLPs), thereby transforming the task of finding these inequalities into a purely computational problem. The algorithm that I provide reproduces many well-known results, including the conditional moment inequalities derived in Manski (1987), Pakes and Porter (2023), and Khan, Ponomareva, and Tamer (2023). Moreover, I use the algorithm to generate some new results, by providing characterizations of the identified set in some cases that were left open in Pakes and Porter (2023) and Khan, Ponomareva, and Tamer (2023), as well as characterizations of the identified set under alternative stochastic restrictions.
Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning ; Due to the unsupervised nature of anomaly detection, the key to fueling deep models is finding supervisory signals. Different from current reconstruction-guided generative models and transformation-based contrastive models, we devise novel data-driven supervision for tabular data by introducing a characteristic scale as data labels. By representing varied subvectors of data instances, we define scale as the relationship between the dimensionality of original subvectors and that of representations. Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training. This paper further proposes a scale-learning-based anomaly detection method. Supervised by the learning objective of scale distribution alignment, our approach learns the ranking of representations converted from varied subspaces of each data instance. Through this proxy task, our approach models inherent regularities and patterns within data, which well describes data normality. Abnormal degrees of testing instances are obtained by measuring whether they fit these learned patterns. Extensive experiments show that our approach leads to significant improvement over state-of-the-art generative/contrastive anomaly detection methods.
Hidden variables, free choice, context-independence, and all that ; This paper provides a systematic account of the hidden variable models (HVMs) formulated to describe systems of random variables with mutually exclusive contexts. Any such system can be described either by a model with free choice but a generally context-dependent mapping of the hidden variables into observable ones, or by a model with a context-independent mapping but generally compromised free choice. These two types of HVMs are equivalent: one can always be translated into the other. They are also unfalsifiable, being applicable to all possible systems. These facts, the equivalence and unfalsifiability, imply that freedom of choice and context-independent mapping are no assumptions at all, and they tell us nothing about freedom of choice or physical influences exerted by contexts, as these notions would be understood in science and philosophy. The conjunction of these two notions, however, defines a falsifiable HVM that describes noncontextuality when applied to systems with no disturbance or to consistifications of arbitrary systems. This HVM is most adequately captured by the term 'context-irrelevance', meaning that no distribution in the model changes with context.
SimHaze: game engine simulated data for real-world dehazing ; Deep models have demonstrated recent success in single-image dehazing. Most prior methods consider fully supervised training and learn from paired clean and hazy images, where a hazy image is synthesized based on a clean image and its estimated depth map. This paradigm, however, can produce low-quality hazy images due to inaccurate depth estimation, resulting in poor generalization of the trained models. In this paper, we explore an alternative approach for generating paired clean-hazy images by leveraging computer graphics. Using a modern game engine, our approach renders crisp clean images and their precise depth maps, based on which high-quality hazy images can be synthesized for training dehazing models. To this end, we present SimHaze, a new synthetic haze dataset. More importantly, we show that training with SimHaze alone allows the latest dehazing models to achieve significantly better performance in comparison to previous dehazing datasets. Our dataset and code will be made publicly available.
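The synthesis step relies on the standard atmospheric scattering model (parameter values below are illustrative): given a clean image and a depth map, haze is added via a transmission map and a global airlight term:

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, airlight=0.8):
    # clean: (H, W, 3) image in [0, 1]; depth: (H, W) scene depth.
    t = np.exp(-beta * depth)[..., None]         # transmission t = exp(-beta * d)
    hazy = clean * t + airlight * (1.0 - t)      # I = J*t + A*(1 - t)
    return np.clip(hazy, 0.0, 1.0)

# With game-engine renders, `depth` is exact rather than estimated, which is
# what lets SimHaze produce higher-quality clean-hazy training pairs.
```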
Diverse and Expressive Speech Prosody Prediction with Denoising Diffusion Probabilistic Model ; Expressive human speech generally abounds with rich and flexible speech prosody variations. The speech prosody predictors in existing expressive speech synthesis methods mostly produce deterministic predictions, which are learned by directly minimizing the norm of the prosody prediction error. This unimodal nature leads to a mismatch with the ground truth distribution and harms the model's ability to make diverse predictions. Thus, we propose a novel prosody predictor based on the denoising diffusion probabilistic model to take advantage of its highquality generative modeling and training stability. Experimental results confirm that the proposed prosody predictor outperforms the deterministic baseline on both the expressiveness and diversity of prediction results with even fewer network parameters.
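For readers unfamiliar with denoising diffusion training, the following is a minimal sketch of a DDPM-style training step for such a prosody predictor; `eps_model` is an assumed noise-prediction network conditioned on text features, and none of the paper's architectural details are reproduced.

```python
# Minimal DDPM training-loss sketch; eps_model(noisy, t, cond) is assumed.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # standard linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(eps_model, prosody, text_cond):
    b = prosody.shape[0]
    t = torch.randint(0, T, (b,))                # random diffusion step per sample
    eps = torch.randn_like(prosody)              # target noise
    a_bar = alphas_bar[t].view(b, *([1] * (prosody.dim() - 1)))
    noisy = a_bar.sqrt() * prosody + (1 - a_bar).sqrt() * eps
    return F.mse_loss(eps_model(noisy, t, text_cond), eps)
```

Sampling then starts from Gaussian noise and iteratively denoises, which is what yields diverse rather than deterministic prosody predictions.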
Modeling Adversarial Attack on Pretrained Language Models as Sequential Decision Making ; Pretrained language models PLMs have been widely used to underpin various downstream tasks. However, research on adversarial attacks has found that PLMs are vulnerable to small perturbations. Mainstream methods adopt a detached twostage framework to attack without considering the subsequent influence of substitution at each step. In this paper, we formally model the adversarial attack task on PLMs as a sequential decisionmaking problem, where the whole attack process is sequential and involves two decisionmaking problems, i.e., word finding and word substitution. Considering that the attack process can only receive the final state without any direct intermediate signals, we propose to use reinforcement learning to find an appropriate sequential attack path to generate adversaries, named SDMAttack. Extensive experimental results show that SDMAttack achieves the highest attack success rate with a comparable modification rate and semantic similarity when attacking finetuned BERT. Furthermore, our analyses demonstrate the generalization and transferability of SDMAttack. The code is available at httpsgithub.comfduxuanSDMAttack.
Visual Affordance Prediction for Guiding Robot Exploration ; Motivated by the intuitive understanding humans have about the space of possible interactions, and the ease with which they can generalize this understanding to previously unseen scenes, we develop an approach for learning visual affordances for guiding robot exploration. Given an input image of a scene, we infer a distribution over plausible future states that can be achieved via interactions with it. We use a Transformerbased model to learn a conditional distribution in the latent embedding space of a VQVAE and show that these models can be trained using largescale and diverse passive data, and that the learned models exhibit compositional generalization to diverse objects beyond the training distribution. We show how the trained affordance model can be used for guiding exploration by acting as a goalsampling distribution, during visual goalconditioned policy learning in robotic manipulation.
Fewshot Classincremental Audio Classification Using Adaptivelyrefined Prototypes ; New classes of sounds constantly emerge with only a few samples, making it challenging for models to adapt to dynamic acoustic environments. This challenge motivates us to address the new problem of fewshot classincremental audio classification. This study aims to enable a model to continuously recognize new classes of sounds with a few training samples of new classes while remembering the learned ones. To this end, we propose a method to generate discriminative prototypes and use them to expand the model's classifier for recognizing sounds of new and learned classes. The model is first trained with a random episodic training strategy, and then its backbone is used to generate the prototypes. A dynamic relation projection module refines the prototypes to enhance their discriminability. Results on two datasets derived from the corpora of Nsynth and FSDMIXCLIPS show that the proposed method outperforms three stateoftheart methods in average accuracy and performance dropping rate.
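The prototype-expansion step can be illustrated with a short, hypothetical sketch: class means computed from the few new-class samples are appended as rows of a cosine classifier. The dynamic relation projection module itself is not reproduced here.

```python
# Illustrative sketch: prototype computation and classifier expansion.
import torch
import torch.nn.functional as F

def class_prototypes(embeddings, labels, new_classes):
    # Mean backbone embedding per new class, from the few available samples.
    return torch.stack([embeddings[labels == c].mean(dim=0) for c in new_classes])

def expand_classifier(weight, prototypes):
    # Append (possibly refined) prototypes as new classifier rows.
    return torch.cat([weight, F.normalize(prototypes, dim=1)], dim=0)

def cosine_scores(weight, feats):
    # Cosine similarity between features and all class weights.
    return F.normalize(feats, dim=1) @ F.normalize(weight, dim=1).T
```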
Transformer Language Models Handle Word Frequency in Prediction Head ; The prediction head is a crucial component of Transformer language models. Despite its direct impact on prediction, this component has often been overlooked in analyses of Transformers. In this study, we investigate the inner workings of the prediction head, specifically focusing on bias parameters. Our experiments with BERT and GPT2 models reveal that the biases in their word prediction heads play a significant role in the models' ability to reflect word frequency in a corpus, aligning with the logit adjustment method commonly used in longtailed learning. We also quantify the effect of controlling the biases in practical autoregressive text generation scenarios; under a particular setting, more diverse text can be generated without compromising text quality.
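The connection to logit adjustment can be stated in a few lines. In the sketch below (illustrative, not the paper's code), adding the log of the word prior to the logits reproduces the frequency effect attributed to the bias terms; scaling the added term down trades frequency matching for diversity.

```python
import numpy as np

def adjust_logits(logits, word_counts, tau=1.0):
    """Logit adjustment: bias logits by the log word prior.
    tau=1 mimics a frequency-reflecting bias; tau<1 weakens it."""
    prior = word_counts / word_counts.sum()
    return logits + tau * np.log(prior + 1e-12)
```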
GPT4Tools: Teaching Large Language Model to Use Tools via Selfinstruction ; This paper aims to efficiently enable Large Language Models LLMs to use multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT4, have shown great potential for tool usage through sophisticated prompt engineering. Nevertheless, these models typically rely on prohibitive computational costs and publicly inaccessible data. To address these challenges, we propose GPT4Tools, based on selfinstruct, to enable opensource LLMs, such as LLaMA and OPT, to use tools. It generates an instructionfollowing dataset by prompting an advanced teacher with various multimodal contexts. By using LowRank Adaptation LoRA optimization, our approach facilitates the opensource LLMs in solving a range of visual problems, including visual comprehension and image generation. Moreover, we provide a benchmark to evaluate the ability of LLMs to use tools, which is performed in both zeroshot and finetuning ways. Extensive experiments demonstrate the effectiveness of our method on various language models, which not only significantly improves the accuracy of invoking seen tools, but also enables zeroshot capacity for unseen tools. The code and demo are available at httpsgithub.comStevenGroveGPT4Tools.
Dual Transformer Decoder based Features Fusion Network for Automated Audio Captioning ; Automated audio captioning AAC generates textual descriptions of audio content. Existing AAC models achieve good results but use only the highdimensional representation of the encoder. Information learning from highdimensional representations alone is often insufficient, owing to the large amount of information such representations carry. In this paper, a new encoderdecoder model called the Low and HighDimensional Feature Fusion LHDFF is proposed. LHDFF uses a new PANNs encoder called Residual PANNs RPANNs to fuse low and highdimensional features. Lowdimensional features contain limited information about specific audio scenes. The fusion of low and highdimensional features can improve model performance by repeatedly emphasizing specific audio scene information. To fully exploit the fused features, LHDFF uses a dual transformer decoder structure to generate captions in parallel. Experimental results show that LHDFF outperforms existing audio captioning models.
Generalized Autoregressive Score Trees and Forests ; We propose methods to improve the forecasts from generalized autoregressive score GAS models (Creal et al., 2013; Harvey, 2013) by localizing their parameters using decision trees and random forests. These methods avoid the curse of dimensionality faced by kernelbased approaches, and allow one to draw on information from multiple state variables simultaneously. We apply the new models to four distinct empirical analyses, and in all applications the proposed new methods significantly outperform the baseline GAS model. In our applications to stock return volatility and density prediction, the optimal GAS tree model reveals a leverage effect and a variance risk premium effect. Our study of stockbond dependence finds evidence of a flighttoquality effect in the optimal GAS forest forecasts, while our analysis of highfrequency trade durations uncovers a volumevolatility effect.
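For reference, the standard GAS(1,1) updating equation that the trees and forests localize is the following (this is the textbook formulation from the cited papers, not the localized variant):

```latex
f_{t+1} = \omega + \beta f_t + \alpha s_t,
\qquad
s_t = S_t \, \nabla_{f_t} \log p(y_t \mid f_t; \theta)
```

Here $f_t$ is the time-varying parameter, $s_t$ is the scaled score of the conditional likelihood, and $S_t$ is a scaling matrix, often chosen as the inverse Fisher information. Localizing $(\omega, \alpha, \beta)$ by state variables via trees is the paper's contribution.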
Large Language Models Are Not Abstract Reasoners ; Large Language Models have shown tremendous performance on a large variety of natural language processing tasks, ranging from text comprehension to common sense reasoning. However, the mechanisms responsible for this success remain unknown, and it is unclear whether LLMs can achieve humanlike cognitive capabilities or whether these models are still fundamentally limited. Abstract reasoning is a fundamental task for cognition, consisting of finding and applying a general pattern from few data. Evaluating deep neural architectures on this task could give insight into their potential limitations regarding reasoning and their broad generalisation abilities, yet this is currently an underexplored area. In this paper, we perform extensive evaluations of stateoftheart LLMs on abstract reasoning tasks, showing that they achieve very limited performance in contrast with other natural language tasks, and we investigate the reasons for this difference. We apply techniques that have been shown to improve performance on other NLP tasks and show that in most cases their impact on abstract reasoning performance is limited. In the course of this work, we have generated a new benchmark for evaluating language models on abstract reasoning tasks.
Intelligible LiptoSpeech Synthesis with Speech Units ; In this paper, we propose a novel LiptoSpeech synthesis L2S framework for synthesizing intelligible speech from a silent lip movement video. Specifically, to complement the insufficient supervisory signal of the previous L2S model, we propose to use quantized selfsupervised speech representations, named speech units, as an additional prediction target for the L2S model. Therefore, the proposed L2S model is trained to generate multiple targets: melspectrogram and speech units. As the speech units are discrete while the melspectrogram is continuous, the proposed multitarget L2S model can be trained with strong content supervision, without using textlabeled data. Moreover, to accurately convert the synthesized melspectrogram into a waveform, we introduce a multiinput vocoder that can generate a clear waveform even from a blurry and noisy melspectrogram by referring to the speech units. Extensive experimental results confirm the effectiveness of the proposed method in L2S.
Conditionally Strongly LogConcave Generative Models ; There is a growing gap between the impressive results of deep image generative models and classical algorithms that offer theoretical guarantees. The former suffer from mode collapse or memorization issues, limiting their application to scientific data. The latter require restrictive assumptions such as logconcavity to escape the curse of dimensionality. We partially bridge this gap by introducing conditionally strongly logconcave CSLC models, which factorize the data distribution into a product of conditional probability distributions that are strongly logconcave. This factorization is obtained with orthogonal projectors adapted to the data distribution. It leads to efficient parameter estimation and sampling algorithms, with theoretical guarantees, although the data distribution is not globally logconcave. We show that several challenging multiscale processes are conditionally logconcave using wavelet packet orthogonal projectors. Numerical results are shown for physical fields such as the φ^4 model and weak lensing convergence maps with higher resolution than in previous works.
Representation Reliability and Its Impact on Downstream Tasks ; Selfsupervised pretrained models extract generalpurpose representations from data, and quantifying how reliable they are is crucial because many downstream models use these representations as input for their own tasks. To this end, we first introduce a formal definition of representation reliability the representation for a given test input is considered to be reliable if the downstream models built on top of that representation can consistently generate accurate predictions for that test point. It is desired to estimate the representation reliability without knowing the downstream tasks a priori. We provide a negative result showing that existing frameworks for uncertainty quantification in supervised learning are not suitable for this purpose. As an alternative, we propose an ensemblebased method for quantifying representation reliability, based on the concept of neighborhood consistency in the representation spaces across various pretrained models. More specifically, the key insight is to use shared neighboring points as anchors to align different representation spaces. We demonstrate through comprehensive numerical experiments that our method is capable of predicting representation reliability with high accuracy.
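A simplified version of the neighborhood-consistency idea can be sketched as follows; this toy version scores a test point by the overlap of its nearest-neighbor sets across models, whereas the paper additionally uses shared neighbors as anchors to align the representation spaces.

```python
import numpy as np

def neighborhood_consistency(reps_by_model, x_idx, k=10):
    """Toy reliability proxy: average overlap of the k-NN sets of point x_idx
    across the representation spaces of several pretrained models."""
    neighbor_sets = []
    for Z in reps_by_model:                    # Z: (n_points, dim) per model
        dists = np.linalg.norm(Z - Z[x_idx], axis=1)
        neighbor_sets.append(set(np.argsort(dists)[1:k + 1]))  # skip the point itself
    base = neighbor_sets[0]
    overlaps = [len(base & s) / k for s in neighbor_sets[1:]]
    return float(np.mean(overlaps))
```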
Random advectiondiffusion models and their statistics ; We study the statistics of a onedimensional randomly advected field with diffusion. The motivation for this setup comes from a straightforward interpretation as advection of particles in onedimensional turbulence, but it is also related to a problem of synchronization of dynamical systems driven by common noise. A general class of lattice models describing the joint effect of random advection and diffusion for an ensemble of particles is introduced. It consists of a general microscopic random update rule and encompasses as specific cases some models studied in the literature, such as the KangRedner, KipnisMarchioroPresutti, and TakatasuTaguchi models. For finite lattices, we study both the coagulation of an initially spread field, interpreted as roughening, and the statistical steadystate properties. We distinguish two main sizedependent regimes, depending on the strength of the advection term and on the lattice size. Using numerical simulations and a meanfield approach, we study the statistics of the field. For weak diffusion, we unveil a characteristic hierarchical structure of the field. We also connect the model to the concept of iterated function systems.
Revisiting Event Argument Extraction: Can EAE Models Learn Better When Being Aware of Event Cooccurrences ; Event cooccurrences have been proved effective for event extraction EE in previous studies, but have not been considered for event argument extraction EAE recently. In this paper, we try to fill this gap between EE research and EAE research, by highlighting the question "Can EAE models learn better when being aware of event cooccurrences?". To answer this question, we reformulate EAE as a problem of table generation and extend a SOTA promptbased EAE model into a nonautoregressive generation framework, called TabEAE, which is able to extract the arguments of multiple events in parallel. Under this framework, we experiment with 3 different traininginference schemes on 4 datasets ACE05, RAMS, WikiEvents and MLEE and discover that via training the model to extract all events in parallel, it can better distinguish the semantic boundary of each event, and its ability to extract single events gets substantially improved. Experimental results show that our method achieves new stateoftheart performance on the 4 datasets. Our code is available at httpsgithub.comStardusthyxTabEAE.
ChatGPT for Zeroshot Dialogue State Tracking: A Solution or an Opportunity? ; Recent research on dialogue state tracking DST focuses on methods that allow fewshot and zeroshot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and finetuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without taskspecific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves stateoftheart performance in zeroshot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems. We further theorize that the incontext learning capabilities of such models will likely become powerful tools to support the development of dedicated and dynamic dialogue state trackers.
Comparative analysis of the existence and uniqueness conditions of parameter estimation in paired comparison models ; In this paper, paired comparison models with stochastic background are investigated. We focus on models which allow three options for choice and whose parameters are estimated by the maximum likelihood method. The existence and uniqueness of the estimator is a key issue of the evaluation. In the case of two options, a necessary and sufficient condition is given by Ford for the BradleyTerry model. We generalize this statement to the set of strictly logconcave distributions. Although in the case of three options a necessary and sufficient condition is not known, two different sufficient conditions have been formulated in the literature. In this paper we generalize them and, moreover, compare these conditions. Their capacities to indicate the existence of the maximum are analyzed by a large number of computer simulations. These simulations support that the new condition indicates the existence of the maximum much more frequently than the previously known ones.
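For the two-option case mentioned above, the maximum likelihood estimator of the BradleyTerry model can be computed with the classical minorization-maximization updates; Ford's condition is exactly what guarantees that this iteration converges to a unique normalized solution. The sketch below is the standard two-option algorithm, not the paper's three-option generalization.

```python
import numpy as np

def fit_bradley_terry(wins, iters=500):
    """MM updates for Bradley-Terry strengths.
    wins[i, j] = number of times item i beat item j.
    Assumes every item appears in at least one comparison."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            weighted = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                           for j in range(n) if j != i)
            p[i] = wins[i].sum() / weighted
        p /= p.sum()  # fix the overall scale of the strengths
    return p
```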
Brep Matching for Collaborating Across CAD Systems ; Large ComputerAided Design CAD projects usually require collaboration across many different CAD systems as well as applications that interoperate with them for manufacturing, visualization, or simulation. A fundamental barrier to such collaborations is the ability to refer to parts of the geometry, such as a specific face, robustly under geometric and/or topological changes to the model. Persistent referencing schemes are a fundamental aspect of most CAD tools, but models that are shared across systems cannot generally make use of these internal referencing mechanisms, creating a challenge for collaboration. In this work, we address this issue by developing a novel learningbased algorithm that can automatically find correspondences between two CAD models using the standard representation used for sharing models across CAD systems: the BoundaryRepresentation Brep. Because our method works directly on Breps, it can be generalized across different CAD applications, enabling collaboration.
A theoretical model for power generation via liquid crystal elastomers ; Motivated by the need for new materials and green energy production and conversion processes, a class of mathematical models for liquid crystal elastomers integrated within a theoretical charge pump electrical circuit is considered. The charge pump harnesses the chemical and mechanical properties of liquid crystal elastomers transitioning from the nematic to isotropic phase when illuminated or heated to generate higher voltage from a lower voltage supplied by a battery. For the material constitutive model, purely elastic and neoclassicaltype strain energy densities applicable to a wide range of monodomain nematic elastomers are combined, while elastic and photothermal responses are decoupled to make the investigation analytically tractable. By varying the model parameters of the elastic and neoclassical terms, it is found that liquid crystal elastomers are more effective than rubber when used as dielectric material within a charge pump capacitor.
AutoScrum: Automating Project Planning Using Large Language Models ; Recent advancements in the field of large language models have made it possible to use language models for advanced reasoning. In this paper we leverage this ability for designing complex project plans based only on knowing the current state and the desired state. Two approaches are demonstrated: a scrum based approach and a shortcut plan approach. The scrum based approach executes an automated process of requirements gathering, user story mapping, feature identification, task decomposition and finally generates questions and search terms for seeking out domain specific information to assist with task completion. The shortcut approach looks at the most recent snapshot of the current and desired state and generates the next most reasonable task to do in order to get to the desired state as quickly as possible. In this paper we automate everything using a novel concept of Language Programs. These are programs written in natural language designed to process input data through the language model. Guidance language is used for all LLM programs. All demo source code for this paper is available at httpsgithub.comautoscrumautoscrum
KnowledgeAugmented Language Model Prompting for ZeroShot Knowledge Graph Question Answering ; Large Language Models LLMs are capable of performing zeroshot closedbook question answering tasks, based on their internal knowledge stored in parameters during pretraining. However, such internalized knowledge might be insufficient or incorrect, which could lead LLMs to generate factually wrong answers. Furthermore, finetuning LLMs to update their knowledge is expensive. To this end, we propose to augment the knowledge directly in the input of LLMs. Specifically, we first retrieve the facts relevant to the input question from the knowledge graph based on semantic similarities between the question and its associated facts. After that, we prepend the retrieved facts to the input question in the form of a prompt, which is then forwarded to LLMs to generate the answer. Our framework, KnowledgeAugmented language model PromptING KAPING, requires no model training and is thus completely zeroshot. We validate the performance of our KAPING framework on the knowledge graph question answering task, which aims to answer a user's question based on facts over a knowledge graph, on which ours outperforms relevant zeroshot baselines by up to 48% on average, across multiple LLMs of various sizes.
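The retrieve-then-prepend idea can be captured in a few lines. In this hypothetical sketch, `embed` is an assumed sentence-embedding function and the prompt format is illustrative; the actual KAPING templates may differ.

```python
import numpy as np

def kaping_prompt(question, triples, embed, k=3):
    """Rank verbalized KG triples by cosine similarity to the question
    and prepend the top-k as facts in the prompt."""
    q = embed(question)
    facts = [" ".join(t) for t in triples]   # verbalize (head, relation, tail)

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    ranked = sorted(facts, key=lambda f: cos(q, embed(f)), reverse=True)
    context = "\n".join(f"Fact: {f}" for f in ranked[:k])
    return f"{context}\nQuestion: {question}\nAnswer:"
```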
Sliding Window Neural Generated Tracking Based on Measurement Model ; In the pursuit of further advancement in the field of target tracking, this paper explores the efficacy of a feedforward neural network in predicting drone tracks, aiming eventually to compare the tracks created by the wellknown Kalman filter with those created by our proposed neural network. The unique feature of our proposed neural network tracker is that it uses only a measurement model to estimate the next states of the track. Object model selection and linearization are among the challenges that always arise in the tracking process. The neural network uses a sliding window to incorporate the history of measurements when estimating the track values. The testing results are comparable to those generated by the Kalman filter, especially for cases with low measurement covariance. The complexity of linearization is avoided when using the proposed model.
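A minimal sketch of such a measurement-only tracker is given below; the window length, layer sizes, and dimensions are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

W, MEAS_DIM, STATE_DIM = 5, 2, 4   # illustrative sizes

# Feedforward net: flattened window of past measurements -> next state estimate.
net = nn.Sequential(
    nn.Linear(W * MEAS_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, STATE_DIM),
)

def predict_next_state(window):        # window: (W, MEAS_DIM) tensor
    return net(window.reshape(1, -1))  # no motion model or linearization needed
```

Training would regress these outputs against ground-truth states, sliding the window forward one measurement at a time.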
Incontext CrossDensity Adaptation on Noisy Mammogram Abnormalities Detection ; This paper investigates the impact of breast density distribution on the generalization performance of deeplearning models on mammography images using the VinDrMammo dataset. We explore the use of domain adaptation techniques, specifically Domain Adaptive Object Detection DAOD with the Noise Latent Transferability Exploration NLTE framework, to improve model performance across breast densities under noisy labeling circumstances. We propose a robust augmentation framework to bridge the domain gap between the source and target inside a dataset. Our results show that DAODbased methods, along with the proposed augmentation framework, can improve the generalization performance of deeplearning models, yielding an overall mAP improvement of approximately 5% in our experimental results compared to commonly used detection models. This paper highlights the importance of domain adaptation techniques in medical imaging, particularly in the context of breast density distribution, which is critical in mammography.
Underwater Acoustic Target Recognition based on Smoothnessinducing Regularization and Spectrogrambased Data Augmentation ; Underwater acoustic target recognition is a challenging task owing to the intricate underwater environments and limited data availability. Insufficient data can hinder the ability of recognition systems to support complex modeling, thus impeding their advancement. To improve the generalization capacity of recognition models, techniques such as data augmentation have been employed to simulate underwater signals and diversify data distribution. However, the complexity of underwater environments can cause the simulated signals to deviate from real scenarios, resulting in biased models that are misguided by nontrue data. In this study, we propose two strategies to enhance the generalization ability of models in the case of limited data while avoiding the risk of performance degradation. First, as an alternative to traditional data augmentation, we utilize smoothnessinducing regularization, which only incorporates simulated signals in the regularization term. Additionally, we propose a specialized spectrogrambased data augmentation strategy, namely local masking and replicating LMR, to capture interclass relationships. Our experiments and visualization analysis demonstrate the superiority of our proposed strategies.
Diffusion Models for BlackBox Optimization ; The goal of offline blackbox optimization BBO is to optimize an expensive blackbox function using a fixed dataset of function evaluations. Prior works consider forward approaches that learn surrogates to the blackbox function and inverse approaches that directly map function values to corresponding points in the input domain of the blackbox function. These approaches are limited by the quality of the offline dataset and the difficulty in learning onetomany mappings in high dimensions, respectively. We propose Denoising Diffusion Optimization Models DDOM, a new inverse approach for offline blackbox optimization based on diffusion models. Given an offline dataset, DDOM learns a conditional generative model over the domain of the blackbox function conditioned on the function values. We investigate several design choices in DDOM, such as reweighting the dataset to focus on high function values and the use of classifierfree guidance at testtime to enable generalization to function values that can even exceed the dataset maxima. Empirically, we conduct experiments on the DesignBench benchmark and show that DDOM achieves results competitive with stateoftheart baselines.
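The classifier-free guidance step mentioned above can be sketched in one function: the conditional and unconditional noise predictions are extrapolated, pushing samples toward the conditioned-on function value. `eps_model` and the null-condition convention are assumptions of this sketch, not DDOM's actual interface.

```python
def guided_eps(eps_model, x_t, t, y_target, w=2.0):
    """Classifier-free guidance: extrapolate between conditional and
    unconditional noise predictions; larger w strengthens conditioning."""
    eps_cond = eps_model(x_t, t, y_target)   # conditioned on function value y
    eps_uncond = eps_model(x_t, t, None)     # null condition
    return (1 + w) * eps_cond - w * eps_uncond
```

Conditioning on values at or beyond the dataset maximum is what lets sampling extrapolate past the best observed designs.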
TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills ; Code pretrained models CodePTMs have recently demonstrated a solid capacity to process various software intelligence tasks, e.g., code clone detection, code translation, and code summarization. The current mainstream method that deploys these models to downstream tasks is to finetune them on individual tasks, which is generally costly and needs sufficient data for large models. To tackle this issue, in this paper, we present TransCoder, a unified Transferable finetuning strategy for Code representation learning. Inspired by human inherent skills of knowledge generalization, TransCoder drives the model to learn better coderelated metaknowledge like human programmers. Specifically, we employ a tunable prefix encoder as the metalearner to capture crosstask and crosslanguage transferable knowledge. Besides, tasks with small training sample sizes and languages with small corpora can benefit remarkably from our approach. Extensive experiments conducted on benchmark datasets clearly demonstrate that our method can lead to superior performance on various coderelated tasks and encourage mutual reinforcement. We also show that TransCoder is applicable in lowresource scenarios.
Large Language Models Sometimes Generate Purely NegativelyReinforced Text ; When using adversarial training, it is common practice to train against the most egregious failures. However, this might imply using examples with sensitive information, such as leaked passwords or security vulnerabilities, as training data. One might assume that language models trained with gradient descent never generate text snippets which were only present in examples associated with the lowest possible reward. In this paper, we show that this assumption is wrong: in some situations, large language models do learn from such negativelyreinforced examples. We present a specific training setup that enables Pythia160M to guess passwords 13% more often than it would by guessing randomly, despite only being shown these passwords in examples where the model is incentivized not to output them. Our code is available at www.github.comFabienRogerLearningFromNegativeExamples
Rerender A Video: ZeroShot TextGuided VideotoVideo Translation ; Large texttoimage diffusion models have exhibited impressive proficiency in generating highquality images. However, when applying these models to the video domain, ensuring temporal consistency across video frames remains a formidable challenge. This paper proposes a novel zeroshot textguided videotovideo translation framework to adapt image models to videos. The framework includes two parts: key frame translation and full video translation. The first part uses an adapted diffusion model to generate key frames, with hierarchical crossframe constraints applied to enforce coherence in shapes, textures and colors. The second part propagates the key frames to other frames with temporalaware patch matching and frame blending. Our framework achieves global style and local texture temporal consistency at a low cost without retraining or optimization. The adaptation is compatible with existing image diffusion techniques, allowing our framework to take advantage of them, such as customizing a specific subject with LoRA, and introducing extra spatial guidance with ControlNet. Extensive experimental results demonstrate the effectiveness of our proposed framework over existing methods in rendering highquality and temporallycoherent videos.
Domain Information Control at Inference Time for Acoustic Scene Classification ; Domain shift is considered a challenge in machine learning as it causes significant degradation of model performance. In the Acoustic Scene Classification ASC task, domain shift is mainly caused by different recording devices. Several studies have already targeted domain generalization to improve the performance of ASC models on unseen domains, such as new devices. Recently, the Controllable Gate Adapter ConGater has been proposed in Natural Language Processing to address the biased training data problem. ConGater allows controlling the debiasing process at inference time; its main advantage is the continuous and selective debiasing of a trained model during inference. In this work, we adapt ConGater to the audio spectrogram transformer for the acoustic scene classification task. We show that ConGater can be used to selectively adapt the learned representations to be invariant to device domain shifts. Our analysis shows that ConGater can progressively remove device information from the learned representations and improve model generalization, especially under domain shift conditions, e.g. unseen devices. We show that information removal can be extended to both the device and location domains. Finally, we demonstrate ConGater's ability to enhance specific device performance without further training.
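As a rough illustration of the controllable-gating idea (not the ConGater architecture itself), a learned gate can be blended with the identity by a user-chosen sensitivity value at inference time:

```python
import torch
import torch.nn as nn

class ControllableGate(nn.Module):
    """Illustrative gate: lam = 0 leaves features unchanged;
    lam = 1 applies the learned (de-biasing) gate in full."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, h, lam):
        gate = torch.sigmoid(self.proj(h))
        return h * ((1.0 - lam) + lam * gate)
```

Sweeping `lam` continuously is what allows the amount of device information removed to be tuned without retraining.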
Old Data, New Forensics: The First Second of SN 1987A Neutrino Emission ; The next Milky Way supernova will be an epochal event in multimessenger astronomy, critical to tests of supernovae, neutrinos, and new physics. Realizing this potential depends on having realistic simulations of core collapse. We investigate the neutrino predictions of nearly all modern models (1, 2, and 3d) over the first ≃1 s, making the first detailed comparisons of these models to each other and to the SN 1987A neutrino data. Even with different methods and inputs, the models generally agree with each other. However, even considering the low neutrino counts, the models generally disagree with the data. What can cause this? We show that neither neutrino oscillations nor different progenitor masses appear to be a sufficient solution. We outline urgently needed work.
Stability analysis of warm quintessential dark energy model ; A dynamical system analysis is performed for a model of dissipative quintessential inflation realizing warm inflation at early primordial times and dissipative interactions in the dark sector at late times. The construction makes use of a generalized exponential potential realizing both phases of accelerated expansion. A focus is given to the behavior of the dynamical system at late times, and the analysis is exemplified by both analytical and numerical results. The results obtained demonstrate the viability of the model as a quintessential inflation model in which stable solutions can be obtained.
Large Language Models for Telecom: The Next Big Thing? ; The evolution of generative artificial intelligence GenAI constitutes a turning point in reshaping the future of technology in different aspects. Wireless networks in particular, with the blooming of selfevolving networks, represent a rich field for exploiting GenAI and reaping several benefits that can fundamentally change the way wireless networks are designed and operated nowadays. To be specific, large language models LLMs, a subfield of GenAI, are envisioned to open up a new era of autonomous wireless networks, in which a multimodal large model trained over various Telecom data can be finetuned to perform several downstream tasks, eliminating the need for dedicated AI models for each task and paving the way for the realization of artificial general intelligence AGIempowered wireless networks. In this article, we aim to unfold the opportunities that can be reaped from integrating LLMs into the Telecom domain. In particular, we put forward a vision of a new realm of possibilities and applications of LLMs in future wireless networks, define directions for designing, training, testing, and deploying Telecom LLMs, and reveal insights on the associated theoretical and practical challenges.
Multifrequency probes of 2HDMS dark matter ; The twoHiggsdoublet with additional scalar 2HDMS model is one proposed to account for several anomalies that have persisted and grown in significance over runs 1 and 2 at the Large Hadron Collider LHC. In addition to this, 2HDMS also supplies a potential Dark Matter DM candidate coupling to the Standard Model via the S boson. So far, this model has been difficult to constrain by indirect means. Here we explore the potential of Omega Centauri, a nearby globular cluster, to constrain this interesting DM model. Although such structures are generally considered to be lacking in DM, arguments have been made that this cluster is in fact the relic of a tidally stripped dwarf galaxy. In such a scenario, the DM content would be significant. Combined with its nearness, this would suggest a potential for powerful indirect dark matter signals. We employ both FermiLAT gammaray data and MeerKAT telescope sensitivities to determine the current status of Omega Centauri as a source of indirect constraints on Weakly Interacting Massive Particles WIMPs in a 2HDMS scenario and for general annihilation channels.
FAIR: A Causal Framework for Accurately Inferring Judgments Reversals ; Artificial intelligence researchers have made significant advances in legal intelligence in recent years. However, existing studies have not focused on the important value embedded in judgments reversals, which limits the improvement of the efficiency of legal intelligence. In this paper, we propose a causal Framework for Accurately Inferring case Reversals FAIR, which models the problem of judgments reversals based on real Chinese judgments. We mine the causes of judgments reversals by causal inference methods and inject the obtained causal relationships into the neural network as a priori knowledge. Then, our framework is validated on a challenging dataset as a legal judgment prediction task. The experimental results show that our framework can tap the most critical factors in judgments reversal, and the obtained causal relationships can effectively improve the neural network's performance. In addition, we discuss the generalization ability of large language models for legal intelligence tasks using ChatGPT as an example. Our experiments show that the generalization ability of large language models still has defects, and that mining causal relationships can effectively improve the accuracy and explainability of model predictions.
Harnessing the Power of Adversarial Prompting and Large Language Models for Robust Hypothesis Generation in Astronomy ; This study investigates the application of Large Language Models LLMs, specifically GPT4, within Astronomy. We employ incontext prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System, to explore the extent to which performance can be improved by immersing the model in domainspecific literature. Our findings point towards a substantial boost in hypothesis generation when using incontext prompting, a benefit that is further accentuated by adversarial prompting. We illustrate how adversarial prompting empowers GPT4 to extract essential details from a vast knowledge base to produce meaningful hypotheses, signaling an innovative step towards employing LLMs for scientific research in Astronomy.
Apolitical Intelligence? Auditing Delphi's responses on controversial political issues in the US ; As generative language models are deployed in everwider contexts, concerns about their political values have come to the forefront, with critique from all parts of the political spectrum that the models are biased and lack neutrality. However, the question of what neutrality is and whether it is desirable remains underexplored. In this paper, I examine neutrality through an audit of Delphi (arXiv:2110.07574), a large language model designed for crowdsourced ethics. I analyse how Delphi responds to politically controversial questions compared to different US political subgroups. I find that Delphi is poorly calibrated with respect to confidence and exhibits a significant political skew. Based on these results, I examine the question of neutrality from a datafeminist lens, in terms of how notions of neutrality shift power and further marginalise unheard voices. These findings can hopefully contribute to a more reflexive debate about the normative questions of alignment and what role we want generative models to play in society.
Evaluating the Robustness of Texttoimage Diffusion Models against Realworld Attacks ; Texttoimage T2I diffusion models DMs have shown promise in generating highquality images from textual descriptions. The realworld applications of these models require particular attention to their safety and fidelity, but this has not been sufficiently explored. One fundamental question is whether existing T2I DMs are robust against variations over input texts. To answer it, this work provides the first robustness evaluation of T2I DMs against realworld attacks. Unlike prior studies that focus on malicious attacks involving apocryphal alterations to the input texts, we consider an attack space spanned by realistic errors e.g., typo, glyph, phonetic that humans can make, to ensure semantic consistency. Given the inherent randomness of the generation process, we develop novel distributionbased attack objectives to mislead T2I DMs. We perform attacks in a blackbox manner without any knowledge of the model. Extensive experiments demonstrate the effectiveness of our method for attacking popular T2I DMs and simultaneously reveal their nontrivial robustness issues. Moreover, we provide an indepth analysis of our method to show that it is not designed to attack the text encoder in T2I DMs solely.
DiMSam: Diffusion Models as Samplers for Task and Motion Planning under Partial Observability ; Task and Motion Planning TAMP approaches are effective at planning longhorizon autonomous robot manipulation. However, it can be difficult to apply them to domains where the environment and its dynamics are not fully known. We propose to overcome these limitations by leveraging deep generative modeling, specifically diffusion models, to learn constraints and samplers that capture these difficulttoengineer aspects of the planning model. These learned samplers are composed and combined within a TAMP solver in order to jointly find action parameter values that satisfy the constraints along a plan. To tractably make predictions for unseen objects in the environment, we define these samplers on lowdimensional learned latent embeddings of changing object state. We evaluate our approach in an articulated object manipulation domain and show how the combination of classical TAMP, generative learning, and latent embeddings enables longhorizon constraintbased reasoning. We also apply the learned sampler in the real world. More details are available at httpssites.google.comviewdimsamtamp
Pruning for Better Domain Generalizability ; In this paper, we investigate whether pruning can serve as a reliable method to boost the generalization ability of a model. We found that existing pruning methods like L2 can already offer a small improvement in target domain performance. We further propose a novel pruning scoring method, called DSS, designed not to maintain source accuracy as typical pruning work does, but to directly enhance the robustness of the model. We conduct empirical experiments to validate our method and demonstrate that it can even be combined with stateoftheart generalization work like MIRO (Cha et al., 2022) to further boost performance. On MNIST to MNISTM, we improve the baseline performance by over 5 points by introducing 60% channel sparsity into the model. On the DomainBed benchmark with stateoftheart MIRO, we further boost its performance by 1 point by introducing only 10% sparsity into the model. Code can be found at httpsgithub.comAlexSunNikPruningforBetterDomainGeneralizability
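The L2 baseline referred to above scores channels by filter norm; a minimal sketch is shown below. Only the generic masking mechanics are illustrated, not the proposed DSS score itself.

```python
import torch

def l2_channel_mask(conv_weight, sparsity=0.6):
    """Keep-mask for output channels: prune the lowest-L2-norm fraction.
    conv_weight: (out_channels, in_channels, kH, kW)"""
    scores = conv_weight.flatten(1).norm(p=2, dim=1)   # one score per channel
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.kthvalue(k).values              # k-th smallest score
    return scores > threshold                          # True = keep channel
```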
Individualized Dosing Dynamics via Neural Eigen Decomposition ; Dosing models often use differential equations to model biological dynamics. Neural differential equations in particular can learn to predict the derivative of a process, which permits predictions at irregular points of time. However, this temporal flexibility often comes with a high sensitivity to noise, whereas medical problems often present high noise and limited data. Moreover, medical dosing models must generalize reliably over individual patients and changing treatment policies. To address these challenges, we introduce the Neural Eigen Stochastic Differential Equation algorithm NESDE. NESDE provides individualized modeling using a hypernetwork over patientlevel parameters; generalization to new treatment policies using decoupled control; tunable expressiveness according to the noise level using piecewise linearity; and fast, continuous, closedform prediction using spectral representation. We demonstrate the robustness of NESDE in both synthetic and real medical problems, and use the learned dynamics to publish simulated medical gym environments.
Decompose and Realign: Tackling Condition Misalignment in TexttoImage Diffusion Models ; Texttoimage diffusion models have advanced towards more controllable generation by supporting various image conditions e.g., depth map beyond text. However, these models are learned based on the premise of perfect alignment between the text and image conditions. If this alignment is not satisfied, the final output could be either dominated by one condition, or ambiguity may arise, failing to meet user expectations. To address this issue, we present a trainingfree approach called "Decompose and Realign" to further improve the controllability of existing models when provided with partially aligned conditions. The "Decompose" phase separates conditions based on pair relationships, computing scores individually for each pair. This ensures that each pair no longer contains conflicting conditions. The "Realign" phase aligns these independently calculated scores via a crossattention mechanism to avoid new conflicts when combining them back. Both qualitative and quantitative results demonstrate the effectiveness of our approach in handling unaligned conditions, which performs favorably against recent methods and, more importantly, adds flexibility to the controllable image generation process.
Primordial gravitational waves in the nanoHertz regime and PTA data: towards solving the GW inverse problem ; In recent years, several pulsar timing array collaborations have reported first hints of a stochastic gravitational wave background at nanoHertz frequencies. Here we elaborate on the possibility that this signal comes from new physics that leads to the generation of a primordial stochastic gravitational wave background. We propose a set of simple but concrete models that can serve as benchmarks for gravitational waves sourced by cosmological phase transitions, domain wall networks, cosmic strings, axion dynamics, or large scalar fluctuations. These models are then confronted with pulsar timing data and with cosmological constraints. With only a limited number of free parameters per model, we are able to identify viable regions of parameter space and also make predictions for future astrophysical and laboratory tests that can help with model identification and discrimination.
Aspects of Cosmology in Symmetric Teleparallel f(Q,T) Gravity ; This article aims to explore the idea of describing the complete evolution process of the universe with a cosmological model. For this purpose, we work on a recently developed generalized symmetric teleparallel gravity called f(Q,T) gravity, where Q and T represent the nonmetricity scalar and the trace of the energymomentum tensor. We present two f(Q,T) cosmological models for two different Lagrangian forms of f(Q,T) and discuss their cosmological parameters. Further, we discuss various energy conditions to check the viability of those models and also discuss the scalar field construction for describing the inflationary scenario in the early times. In addition, we examine the dark energy nature of the presumed cosmological solution by employing statefinder diagnostics and ω-ω′ phase space analysis. In conclusion, we found that our models can present a complete evolution profile from the early to late times of the universe.
MAEGEBD: Winning the CVPR'2023 LOVEUGEBD Challenge ; The Generic Event Boundary Detection GEBD task aims to build a model for segmenting videos into segments by detecting general event boundaries applicable to various classes. In this paper, based on last year's MAEGEBD method, we have improved our model performance on the GEBD task by adjusting the data processing strategy and loss function. Building on last year's approach, we extended the application of pseudolabels to a larger dataset and made many experimental attempts. In addition, we applied focal loss to concentrate more on difficult samples and improved our model performance. Finally, we improved the segmentation alignment strategy used last year and dynamically adjusted the segmentation alignment method according to the boundary density and duration of the video, making our model more flexible and fully applicable to different situations. With our method, we achieve an F1 score of 86.03 on the KineticsGEBD test set, which is a 0.09 improvement in the F1 score compared to our 2022 KineticsGEBD method.
Reprogramming Audiodriven Talking Face Synthesis into Textdriven ; In this paper, we propose a method to reprogram pretrained audiodriven talking face synthesis models to operate with text inputs. As the audiodriven talking face synthesis model takes speech audio as input, generating a talking avatar with the desired speech content requires a speech recording to be made in advance. However, recording audio for every video to be generated is burdensome. In order to alleviate this problem, we propose a novel method that embeds input text into the learned audio latent space of the pretrained audiodriven model. To this end, we design a TexttoAudio Embedding Module TAEM which is guided to learn to map a given text input to the audio latent features. Moreover, to model the speaker characteristics lying in the audio features, we propose to inject a visual speaker embedding into the TAEM, which is obtained from a single face image. After training, we can synthesize talking face videos with either text or speech audio.
Cartesian institutions with evidence: Data and system modelling with diagrammatic constraints and generalized sketches ; Data constraints are fundamental for practical data modelling, and a verifiable conformance of a data instance to a safetycritical constraint satisfaction relation is a cornerstone of safety assurance. Diagrammatic constraints are important as both a theoretical concept and a practically convenient device. The paper shows that basic formal constraint management can well be developed within a finitely complete category, hence the reference to Cartesianity in the title. In the data modelling context, objects of such a category can be thought of as graphs, while their morphisms play two roles: data instances and, when additionally labelled, constraints. Specifically, a generalized sketch S consists of a graph GS and a set of constraints CS declared over GS, and appears as a pattern for typical data schemas in databases, XML, and UML class diagrams. Interoperability of data modelling frameworks and the tools based on them very much depends on the laws regulating the transformation of satisfaction relations between data instances and schemas when the schema graph changes: constraints are translated covariantly, whereas instances are translated contravariantly. Investigation of this transformation pattern is the main mathematical subject of the paper.
A Hybrid System for Systematic Generalization in Simple Arithmetic Problems ; Solving symbolic reasoning problems that require compositionality and systematicity is considered one of the key ingredients of human intelligence. However, symbolic reasoning is still a great challenge for deep learning models, which often cannot generalize the reasoning pattern to outofdistribution test cases. In this work, we propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols. The model acquires such a skill by learning appropriate substitution rules, which are applied iteratively to the input string until the expression is completely resolved. We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases, significantly outperforming both a sequencetosequence model trained endtoend and a stateoftheart large language model.
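The iterative-substitution loop can be emulated in a few lines; here Python's `eval` stands in for the learned substitution rules, purely to illustrate the control flow of resolving innermost subexpressions first.

```python
import re

def solve(expr):
    """Repeatedly replace an innermost parenthesized subexpression with its
    value until the expression is fully resolved."""
    inner = re.compile(r"\(([^()]+)\)")
    while (m := inner.search(expr)):
        expr = expr[:m.start()] + str(eval(m.group(1))) + expr[m.end():]
    return eval(expr)

print(solve("(2+(3*4))-(1+1)"))  # 12
```

The hybrid system's point is that each rewrite is produced by a learned rule rather than an oracle evaluator, which is what makes generalization to deeper nesting a meaningful test.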
ClassIncremental Learning using Diffusion Model for Distillation and Replay ; Classincremental learning aims to learn new classes in an incremental fashion without forgetting the previously learned ones. Several research works have shown how additional data can be used by incremental models to help mitigate catastrophic forgetting. In this work, following the recent breakthrough in texttoimage generative models and their wide distribution, we propose the use of a pretrained Stable Diffusion model as a source of additional data for classincremental learning. Compared to competitive methods that rely on external, often unlabeled, datasets of real images, our approach can generate synthetic samples belonging to the same classes as the previously encountered images. This allows us to use those additional data samples not only in the distillation loss but also for replay in the classification loss. Experiments on the competitive benchmarks CIFAR100, ImageNetSubset, and ImageNet demonstrate how this new approach can be used to further improve the performance of stateoftheart methods for classincremental learning on large scale datasets.
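Generating class-conditioned replay data with a pretrained text-to-image model might look like the following sketch, which assumes the `diffusers` library; the model identifier and prompt template are examples, not the paper's exact setup.

```python
# Hedged sketch: synthesize replay images for previously seen classes.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def replay_samples(old_class_names, per_class=4):
    images = {}
    for name in old_class_names:
        out = pipe(f"a photo of a {name}", num_images_per_prompt=per_class)
        images[name] = out.images   # synthetic samples labeled with `name`
    return images
```

Such samples can then enter both the distillation loss and the classification loss, as described above.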
DoReMi: Grounding Language Model by Detecting and Recovering from PlanExecution Misalignment ; Large language models LLMs encode a vast amount of semantic knowledge and possess remarkable understanding and reasoning capabilities. Previous work has explored how to ground LLMs in robotic tasks to generate feasible and executable textual plans. However, lowlevel execution in the physical world may deviate from the highlevel textual plan due to environmental perturbations or imperfect controller design. In this paper, we propose DoReMi, a novel language model grounding framework that enables immediate Detection and Recovery from Misalignments between plan and execution. Specifically, we leverage LLMs to play a dual role, aiding not only in highlevel planning but also in generating constraints that can indicate misalignment during execution. Then vision language models VLMs are utilized to detect constraint violations continuously. Our pipeline can monitor the lowlevel execution and enable timely recovery if certain planexecution misalignment occurs. Experiments on various complex tasks including robot arms and humanoid robots demonstrate that our method can lead to higher task success rates and shorter task completion times. Videos of DoReMi are available at httpssites.google.comviewdoremipaper.
Changepoints analysis for generalized integervalued autoregressive model via minimum description length principle ; This article considers the problem of modeling a class of nonstationary count time series using multiple changepoints generalized integervalued autoregressive MCPGINAR processes. The minimum description length principle MDL is applied to study the statistical inference for the MCPGINAR model, and the consistency results of the MDL model selection procedure are established under the conditions of known and unknown number of changepoints, respectively. To find the best combination of the number of changepoints, the locations of changepoints, the order of each segment and its parameters, a genetic algorithm with simulated annealing is implemented to solve this difficult optimization problem. In particular, the simulated annealing process mitigates the premature convergence problem of the traditional genetic algorithm. Numerical results from simulation experiments and three examples of real data analyses show that the procedure has excellent empirical properties.
Squeezing LargeScale Diffusion Models for Mobile ; The emergence of diffusion models has greatly broadened the scope of highfidelity image synthesis, resulting in notable advancements in both practical implementation and academic research. With the active adoption of the model in various realworld applications, the need for ondevice deployment has grown considerably. However, deploying large diffusion models such as Stable Diffusion with more than one billion parameters to mobile devices poses distinctive challenges due to the limited computational and memory resources, which may vary according to the device. In this paper, we present the challenges and solutions for deploying Stable Diffusion on mobile devices with the TensorFlow Lite framework, which supports both iOS and Android devices. The resulting Mobile Stable Diffusion achieves an inference latency of under 7 seconds for 512x512 image generation on Android devices with mobile GPUs.