Domain generalization in deep learning for contrast-enhanced imaging ; The domain generalization problem has been widely investigated in deep learning for non-contrast imaging over the last years, but it has received limited attention for contrast-enhanced imaging. However, there are marked differences in contrast imaging protocols across clinical centers, in particular in the time between contrast injection and image acquisition, while access to multi-center contrast-enhanced image data is limited compared to available datasets for non-contrast imaging. This calls for new tools for generalizing single-domain, single-center deep learning models across new unseen domains and clinical centers in contrast-enhanced imaging. In this paper, we present an exhaustive evaluation of deep learning techniques to achieve generalizability to unseen clinical centers for contrast-enhanced image segmentation. To this end, several techniques are investigated, optimized and systematically evaluated, including data augmentation, domain mixing, transfer learning and domain adaptation. To demonstrate the potential of domain generalization for contrast-enhanced imaging, the methods are evaluated for ventricular segmentation in contrast-enhanced cardiac magnetic resonance imaging (MRI). The results are obtained on a multi-center cardiac contrast-enhanced MRI dataset acquired in four hospitals located in three countries (France, Spain and China). They show that the combination of data augmentation and transfer learning can lead to single-center models that generalize well to new clinical centers not included during training. Single-domain neural networks enriched with suitable generalization procedures can reach and even surpass the performance of multi-center, multi-vendor models in contrast-enhanced imaging, hence eliminating the need for comprehensive multi-center datasets to train generalizable models.
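To make the data-augmentation side of this recipe concrete, here is a minimal sketch of center-agnostic intensity and spatial augmentations applied jointly to an image/mask pair; the specific transforms and ranges are illustrative assumptions, not the policy optimized in the paper, and the transfer-learning step would simply fine-tune a source-center segmentation network on batches augmented this way.

```python
import numpy as np

def augment(image, mask, rng=np.random.default_rng()):
    """Apply simple intensity and spatial augmentations to a 2D slice.

    Intensity scaling/shifting loosely mimics contrast-timing variability
    across centers; flips and 90-degree rotations add spatial diversity.
    """
    # Random intensity scaling and shift (contrast variability).
    scale = rng.uniform(0.8, 1.2)
    shift = rng.uniform(-0.1, 0.1) * image.std()
    image = image * scale + shift

    # Random additive Gaussian noise.
    image = image + rng.normal(0.0, 0.02 * image.std(), size=image.shape)

    # Random flips, applied identically to image and mask.
    if rng.random() < 0.5:
        image, mask = np.flip(image, axis=0), np.flip(mask, axis=0)
    if rng.random() < 0.5:
        image, mask = np.flip(image, axis=1), np.flip(mask, axis=1)

    # Random 90-degree rotation.
    k = rng.integers(0, 4)
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    return image.copy(), mask.copy()
```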
Few-Shot Bot: Prompt-Based Learning for Dialogue Systems ; Learning to converse using only a few examples is a great challenge in conversational AI. The current best conversational models, which are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL), are language models (LMs) fine-tuned on large conversational datasets. Training these models is expensive, both in terms of computational resources and time, and it is hard to keep them up to date with new conversational skills. A simple yet unexplored solution is prompt-based few-shot learning (Brown et al. 2020), which does not require gradient-based fine-tuning but instead uses a few examples in the LM context as the only source of learning. In this paper, we explore prompt-based few-shot learning in dialogue tasks. We benchmark LMs of different sizes in nine response generation tasks, which include four knowledge-grounded tasks, a task-oriented generation task, three open-chat tasks, and controlled stylistic generation, and five conversational parsing tasks, which include dialogue state tracking, graph path generation, persona information extraction, document retrieval, and internet query generation. The current largest released LM (GPT-J-6B) using prompt-based few-shot learning, and thus requiring no training, achieves competitive performance to fully trained state-of-the-art models. Moreover, we propose a novel prompt-based few-shot classifier, which also does not require any fine-tuning, to select the most appropriate prompt given a dialogue history. Finally, by combining the power of prompt-based few-shot learning and a Skill Selector, we create an end-to-end chatbot named the Few-Shot Bot (FSB), which automatically selects the most appropriate conversational skill, queries different knowledge bases or the internet, and uses the retrieved knowledge to generate a human-like response, all using only a few dialogue examples per skill.
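A minimal sketch of the prompt-based few-shot idea described above: a handful of dialogue examples are concatenated into the LM context and the model completes the next response with no gradient updates. The prompt template and the `generate` call are assumptions standing in for whatever LM interface (e.g., a GPT-J checkpoint) is actually used.

```python
def build_fewshot_prompt(examples, dialogue_history, skill_name="chit-chat"):
    """Concatenate a few (history, response) examples plus the current
    dialogue history into a single prompt string; the LM continues it."""
    parts = [f"The following are examples of {skill_name} conversations.\n"]
    for history, response in examples:
        parts.append(f"Dialogue: {history}\nResponse: {response}\n")
    parts.append(f"Dialogue: {dialogue_history}\nResponse:")
    return "\n".join(parts)

# Hypothetical usage with an arbitrary text-generation backend:
# reply = generate(build_fewshot_prompt(shots, history), stop="\n")
```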
SUPA: A Lightweight Diagnostic Simulator for Machine Learning in Particle Physics ; Deep learning methods have gained popularity in high energy physics for fast modeling of particle showers in detectors. Detailed simulation frameworks such as the gold standard Geant4 are computationally intensive, and current deep generative architectures work on discretized, lower resolution versions of the detailed simulation. The development of models that work at higher spatial resolutions is currently hindered by the complexity of the full simulation data, and by the lack of simpler, more interpretable benchmarks. Our contribution is SUPA, the SUrrogate PArticle propagation simulator, an algorithm and software package for generating data by simulating simplified particle propagation, scattering and shower development in matter. The generation is extremely fast and easy to use compared to Geant4, but still exhibits the key characteristics and challenges of the detailed simulation. We support this claim experimentally by showing that the performance of generative models on data from our simulator reflects the performance on a dataset generated with Geant4. The proposed simulator generates thousands of particle showers per second on a desktop machine, a speed-up of up to 6 orders of magnitude over Geant4, and stores detailed geometric information about the shower propagation. SUPA provides much greater flexibility for setting initial conditions and defining multiple benchmarks for the development of models. Moreover, interpreting particle showers as point clouds creates a connection to geometric machine learning and provides challenging and fundamentally new datasets for the field. The code for SUPA is available at https://github.com/itsdaniele/SUPA.
On the Effectiveness of Pretrained Models for API Learning ; Developers frequently use APIs to implement certain functionalities, such as parsing Excel files, reading and writing text files line by line, etc. Developers can greatly benefit from automatic API usage sequence generation based on natural language queries for building applications in a faster and cleaner manner. Existing approaches utilize information retrieval models to search for matching API sequences given a query, or use an RNN-based encoder-decoder to generate API sequences. As it stands, the first approach treats queries and API names as bags of words. It lacks deep comprehension of the semantics of the queries. The latter approach adapts a neural language model to encode a user query into a fixed-length context vector and generate API sequences from the context vector. We want to understand the effectiveness of recent Pre-trained Transformer-based Models (PTMs) for the API learning task. These PTMs are trained on large natural language corpora in an unsupervised manner to retain contextual knowledge about the language and have found success in solving similar Natural Language Processing (NLP) problems. However, the applicability of PTMs has not yet been explored for the API sequence generation task. We use a dataset that contains 7 million annotations collected from GitHub to evaluate the PTMs empirically. This dataset was also used to assess previous approaches. Based on our results, PTMs generate more accurate API sequences and outperform other related methods by around 11%. We have also identified two different tokenization approaches that can contribute to a significant boost in PTMs' performance for the API sequence generation task.
Accelerating Inhibitor Discovery With A Deep Generative Foundation Model: Validation for SARS-CoV-2 Drug Targets ; The discovery of novel inhibitor molecules for emerging drug-target proteins is widely acknowledged as a challenging inverse design problem. Exhaustive exploration of the vast chemical search space is impractical, especially when the target structure or active molecules are unknown. Here we validate experimentally the broad utility of a deep generative framework trained at-scale on protein sequences, small molecules, and their mutual interactions that is unbiased toward any specific target. As demonstrators, we consider two dissimilar and relevant SARS-CoV-2 targets: the main protease and the spike protein receptor binding domain (RBD). To perform target-aware design of novel inhibitor molecules, a protein sequence-conditioned sampling on the generative foundation model is performed. Despite using only the target sequence information, and without performing any target-specific adaptation of the generative model, micromolar-level inhibition was observed in in vitro experiments for two candidates out of only four synthesized for each target. The most potent spike RBD inhibitor also exhibited activity against several variants in live virus neutralization assays. These results therefore establish that a single, broadly deployable generative foundation model for accelerated hit discovery is effective and efficient, even in the most general case where neither target structure nor binder information is available.
Learning Distinct and Representative Styles for Image Captioning ; Over the years, state-of-the-art (SoTA) image captioning methods have achieved promising results on some evaluation metrics (e.g., CIDEr). However, recent findings show that the captions generated by these methods tend to be biased toward the average caption that only captures the most general mode (a.k.a., language pattern) in the training corpus, i.e., the so-called mode collapse problem. Affected by it, the generated captions are limited in diversity and usually less informative than natural image descriptions made by humans. In this paper, we seek to avoid this problem by proposing a Discrete Mode Learning (DML) paradigm for image captioning. Our innovative idea is to explore the rich modes in the training caption corpus to learn a set of mode embeddings, and further use them to control the mode of the generated captions for existing image captioning models. Specifically, the proposed DML optimizes a dual architecture that consists of an image-conditioned discrete variational autoencoder (CdVAE) branch and a mode-conditioned image captioning (MIC) branch. The CdVAE branch maps each image caption to one of the mode embeddings stored in a learned codebook, and is trained with a pure non-autoregressive generation objective to make the modes distinct and representative. The MIC branch can be simply modified from an existing image captioning model, where the mode embedding is added to the original word embeddings as the control signal. In the experiments, we apply the proposed DML to two widely used image captioning models, Transformer and AoANet. The results show that the learned mode embeddings successfully facilitate these models to generate high-quality image captions with different modes, further leading to better performance for both diversity and quality on the MS COCO dataset.
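The codebook lookup at the core of the CdVAE branch can be pictured as standard vector quantization: a caption encoding is mapped to its nearest mode embedding, which then conditions the captioning branch. The sketch below is a generic nearest-neighbour assignment, not the authors' exact architecture.

```python
import numpy as np

def assign_mode(caption_encoding, codebook):
    """Map a caption encoding to the index of its nearest mode embedding.

    codebook: (num_modes, dim) array of learned mode embeddings.
    caption_encoding: (dim,) vector produced by the caption encoder.
    """
    distances = np.linalg.norm(codebook - caption_encoding, axis=1)
    return int(np.argmin(distances))

# The selected mode embedding would then be added to the word embeddings
# of the captioning model as a control signal:
# controlled_embeddings = word_embeddings + codebook[mode_index]
```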
The Vendi Score: A Diversity Evaluation Metric for Machine Learning ; Diversity is an important criterion for many areas of machine learning (ML), including generative modeling and dataset curation. However, existing metrics for measuring diversity are often domain-specific and limited in flexibility. In this paper, we address the diversity evaluation problem by proposing the Vendi Score, which connects and extends ideas from ecology and quantum statistical mechanics to ML. The Vendi Score is defined as the exponential of the Shannon entropy of the eigenvalues of a similarity matrix. This matrix is induced by a user-defined similarity function applied to the sample to be evaluated for diversity. In taking a similarity function as input, the Vendi Score enables its user to specify any desired form of diversity. Importantly, unlike many existing metrics in ML, the Vendi Score does not require a reference dataset or distribution over samples or labels; it is therefore general and applicable to any generative model, decoding algorithm, and dataset from any domain where similarity can be defined. We showcase the Vendi Score on molecular generative modeling, where we found it addresses shortcomings of the current diversity metric of choice in that domain. We also applied the Vendi Score to generative models of images and decoding algorithms of text, where we found it confirms known results about diversity in those domains. Furthermore, we used the Vendi Score to measure mode collapse, a known shortcoming of generative adversarial networks (GANs). In particular, the Vendi Score revealed that even GANs that capture all the modes of a labeled dataset can be less diverse than the original dataset. Finally, the interpretability of the Vendi Score allowed us to diagnose several benchmark ML datasets for diversity, opening the door for diversity-informed data augmentation.
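The definition above translates directly into a few lines of code: build the similarity matrix with a user-chosen kernel, normalize it by the number of samples, and exponentiate the Shannon entropy of its eigenvalues. A minimal sketch, assuming a kernel with similarity(x, x) = 1:

```python
import numpy as np

def vendi_score(samples, similarity):
    """Vendi Score: exp(Shannon entropy of the eigenvalues of K/n),
    where K[i, j] = similarity(samples[i], samples[j])."""
    n = len(samples)
    K = np.array([[similarity(a, b) for b in samples] for a in samples])
    eigenvalues = np.linalg.eigvalsh(K / n)          # real, sum to 1
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # drop numerical zeros
    entropy = -np.sum(eigenvalues * np.log(eigenvalues))
    return float(np.exp(entropy))

# n identical samples give a score of 1; n mutually dissimilar ones give n.
x = np.eye(4)
print(vendi_score(list(x), lambda a, b: float(a @ b)))  # -> 4.0
```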
Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models ; Detecting bugs in Deep Learning (DL) libraries (e.g., TensorFlow/PyTorch) is critical for almost all downstream DL systems in ensuring effectiveness/safety for end users. Meanwhile, traditional fuzzing techniques can hardly be effective for such a challenging domain, since the input DL programs need to satisfy both the input language (e.g., Python) syntax/semantics and the DL API input/shape constraints for tensor computations. To address these limitations, we propose TitanFuzz, the first approach to directly leveraging Large Language Models (LLMs) to generate input programs for fuzzing DL libraries. LLMs are titanic models trained on billions of code snippets and can autoregressively generate human-like code snippets. Our key insight is that modern LLMs can also include numerous code snippets invoking DL library APIs in their training corpora, and thus can implicitly learn both language syntax/semantics and intricate DL API constraints for valid DL program generation. More specifically, we use both generative and infilling LLMs (e.g., Codex/InCoder) to generate and mutate valid/diverse input DL programs for fuzzing. Our experimental results demonstrate that TitanFuzz can achieve 30.38%/50.84% higher code coverage than state-of-the-art fuzzers on TensorFlow/PyTorch. Furthermore, TitanFuzz is able to detect 65 bugs, with 41 already confirmed as previously unknown bugs. This paper demonstrates that modern titanic LLMs can be leveraged to directly perform both generation-based and mutation-based fuzzing studied for decades, while being fully automated, generalizable, and applicable to domains challenging for traditional approaches (such as DL systems). We hope TitanFuzz can stimulate more work in this promising direction of LLMs for fuzzing.
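Schematically, such an LLM-based fuzzer loops over generate, execute, and collect; in the sketch below, `llm_generate` is a placeholder for a call to a code LLM (e.g., Codex or InCoder), and the oracle simply catches crashes, which simplifies TitanFuzz's actual seed selection and mutation operators.

```python
import traceback

def fuzz_api(api_name, seed_snippets, llm_generate, num_iterations=100):
    """Ask a code LLM to produce programs exercising `api_name`, execute
    them, and collect any that crash the library under test."""
    crashes = []
    for _ in range(num_iterations):
        prompt = (
            f"# Write a short, valid Python program that calls {api_name}\n"
            + "\n".join(seed_snippets[-3:])  # a few recent programs as context
        )
        program = llm_generate(prompt)  # hypothetical LLM call
        try:
            exec(compile(program, "<fuzz>", "exec"), {})  # run the candidate
            seed_snippets.append(program)  # keep valid programs for mutation
        except Exception:
            crashes.append((program, traceback.format_exc()))
    return crashes
```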
Backreaction from gauge fields produced during inflation ; In this work, we study general features of a regime where gauge fields produced during inflation cause a strong backreaction on the background evolution, and its impact on the spectrum and the correlation length of the gauge fields. With this aim, the gradient-expansion formalism previously proposed for the description of inflationary magnetogenesis in purely kinetic or purely axial coupling models is extended to the case when both types of coupling are present. As it is formulated in position space, this method allows us to self-consistently take into account the backreaction of the generated gauge fields on the inflationary background, because it captures the nonlinear evolution of all physically relevant gauge-field modes at once. Using this extended gradient-expansion formalism, suitable for a wide range of inflationary magnetogenesis models, we study gauge-field production in a specific generalization of the Starobinsky R^2 model with a nonminimal coupling of gauge fields to gravity. In the Einstein frame, this model implies, in addition to an asymptotically flat inflaton potential, also a nontrivial form of the kinetic and axial coupling functions, which decrease in time and, thus, are potentially suitable for the generation of gauge fields with a scale-invariant or even red-tilted power spectrum. The numerical analysis shows, however, that backreaction, which unavoidably occurs in this model for the interesting range of parameters, strongly alters the behavior of the spectrum and does not allow one to obtain a sufficiently large correlation length for the magnetic field. The oscillatory behavior of the generated field, caused by the retarded response of the gauge field to changes in the inflaton velocity, was also revealed.
A Complete Survey on Generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 All You Need? ; As ChatGPT goes viral, generative AI (AIGC, a.k.a. AI-generated content) has made headlines everywhere because of its ability to analyze and create text, images, and beyond. With such overwhelming media coverage, it is almost impossible for us to miss the opportunity to glimpse AIGC from a certain angle. In the era of AI transitioning from pure analysis to creation, it is worth noting that ChatGPT, with its most recent language model GPT-4, is just a tool out of numerous AIGC tasks. Impressed by the capability of ChatGPT, many people are wondering about its limits: can GPT-5 or other future GPT variants help ChatGPT unify all AIGC tasks for diversified content creation? Toward answering this question, a comprehensive review of existing AIGC tasks is needed. As such, our work comes to fill this gap promptly by offering a first look at AIGC, ranging from its techniques to applications. Modern generative AI relies on various technical foundations, ranging from model architecture and self-supervised pretraining to generative modeling methods (like GAN and diffusion models). After introducing the fundamental techniques, this work focuses on the technological development of various AIGC tasks based on their output type, including text, images, videos, 3D content, etc., which depicts the full potential of ChatGPT's future. Moreover, we summarize their significant applications in some mainstream industries, such as education and creativity content. Finally, we discuss the challenges currently faced and present an outlook on how generative AI might evolve in the near future.
Learning to Tokenize for Generative Retrieval ; Conventional document retrieval techniques are mainly based on the index-retrieve paradigm. It is challenging to optimize pipelines based on this paradigm in an end-to-end manner. As an alternative, generative retrieval represents documents as identifiers (docids) and retrieves documents by generating docids, enabling end-to-end modeling of document retrieval tasks. However, it is an open question how one should define the document identifiers. Current approaches to the task of defining document identifiers rely on fixed rule-based docids, such as the title of a document or the result of clustering BERT embeddings, which often fail to capture the complete semantic information of a document. We propose GenRet, a document tokenization learning method to address the challenge of defining document identifiers for generative retrieval. GenRet learns to tokenize documents into short discrete representations (i.e., docids) via a discrete autoencoding approach. Three components are included in GenRet: (i) a tokenization model that produces docids for documents; (ii) a reconstruction model that learns to reconstruct a document based on a docid; and (iii) a sequence-to-sequence retrieval model that generates relevant document identifiers directly for a designated query. By using an autoencoding framework, GenRet learns semantic docids in a fully end-to-end manner. We also develop a progressive training scheme to capture the autoregressive nature of docids and to stabilize training. We conduct experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the effectiveness of GenRet. GenRet establishes the new state-of-the-art on the NQ320K dataset. In particular, compared to generative retrieval baselines, GenRet achieves significant improvements on unseen documents. GenRet also outperforms comparable baselines on MS MARCO and BEIR, demonstrating the method's generalizability.
Exploring the Viability of Synthetic Query Generation for Relevance Prediction ; Query-document relevance prediction is a critical problem in Information Retrieval systems. This problem has increasingly been tackled using pre-trained transformer-based models which are fine-tuned using large collections of labeled data. However, in specialized domains such as e-commerce and healthcare, the viability of this approach is limited by the dearth of large in-domain data. To address this paucity, recent methods leverage these powerful models to generate high-quality task- and domain-specific synthetic data. Prior work has largely explored synthetic data generation or query generation (QGen) for Question Answering (QA) and binary (yes/no) relevance prediction, where, for instance, the QGen models are given a document and trained to generate a query relevant to that document. However, in many problems we have a more fine-grained notion of relevance than a simple yes/no label. Thus, in this work, we conduct a detailed study into how QGen approaches can be leveraged for nuanced relevance prediction. We demonstrate that, contrary to claims from prior works, current QGen approaches fall short of the more conventional cross-domain transfer-learning approaches. Via empirical studies spanning three public e-commerce benchmarks, we identify new shortcomings of existing QGen approaches, including their inability to distinguish between different grades of relevance. To address this, we introduce label-conditioned QGen models which incorporate knowledge about the different grades of relevance. While our experiments demonstrate that these modifications help improve the performance of QGen techniques, we also find that QGen approaches struggle to capture the full nuance of the relevance label space and, as a result, the generated queries are not faithful to the desired relevance label.
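Label conditioning can be as simple as prepending the desired relevance grade to the document before query generation, so that the generator learns a separate query distribution per grade. A sketch of that input construction, with an illustrative (assumed) grade vocabulary:

```python
def qgen_input(document, relevance_grade):
    """Build the input text for a label-conditioned query generator.

    relevance_grade: e.g., "exact", "substitute", "complement", "irrelevant".
    The generator is trained to produce a query whose relation to the
    document matches the requested grade.
    """
    return f"relevance: {relevance_grade} | document: {document}"

# During fine-tuning, pairs (qgen_input(doc, grade), observed_query) are used;
# at inference time, varying `relevance_grade` yields graded synthetic queries.
```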
Passive learning of active causal strategies in agents and language models ; What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited. However, we show that purely passive learning can in fact allow an agent to learn generalizable strategies for determining and using causal structures, as long as the agent can intervene at test time. We formally illustrate that learning a strategy of first experimenting, then seeking goals, can allow generalization from passive learning in principle. We then show empirically that agents trained via imitation on expert data can indeed generalize at test time to infer and use causal links which are never present in the training data; these agents can also generalize experimentation strategies to novel variable sets never observed in training. We then show that strategies for causal intervention and exploitation can be generalized from passive data even in a more complex environment with high-dimensional observations, with the support of natural language explanations. Explanations can even allow passive learners to generalize out-of-distribution from perfectly-confounded training data. Finally, we show that language models, trained only on passive next-word prediction, can generalize causal intervention strategies from a few-shot prompt containing examples of experimentation, together with explanations and reasoning. These results highlight the surprising power of passive learning of active causal strategies, and may help to understand the behaviors and capabilities of language models.
RAP-Gen: Retrieval-Augmented Patch Generation with CodeT5 for Automatic Program Repair ; Automatic program repair (APR) is crucial to reduce manual debugging efforts for developers and improve software reliability. While conventional search-based techniques typically rely on heuristic rules or a redundancy assumption to mine fix patterns, recent years have witnessed the surge of deep learning (DL) based approaches to automate the program repair process in a data-driven manner. However, their performance is often limited by a fixed set of parameters to model the highly complex search space of APR. To ease such burden on the parametric models, in this work we propose a novel Retrieval-Augmented Patch Generation framework (RAP-Gen) that explicitly leverages relevant fix patterns retrieved from a codebase of previous bug-fix pairs. Specifically, we build a hybrid patch retriever to account for both lexical and semantic matching based on the raw source code in a language-agnostic manner, which does not rely on any code-specific features. In addition, we adapt a code-aware language model, CodeT5, as our foundation model to facilitate both patch retrieval and generation tasks in a unified manner. We adopt a stage-wise approach where the patch retriever first retrieves a relevant external bug-fix pair to augment the buggy input for the CodeT5 patch generator, which synthesizes a ranked list of repair patch candidates. Notably, RAP-Gen is a generic APR framework that can flexibly integrate different patch retrievers and generators to repair various types of bugs. We thoroughly evaluate RAP-Gen on three benchmarks in two programming languages, including the TFix benchmark in JavaScript, and the Code Refinement and Defects4J benchmarks in Java, where the bug localization information may or may not be provided. Experimental results show that RAP-Gen significantly outperforms previous state-of-the-art approaches on all benchmarks, e.g., repairing 15 more bugs on 818 Defects4J bugs.
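The hybrid retriever can be pictured as a weighted mix of a lexical score (token overlap) and a semantic score (embedding similarity) over past bug-fix pairs. The sketch below uses Jaccard overlap and cosine similarity as simple stand-ins for the paper's actual scoring functions, with `embed` an assumed code-embedding function (e.g., a CodeT5 encoder).

```python
import numpy as np

def lexical_score(query_tokens, candidate_tokens):
    """Token-overlap (Jaccard) similarity between two token sets."""
    q, c = set(query_tokens), set(candidate_tokens)
    return len(q & c) / max(len(q | c), 1)

def semantic_score(query_vec, candidate_vec):
    """Cosine similarity between dense code embeddings."""
    denom = np.linalg.norm(query_vec) * np.linalg.norm(candidate_vec)
    return float(query_vec @ candidate_vec / denom) if denom else 0.0

def retrieve_fix_pair(buggy_code, codebase, embed, alpha=0.5, top_k=1):
    """Rank previous (bug, fix) pairs by a hybrid lexical + semantic score."""
    q_tokens, q_vec = buggy_code.split(), embed(buggy_code)
    scored = []
    for bug, fix in codebase:
        score = alpha * lexical_score(q_tokens, bug.split()) \
              + (1 - alpha) * semantic_score(q_vec, embed(bug))
        scored.append((score, bug, fix))
    return sorted(scored, reverse=True)[:top_k]
```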
Neuro-Symbolic Reasoning for Planning: Counterexample-Guided Inductive Synthesis using Large Language Models and Satisfiability Solving ; Generative large language models (LLMs) with instruct training, such as GPT-4, can follow human-provided instruction prompts and generate human-like responses to these prompts. Apart from natural language responses, they have also been found to be effective at generating formal artifacts such as code, plans, and logical specifications from natural language prompts. Despite their remarkably improved accuracy, these models are still known to produce factually incorrect or contextually inappropriate results despite their syntactic coherence, a phenomenon often referred to as hallucination. This limitation makes it difficult to use these models to synthesize formal artifacts that are used in safety-critical applications. Unlike tasks such as text summarization and question-answering, bugs in code, plans, and other formal artifacts produced by LLMs can be catastrophic. We posit that we can use satisfiability modulo theories (SMT) solvers as deductive reasoning engines to analyze the generated solutions from the LLMs, produce counterexamples when the solutions are incorrect, and provide that feedback to the LLMs, exploiting the dialog capability of instruct-trained LLMs. This interaction between inductive LLMs and deductive SMT solvers can iteratively steer the LLM to generate the correct response. In our experiments, we use planning over the domain of blocks as our synthesis task for evaluating our approach. We use GPT-4, GPT-3.5 Turbo, Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our method allows the user to communicate the planning problem in natural language; even the formulation of queries to SMT solvers is automatically generated from natural language. Thus, the proposed technique can enable non-expert users to describe their problems in natural language, and the combination of LLMs and SMT solvers can produce provably correct solutions.
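The inductive/deductive loop can be sketched as follows: the LLM proposes a candidate, Z3 searches for a violation of the specification, and any counterexample is fed back into the conversation for the next attempt. The `llm_propose` and `encode_candidate` functions are placeholders for the paper's actual prompting and blocks-world encoding.

```python
from z3 import Solver, sat  # Z3's Python bindings

def synthesize(violation_conditions, encode_candidate, llm_propose, max_rounds=10):
    """Counterexample-guided loop: the LLM proposes, the SMT solver checks.

    violation_conditions: Z3 formulas that are satisfiable together with the
        encoded candidate exactly when the candidate violates the spec.
    encode_candidate: maps an LLM answer to Z3 formulas describing it.
    llm_propose: hypothetical call to an instruction-tuned LLM.
    """
    feedback = "Propose a plan."
    for _ in range(max_rounds):
        candidate = llm_propose(feedback)           # inductive step
        solver = Solver()
        solver.add(encode_candidate(candidate))     # what the plan does
        solver.add(violation_conditions)            # ways it could fail
        if solver.check() == sat:                   # counterexample found
            feedback = f"Your plan fails in this case: {solver.model()}"
        else:
            return candidate                        # no violation: accept
    return None
```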
Axisymmetric simulations of magnetorotational core collapse: approximate inclusion of general relativistic effects ; We continue our investigations of the magnetorotational collapse of stellar cores, discussing simulations performed with a modified Newtonian gravitational potential that mimics general relativistic effects. The approximate TOV potential used in our simulations captures several features of fully relativistic simulations quite well. It is able to correctly reproduce the behavior of models which show a qualitative change both of the dynamics and the gravitational wave signal when switching from Newtonian to fully relativistic simulations. If this is not the case, the Newtonian and the approximate TOV models differ quantitatively. The collapse proceeds to higher densities with the approximate TOV potential, allowing for a more efficient amplification of the magnetic field by differential rotation. Sufficiently strong magnetic fields brake down the core's rotation and trigger a contraction phase to higher densities. Several models exhibit two different kinds of shock generation. Due to magnetic braking, a first shock wave created during the initial centrifugal bounce does not suffice to eject any mass, and the core continues to collapse to supranuclear densities. Another, stronger shock wave is generated during the second bounce as the core exceeds nuclear matter density. The gravitational wave signal of these models does not fit into the standard classification. Instead it belongs to the signal type IV introduced by us in the first paper of this series. This signal type is more frequent for the approximate relativistic potential than for the Newtonian one. Strongly magnetized models emit a substantial fraction of their GW power at very low frequencies. A flat spectrum between 10 Hz and 100 kHz denotes the generation of a jet-like outflow. (Abstract abbreviated.)
Natural R-Parity Conservation with Horizontal Symmetries: a Four-Generation Model ; In most supersymmetric models the stability of the proton is ensured by invoking R-parity. A necessary ingredient to enforce R-parity is the possibility of distinguishing the lepton superfields from the Higgs ones. This is generally achieved either by assuming different charges under some matter parity, or by assigning the superfields to different representations of a unified gauge group. We want to put forward the idea that the replication of the fermion generations, which constitutes an intrinsic difference between the fermion and Higgs superfields, can give a clue to understanding R-parity as an accidental symmetry. More ambitiously, we suggest a possible relation between proton stability and the actual number of fermion generations. We carry out our investigation in the framework of non-Abelian horizontal gauge symmetries. We identify SU(4)_H as the only acceptable horizontal gauge group which can naturally ensure the absence of R-parity violating operators, without conflicting with other theoretical and phenomenological constraints. We analyze a version of the supersymmetric standard model equipped with a gauged horizontal SU(4)_H, in which R-parity is accidental. The model predicts four families of fermions, it allows for the dynamical generation of a realistic hierarchy of fermion masses without any ad hoc choice of small Yukawa couplings, it ensures in a natural way the heaviness of all the fourth family fermions (including the neutrino), and it predicts a lower limit for the tau-neutrino mass of a few eV. The scale of the breaking of the horizontal symmetry can be constrained rather precisely in a narrow window around ~10^11 GeV. Some interesting astrophysical and cosmological implications of the model are addressed as well.
Phenomenology of flavor-mediated supersymmetry breaking ; The phenomenology of a new, economical SUSY model that utilizes dynamical SUSY breaking and gauge mediation (GM) for the generation of the sparticle spectrum and the hierarchy of fermion masses is discussed. Similarities between the communication of SUSY breaking through a messenger sector and the generation of flavor using the Froggatt-Nielsen (FN) mechanism are exploited, leading to the identification of vector-like messenger fields with FN fields, and the messenger U(1) as a flavor symmetry. An immediate consequence is that the first and second generation scalars acquire flavor-dependent masses, but do not violate FCNC bounds since their mass scale, consistent with effective SUSY, is of order 10 TeV. We define and advocate a minimal flavor-mediated model (MFMM), recently introduced in the literature, that successfully accommodates the small flavor-breaking parameters of the standard model using order-one couplings and ratios of flavon field vevs. The mediation of SUSY breaking occurs via two-loop log-enhanced GM contributions, as well as several one-loop and two-loop Yukawa-mediated contributions for which we provide analytical expressions. The MFMM is parameterized by a small set of masses and couplings, with values restricted by several model constraints and experimental data. The next-to-lightest sparticle (NLSP) always has a decay length that is larger than the scale of a detector, and is either the lightest stau or the lightest neutralino. Similar to ordinary GM models, the best collider search strategies are, respectively, inclusive production of at least one highly ionizing track, or events with many taus plus missing energy. In addition, D^0-\bar{D}^0 mixing is also a generic low energy signal. Finally, the dynamical generation of the neutrino masses is briefly discussed.
A quasi-particle description of the M(3,p) models ; The M(3,p) minimal models are reconsidered from the point of view of the extended algebra whose generators are the energy-momentum tensor and the primary field phi_{2,1} of dimension (p-2)/4. Within this framework, we provide a quasi-particle description of these models, in which all states are expressed solely in terms of the phi_{2,1}-modes. More precisely, we show that all the states can be written in terms of phi_{2,1}-type highest-weight states and their phi_{2,1}-descendants. We further demonstrate that the conformal dimension of these highest-weight states can be calculated from the phi_{2,1} commutation relations, the highest-weight conditions and associativity. For the simplest models (p=5,7), the full spectrum is explicitly reconstructed along these lines. For p odd, the commutation relations between the phi_{2,1} modes take the form of infinite sums, i.e., of generalized commutation relations akin to parafermionic models. In that case, an unexpected operator, generalizing the Witten index, is unravelled in the OPE of phi_{2,1} with itself. A quasi-particle basis formulated in terms of the sole phi_{1,2} modes is studied for all allowed values of p. We argue that it is governed by jagged-type partitions further subject to a difference-2 condition at distance 2. We demonstrate the correctness of this basis by constructing its generating function, from which the proper fermionic expressions of the combinations of the Virasoro irreducible characters chi_{1,s} and chi_{1,p-s} for 1 <= s <= [p/3]+1 are recovered. As an aside, a practical technique for implementing associativity at the level of mode computations is presented, together with a general discussion of the relation between associativity and the Jacobi identities.
An alternative to the plasma emission model: Particle-In-Cell, self-consistent electromagnetic wave emission simulations of solar type III radio bursts ; 1.5D PIC, relativistic, fully electromagnetic (EM) simulations are used to model EM wave emission generation in the context of solar type III radio bursts. The model studies the generation of EM waves by a superthermal, hot beam of electrons injected into a plasma thread that contains a uniform longitudinal magnetic field and a parabolic density gradient. In effect, a single magnetic field line connecting the Sun to Earth is considered, for which several cases are studied. (i) We find that the physical system without a beam is stable and only low-amplitude EM drift waves (noise) are excited. (ii) The beam injection direction is controlled by setting either a longitudinal or an oblique electron initial drift speed, i.e., by setting the beam pitch angle. In the case of zero pitch angle, the beam excites only electrostatic, standing waves, oscillating at the plasma frequency, at the beam injection spatial location, and only low-level EM drift wave noise is also generated. (iii) In the case of oblique beam pitch angles, again electrostatic waves with the same properties are excited. However, now the beam also generates EM waves with properties commensurate with type III radio bursts. The latter is evidenced by the wavelet analysis of the transverse electric field component, which shows that as the beam moves to regions of lower density, the frequency of the EM waves drops accordingly. (iv) When the density gradient is removed, an electron beam with an oblique pitch angle still generates EM radiation. However, in the latter case no frequency decrease is seen. Within the limitations of the model, the study presents the first attempt to produce a simulated dynamical spectrum of type III radio bursts in a fully kinetic plasma model. The latter is based on a 1.5D non-zero pitch angle (non-gyrotropic) electron beam, and is an alternative to the classical plasma emission mechanism.
Quantum-to-Classical Correspondence and Hubbard-Stratonovich Dynamical Systems: a Lie-Algebraic Approach ; We propose a Lie-algebraic duality approach to analyze non-equilibrium evolution of closed dynamical systems and thermodynamics of interacting quantum lattice models formulated in terms of Hubbard-Stratonovich dynamical systems. The first part of the paper utilizes a geometric, Hilbert-space-invariant formulation of unitary time evolution, where a quantum Hamiltonian is viewed as a trajectory in an abstract Lie algebra, while the sought-after evolution operator is a trajectory in a dynamic group, generated by the algebra via exponentiation. The evolution operator is uniquely determined by the time-dependent dual generators that satisfy a system of differential equations, dubbed here dual Schrodinger-Bloch equations, which represent a viable alternative to the conventional Schrodinger formulation. These dual Schrodinger-Bloch equations are derived and analyzed on a number of specific examples. It is shown that deterministic dynamics of a closed classical dynamical system occurs as the action of a symmetry group on a classical manifold and is driven by the same dual generators as in the corresponding quantum problem. This represents the quantum-to-classical correspondence. In the second part of the paper, we further extend the Lie-algebraic approach to a wide class of interacting many-particle lattice models. A generalized Hubbard-Stratonovich transform is proposed, and it is used to show that the thermodynamic partition function of a generic many-body quantum lattice model can be expressed in terms of traces of single-particle evolution operators governed by the dynamic Hubbard-Stratonovich fields. Finally, we derive Hubbard-Stratonovich dynamical systems for the Bose-Hubbard model and a quantum spin model, and use the Lie-algebraic approach to obtain new non-perturbative dual descriptions of these theories.
Indecomposability parameters in chiral Logarithmic Conformal Field Theory ; Work of the last few years has shown that the key algebraic features of Logarithmic Conformal Field Theories (LCFTs) are already present in some finite lattice systems (such as the XXZ spin-1/2 chain) before the continuum limit is taken. This has provided a very convenient way to analyze the structure of indecomposable Virasoro modules and to obtain fusion rules for a variety of models, such as (boundary) percolation, etc. LCFTs allow for additional quantum numbers describing the fine structure of the indecomposable modules, generalizing the 'b-number' introduced initially by Gurarie for the c=0 case. The determination of these indecomposability parameters has given rise to a lot of algebraic work, but their physical meaning has remained somewhat elusive. In a recent paper, a way to measure b for boundary percolation and polymers was proposed. We generalize this work here by devising a general strategy to compute matrix elements of Virasoro generators from the numerical analysis of lattice models and their continuum limit. The method is applied to XXZ spin-1/2 and spin-1 chains with open (free) boundary conditions. They are related to gl(n+m|m)- and osp(n+2m|2m)-invariant superspin chains and to nonlinear sigma models with supercoset target spaces. These models can also be formulated in terms of dense and dilute loop gases. We check the method in many cases where the results were already known analytically. Furthermore, we also confront our findings with a construction generalizing Gurarie's, where logarithms emerge naturally in operator product expansions to compensate for apparently divergent terms. This argument actually allows us to compute indecomposability parameters in any logarithmic theory. A central result of our study is the construction of a Kac table for the indecomposability parameters of the logarithmic minimal models LM(1,p) and LM(p,p+1).
Invariant characterization of the growing and decaying density modes in LTB dust models ; We obtain covariant expressions that generalize the growing and decaying density modes of linear perturbation theory of dust sources by means of the exact density perturbation from the formalism of quasi-local scalars associated with weighted proper volume averages in LTB dust models. The relation between these density modes and theoretical properties of generic LTB models is thoroughly studied by looking at the evolution of the models through a dynamical system whose phase space is parametrized by variables directly related to the modes themselves. The conditions for absence of shell crossings, as well as sign conditions on the modes, become interrelated fluid-flow-preserved constraints that define phase space invariant subspaces. In the general case (both density modes being nonzero), the evolution of phase space trajectories exhibits the expected dominance of the decaying/growing mode in the early/late evolution times, defined by past/future attractors characterized by asymptotic density inhomogeneity. In particular, the growing mode is also dominant for collapsing layers that terminate in a future attractor associated with a Big Crunch singularity, which is qualitatively different from the past attractor marking the Big Bang. Suppression of the decaying mode modifies the early time evolution, with phase space trajectories emerging from an Einstein-de Sitter past attractor associated with homogeneous conditions. Suppression of the growing mode modifies the late time evolution, as phase space trajectories terminate in future attractors associated with homogeneous states. General results are obtained relating the signs of the density modes and the type of asymptotic density profile (clump or void). A critical review is given of previous attempts in the literature to define these density modes for LTB models.
Generic inference of inflation models by non-Gaussianity and primordial power spectrum reconstruction ; We present a generic inference method for inflation models from observational data by the usage of higher-order statistics of the curvature perturbation on uniform density hypersurfaces. This method is based on the calculation of the posterior for the primordial non-Gaussianity parameters f_NL and g_NL, which in general depend on specific parameters of inflation and reheating models, and enables discrimination among the still viable inflation models. To keep analyticity as far as possible and to dispense with numerically expensive sampling techniques, a saddle-point approximation is introduced, whose precision is validated for a numerical toy example. The mathematical formulation is done in a generic way so that the approach remains applicable to cosmic microwave background data as well as to large scale structure data. Additionally, we review a few currently interesting inflation models and present numerical toy examples thereof in two and three dimensions to demonstrate the efficiency of the higher-order statistics method. A second quantity of interest is the primordial power spectrum. Here, we present two Bayesian methods to infer it from observational data, the so-called critical filter and an extension thereof with a smoothness prior, both allowing for a non-parametric spectrum reconstruction. These methods are able to reconstruct the spectra of the observed perturbations and the primordial ones of the curvature perturbation even in the case of non-Gaussianity and partial sky coverage. We argue that observables like T- and B-modes permit measuring both spectra. This also allows one to infer the level of non-Gaussianity generated since inflation.
An Extended action for the effective field theory of dark energy: a stability analysis and a complete guide to the mapping at the basis of EFTCAMB ; We present a generalization of the effective field theory (EFT) formalism for dark energy and modified gravity models to include operators with higher order spatial derivatives. This allows the extension of the EFT framework to a wider class of gravity theories, such as Horava gravity. We present the corresponding extended action, both in the EFT and the Arnowitt-Deser-Misner (ADM) formalism, and proceed to work out a convenient mapping between the two, providing a self-contained and general procedure to translate a given model of gravity into the EFT language at the basis of the Einstein-Boltzmann solver EFTCAMB. Putting this mapping to work, we illustrate, for several interesting models of dark energy and modified gravity, how to express them in the ADM notation and then map them into the EFT formalism. We also provide, for the first time, the full mapping of GLPV models into the EFT framework. We next perform a thorough analysis of the physical stability of the generalized EFT action, in the absence of matter components. We work out viability conditions that correspond to the absence of ghosts and of modes that propagate with a negative speed of sound in the scalar and tensor sectors, as well as the absence of tachyonic modes in the scalar sector. Finally, we extend and generalize the phenomenological basis in terms of alpha-functions introduced to parametrize Horndeski models, to cover all theories with higher order spatial derivatives included in our extended action. We elaborate on the impact of the additional functions on physical quantities, such as the kinetic term and the speeds of propagation for scalar and tensor modes.
Machine learning methods for multimedia information retrieval ; In this thesis we examined several multimodal feature extraction and learning methods for retrieval and classification purposes. We briefly recalled some theoretical results on learning in Section 2 and reviewed several generative and discriminative models in Section 3, while we described the similarity kernel in Section 4. We examined different aspects of multimodal image retrieval and classification in Section 5 and suggested methods for identifying quality assessments of Web documents in Section 6. In our last problem we proposed a similarity kernel for time-series-based classification. The experiments were carried out over publicly available datasets, and the source code for the most essential parts is either open source or released. Since the similarity graphs used (Section 4.2) are greatly constrained for computational purposes, we would like to continue working with more complex, evolving and capable graphs and apply them to different problems, such as capturing rapid changes in the distribution (e.g., session-based recommendation) or complex graphs of literature works. The similarity kernel with the proper metrics reaches, and in many cases improves over, the state-of-the-art. Hence we may conclude that generative models based on instance similarities with multiple modes are a generally applicable model for classification and regression tasks ranging over various domains, including but not limited to the ones presented in this thesis. More generally, the Fisher kernel is not only unique in many ways but is one of the most powerful kernel functions. Therefore we may exploit the Fisher kernel in the future over widely used generative models, such as Boltzmann Machines (Hinton et al., 1984), a particular subset, the Restricted Boltzmann Machines, and Deep Belief Networks (Hinton et al., 2006), Latent Dirichlet Allocation (Blei et al., 2003) or Hidden Markov Models (Baum and Petrie, 1966), to name a few.
The Unreasonable Success of Quantum Probability I: Quantum Measurements as Uniform Fluctuations ; We introduce a 'uniform tension-reduction' (UTR) model, which allows one to represent the probabilities associated with an arbitrary measurement situation, and use it to explain the emergence of quantum probabilities (the Born rule) as 'uniform' fluctuations on this measurement situation. The model exploits the geometry of simplexes to represent the states, in such a way that the measurement probabilities can be derived as the 'Lebesgue measure' of suitably defined convex subregions of the simplexes. We consider a very simple and evocative physical realization of the abstract model, using a material point particle which is acted upon by elastic membranes, which by breaking and collapsing produce the different possible outcomes. This easy-to-visualize mechanical realization allows one to gain considerable insight into the possible hidden structure of an arbitrary measurement process. We also show that the UTR model can be further generalized into a 'general tension-reduction' (GTR) model, describing conditions of lack of knowledge generated by 'non-uniform' fluctuations. In this ampler framework, particularly suitable to describe experiments in cognitive science, we define and motivate a notion of 'universal measurement', describing the most general possible condition of lack of knowledge in a measurement, emphasizing that the uniform fluctuations characterizing quantum measurements can also be understood as an average over all possible forms of non-uniform fluctuations which can be actualized in a measurement context. This means that the Born rule of quantum mechanics can be understood as a first-order approximation of a more general non-uniform theory, thus explaining part of the great success of quantum probability in the description of different domains of reality. This is the first part of a two-part article.
Neutrino mixing matrix and masses at a particular point of the generalized Friedberg-Lee model ; We propose the generalized Friedberg-Lee neutrino mass model at a particular point (point D), at which alpha = beta = 1/3; at this point, the generalized Friedberg-Lee model reduces to the Democratic mass matrix with the S3 symmetry. The Democratic texture has an experimentally unfavored degenerate mass spectrum on the basis of the tri-bimaximal mixing matrix U_TBM. We modify the Democratic mass matrix at point D to obtain a non-degenerate mass spectrum by adding a breaking mass term that preserves the twisted Friedberg-Lee symmetry. The mixing matrix is, however, still U_TBM, which leads to theta_13 = 0 and is not consistent with the results from the Daya Bay and RENO experiments, which have established a nonzero value for theta_13. Preserving the leading behavior of U as tri-bimaximal, we apply the Broken Democratic neutrino mass texture as the mass matrix at point D. Subsequently, we characterize a minimal perturbation mass matrix which is responsible for a nonzero theta_13 along with the CP violation parameters; the solar neutrino mass splitting also results from it. Let us mention that, unlike other investigations, the perturbation matrix is not adopted on an ad hoc basis, but is generated in only one step by the rules of the perturbation method that we will describe. Subsequently, we develop the following results: (a) we obtain the corresponding neutrino mixing matrix of the generalized Friedberg-Lee model at point D with theta_23 = pi/4 and nonzero delta; (b) the ordering of the neutrino masses is inverted; (c) we also obtain the allowed ranges of the mass parameters, the Dirac phase and the Jarlskog parameter, which are consistent with the available experimental data.
Validation of a model for estimating the strength of the vortex created by a Vortex Generator from its Bound Circulation ; A hypothesis is tested and validated for predicting the vortex strength induced by a vortex generator in wall-bounded flow by combining knowledge of the Vortex Generator (VG) geometry and the approaching boundary layer velocity distribution. In this paper, the spanwise distribution of bound circulation on the vortex generator is computed by integrating the pressure force along the VG height calculated using CFD. It is then assumed that all this bound circulation is shed into the wake, to fulfill Helmholtz's theorem, and then curls up into one primary tip vortex. To validate this, the trailed circulation estimated from the distribution of the bound circulation is compared to the circulation in the wake behind the vortex generator determined directly from the wake velocities at some downstream distance. In practical situations, the pressure distribution on the vane is unknown and consequently other estimates of the spanwise force distribution on the VG must instead be applied, such as using 2D airfoil data corresponding to the VG geometry at each wall-normal distance. Such models have previously been proposed and used as an engineering tool to aid preliminary VG design, and it is not the purpose of this paper to refine such engineering models, but to validate their assumptions, such as applying a lifting line model to a VG that has a very low aspect ratio and is placed in a wall boundary layer. Herein, high Reynolds number boundary layer measurements of VG-induced flow were used to validate the Reynolds-Averaged Navier-Stokes (RANS) modeled circulation results and are used for further illustration and validation of the hypothesis.
Capacitated Network Design Games on a Generalized Fair Allocation Model ; The cost-sharing connection game is a variant of routing games on a network. In this model, given a directed graph with edge costs and edge capacities, each agent wants to construct a path from a source to a sink with low cost. The users share the cost of each edge based on a cost-sharing function. One of the simplest cost-sharing functions is defined as the cost divided by the number of users. Most of the previous papers about cost-sharing connection games addressed this cost-sharing function. It models an ideal setting where no overhead arises when people share things, though this might be quite rare in real life; it is more realistic to consider the setting where the cost paid by an agent is the original cost divided by the number of agents using the edge, plus an overhead. In this paper, we model this more realistic scenario of cost-sharing connection games by generalizing the cost-sharing functions. The arguments on the model are based not on concrete cost-sharing functions but on cost-sharing functions under a reasonable scheme; they are applicable to a broad class of cost-sharing functions satisfying the following natural properties: they are (1) non-increasing, (2) lower bounded by the original cost divided by the number of agents, and (3) upper bounded by the original cost, which enables the representation of various cost-sharing scenarios. We investigate the Price of Anarchy (PoA) and the Price of Stability (PoS) under sum-cost and max-cost criteria with the generalized cost-sharing function. Despite the generalization, we obtain the same tight bounds on PoA and PoS as for cost-sharing with no overhead, except for PoS under sum-cost. Moreover, for the sum-cost case, the lower bound on PoS increases from log n to n - 1/(n-1) by the generalization, which is also almost tight because the upper bound is n.
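The scheme constrains any admissible cost-sharing function f(c, k) (edge cost c shared among k users) by the three properties above. Below is a small sketch that checks a candidate function against them; the overhead-capped example function is illustrative, chosen so that a single user still pays at most the full edge cost.

```python
def satisfies_scheme(f, cost, max_users=50, tol=1e-9):
    """Check the three properties of an admissible cost-sharing function:
    (1) non-increasing in the number of users k,
    (2) lower bounded by cost / k,
    (3) upper bounded by cost."""
    values = [f(cost, k) for k in range(1, max_users + 1)]
    non_increasing = all(a >= b - tol for a, b in zip(values, values[1:]))
    lower_ok = all(v >= cost / k - tol for k, v in enumerate(values, start=1))
    upper_ok = all(v <= cost + tol for v in values)
    return non_increasing and lower_ok and upper_ok

# Example: equal split plus a per-agent overhead, capped so that a single
# user never pays more than the full edge cost (property 3).
overhead = 0.5
share = lambda cost, k: cost / k + min(overhead, cost * (1 - 1 / k))
print(satisfies_scheme(share, cost=10.0))  # -> True
```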
Regularization Parameter Estimation for Underdetermined Problems by the chi^2 Principle with Application to 2D Focusing Gravity Inversion ; The chi^2 principle generalizes the Morozov discrepancy principle (MDP) to the augmented residual of the Tikhonov regularized least squares problem. Weighting the data fidelity by a known Gaussian noise distribution on the measured data, and weighting the regularization term by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional is a random variable following a chi^2 distribution with m+p-n degrees of freedom, for model matrix G of size m x n and regularizer L of size p x n. It is proved that the result also holds for m<n when m+p >= n. A Newton root-finding algorithm is used to find the regularization parameter alpha which yields the optimal inverse covariance weighting in the case of a white noise assumption on the mapped model data. It is implemented for small-scale problems using the generalized singular value decomposition. Numerical results verify the algorithm for the case of regularizers approximating zero- to second-order derivative approximations, contrasted with the methods of generalized cross validation and unbiased predictive risk estimation. The inversion of underdetermined 2D focusing gravity data produces models with non-smooth properties, for which typical implementations in this field use the iterative minimum support (MS) stabilizer, and both the regularizer and the regularization parameter are updated each iteration. For a simulated data set with noise, the regularization parameter estimation methods for underdetermined data sets are used in this iterative framework, also contrasted with the L-curve and MDP. Experiments demonstrate the efficiency and robustness of the chi^2 principle; moreover, the L-curve and MDP are generally outperformed. Furthermore, the MS stabilizer is of general use for the chi^2 principle when implemented without knowledge of a mean value of the model.
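A minimal sketch of the chi^2 principle for choosing alpha: the minimized (whitened) Tikhonov functional is matched to its expected value m+p-n. For simplicity the sketch uses a bracketing root finder from SciPy in place of the Newton iteration and GSVD implementation described in the paper, and assumes whitened data and an illustrative search bracket.

```python
import numpy as np
from scipy.optimize import brentq

def chi2_alpha(G, L, d, bracket=(1e-8, 1e4)):
    """Pick the regularization parameter by the chi^2 principle: the minimum
    of the (whitened) Tikhonov functional should equal its expected value,
    m + p - n degrees of freedom."""
    m, n = G.shape
    p = L.shape[0]
    target = m + p - n  # expected value of the chi^2 variable (needs m+p >= n)

    def functional_minus_target(alpha):
        # x(alpha) = argmin ||G x - d||^2 + alpha^2 ||L x||^2
        A = G.T @ G + alpha**2 * (L.T @ L)
        x = np.linalg.solve(A, G.T @ d)
        J = np.sum((G @ x - d) ** 2) + alpha**2 * np.sum((L @ x) ** 2)
        return J - target  # monotone in alpha, so a root is well defined

    # The bracket is illustrative; it must straddle the root for brentq.
    return brentq(functional_minus_target, *bracket)
```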
Meta Matrix Factorization for Federated Rating Predictions ; Federated recommender systems have distinct advantages in terms of privacy protection over traditional recommender systems that are centralized at a data center. However, previous work on federated recommender systems does not fully consider the limitations of storage, RAM, energy and communication bandwidth in a mobile environment. The scales of the models proposed are too large to be easily run on mobile devices. In addition, existing federated recommender systems need to finetune recommendation models on each device, making it hard to effectively exploit collaborative filtering information among usersdevices. Our goal in this paper is to design a novel federated learning framework for rating prediction RP for mobile environments. We introduce a federated matrix factorization MF framework, named meta matrix factorization MetaMF. Given a user, we first obtain a collaborative vector by collecting useful information with a collaborative memory module. Then, we employ a meta recommender module to generate private item embeddings and an RP model based on the collaborative vector in the server. To address the challenge of generating a large number of highdimensional item embeddings, we devise a risedimensional generation strategy that first generates a lowdimensional item embedding matrix and a risedimensional matrix, and then multiplies them to obtain highdimensional embeddings. We use the generated model to produce private RPs for the given user on her device. MetaMF shows a high capacity even with a small RP model, which can adapt to the limitations of a mobile environment. We conduct extensive experiments on four benchmark datasets to compare MetaMF with existing MF methods and find that MetaMF can achieve competitive performance. Moreover, we find MetaMF achieves higher RP performance over existing federated methods by better exploiting collaborative filtering among usersdevices.
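The risedimensional generation strategy can be illustrated with a small sketch: predict a lowdimensional item table and a rise matrix from the collaborative vector, then multiply them. Module and parameter names are hypothetical and the real MetaMF architecture differs in detail:

```python
import torch
import torch.nn as nn

class RiseDimItemEmbeddings(nn.Module):
    """Sketch: instead of predicting a full (num_items x dim) embedding table from the
    collaborative vector, predict a small (num_items x low_dim) table and a
    (low_dim x dim) rise matrix, then multiply them."""

    def __init__(self, collab_dim, num_items, low_dim=8, dim=64):
        super().__init__()
        self.num_items, self.low_dim, self.dim = num_items, low_dim, dim
        self.to_low = nn.Linear(collab_dim, num_items * low_dim)   # low-dimensional table
        self.to_rise = nn.Linear(collab_dim, low_dim * dim)        # rise-dimensional matrix

    def forward(self, collab_vec):                                  # (batch, collab_dim)
        low = self.to_low(collab_vec).view(-1, self.num_items, self.low_dim)
        rise = self.to_rise(collab_vec).view(-1, self.low_dim, self.dim)
        return low @ rise                                           # (batch, num_items, dim)

# usage: item embeddings for one user, given a 32-d collaborative vector
gen = RiseDimItemEmbeddings(collab_dim=32, num_items=1000)
item_emb = gen(torch.randn(1, 32))                                  # shape (1, 1000, 64)
```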
Probabilistic cosmic web classification using fastgenerated training data ; We present a novel method of robust probabilistic cosmic web particle classification in three dimensions using a supervised machine learning algorithm. Training data was generated using a simplified $\Lambda$CDM toy model with predetermined algorithms for generating halos, filaments, and voids. While this framework is not constrained by physical modeling, it can be generated substantially more quickly than an Nbody simulation without loss in classification accuracy. For each particle in this dataset, measurements were taken of the local density field magnitude and directionality. These measurements were used to train a random forest algorithm, which was used to assign class probabilities to each particle in a $\Lambda$CDM, dark matteronly Nbody simulation with $256^3$ particles, as well as on another toy model data set. By comparing the trends in the ROC curves and other statistical metrics of the classes assigned to particles in each dataset using different feature sets, we demonstrate that the combination of measurements of the local density field magnitude and directionality enables accurate and consistent classification of halo, filament, and void particles in varied environments. We also show that this combination of training features ensures that the construction of our toy model does not affect classification. The use of a fully supervised algorithm allows greater control over the information deemed important for classification, preventing issues arising from hyperparameters and mode collapse in deep learning models. Due to the speed of training data generation, our method is highly scalable, making it particularly suited for classifying large datasets, including observed data.
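A schematic of the supervised classification step, assuming per-particle density-field features have already been measured; the placeholder data and hyperparameters are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X_train: per-particle features measured from the toy model, e.g. local density
# magnitude and directionality on several smoothing scales.
# y_train: 0 = void, 1 = filament, 2 = halo, assigned by the toy-model construction.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 6))          # placeholder feature matrix
y_train = rng.integers(0, 3, size=5000)       # placeholder labels

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# For the N-body particles, keep the full class-probability vector rather than a
# hard label, which is what makes the classification probabilistic.
X_nbody = rng.normal(size=(10, 6))            # placeholder N-body features
proba = clf.predict_proba(X_nbody)            # shape (n_particles, 3)
```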
Annihilator varieties of distinguished modules of reductive Lie algebras ; We provide a microlocal necessary condition for distinction of admissible representations of real reductive groups in the context of spherical pairs. Let $\mathbf{G}$ be a complex algebraic reductive group, and $\mathbf{H}\subset \mathbf{G}$ be a spherical algebraic subgroup. Let $\mathfrak{g},\mathfrak{h}$ denote the Lie algebras of $\mathbf{G}$ and $\mathbf{H}$, and let $\mathfrak{h}^{\bot}$ denote the annihilator of $\mathfrak{h}$ in $\mathfrak{g}^{*}$. A $\mathfrak{g}$-module is called $\mathfrak{h}$-distinguished if it admits a nonzero $\mathfrak{h}$-invariant functional. We show that the maximal $\mathbf{G}$-orbit in the annihilator variety of any irreducible $\mathfrak{h}$-distinguished $\mathfrak{g}$-module intersects $\mathfrak{h}^{\bot}$. This generalizes a result of Vogan. We apply this to CasselmanWallach representations of real reductive groups to obtain information on branching problems, translation functors and Jacquet modules. Further, we prove in many cases that, as suggested by Prasad, if H is a symmetric subgroup of a real reductive group G, the existence of a tempered Hdistinguished representation of G implies the existence of a generic Hdistinguished representation of G. Many models studied in the theory of automorphic forms involve an additive character on the unipotent radical of $\mathbf{H}$, and we devise a twisted version of our theorem that yields necessary conditions for the existence of those mixed models. Our method of proof here is inspired by the theory of W-algebras. As an application we derive necessary conditions for the existence of RankinSelberg, Bessel, Klyachko and Shalika models. Our results are compatible with the recent GanGrossPrasad conjectures for nongeneric representations. We also prove more general results that ease the sphericity assumption on the subgroup, and apply them to local theta correspondence in type II and to degenerate Whittaker models.
Deep Learning Based Single Sample Per Person Face Recognition A Survey ; Face recognition has long been an active research area in the field of artificial intelligence, particularly since the rise of deep learning in recent years. In some practical situations, each identity has only a single sample available for training. Face recognition under this situation is referred to as single sample face recognition and poses significant challenges to the effective training of deep models. Therefore, in recent years, researchers have attempted to unleash more potential of deep learning and improve the model recognition performance in the single sample situation. While several comprehensive surveys have been conducted on traditional single sample face recognition approaches, emerging deep learning based methods are rarely involved in these reviews. Accordingly, we focus on the deep learningbased methods in this paper, classifying them into virtual sample methods and generic learning methods. In the former category, virtual images or virtual features are generated to benefit the training of the deep model. In the latter one, additional multisample generic sets are used. There are three types of generic learning methods combining traditional methods and deep features, improving the loss function, and improving network structure, all of which are covered in our analysis. Moreover, we review face datasets that have been commonly used for evaluating single sample face recognition models and go on to compare the results of different types of models. Additionally, we discuss problems with existing single sample face recognition methods, including identity information preservation in virtual sample methods and domain adaptation in generic learning methods. Furthermore, we regard developing unsupervised methods as a promising future direction, and point out that the semantic gap is an important issue that needs to be further considered.
The action of the Virasoro algebra in the twodimensional Potts and loop models at generic Q ; The spectrum of conformal weights for the CFT describing the twodimensional critical Qstate Potts model or its close cousin, the dense loop model has been known for more than 30 years. However, the exact nature of the corresponding $\mathrm{Vir}\otimes\overline{\mathrm{Vir}}$ representations has remained unknown up to now. Here, we solve the problem for generic values of Q. This is achieved by a mixture of different techniques: a careful study of KooSaleur generators (arXiv:hep-th/9312156), combined with measurements of fourpoint amplitudes, on the numerical side, and OPEs and the fourpoint amplitudes recently determined using the interchiral conformal bootstrap in arXiv:2005.07258 on the analytical side. We find that nulldescendants of diagonal fields having weights $(h_{r,1},h_{r,1})$ with $r\in \mathbb{N}$ are truly zero, so these fields come with simple $\mathrm{Vir}\otimes\overline{\mathrm{Vir}}$ Kac modules. Meanwhile, fields with weights $(h_{r,s},h_{r,-s})$ and $(h_{r,-s},h_{r,s})$ with $r,s\in\mathbb{N}$ come in indecomposable but not fully reducible representations mixing four simple $\mathrm{Vir}\otimes\overline{\mathrm{Vir}}$ modules with a familiar diamond shape. The top and bottom fields in these diamonds have weights $(h_{r,-s},h_{r,-s})$, and form a twodimensional Jordan cell for $L_0$ and $\bar{L}_0$. This establishes, among other things, that the Pottsmodel CFT is logarithmic for Q generic. Unlike the case of nongeneric root of unity values of Q, these indecomposable structures are not present in finite size, but we can nevertheless show from the numerical study of the lattice model how the ranktwo Jordan cells build up in the infinitesize limit.
TUTOR Training Neural Networks Using Decision Rules as Model Priors ; The human brain has the ability to carry out new tasks with limited experience. It utilizes prior learning experiences to adapt the solution strategy to new domains. On the other hand, deep neural networks DNNs generally need large amounts of data and computational resources for training. However, this requirement is not met in many settings. To address these challenges, we propose the TUTOR DNN synthesis framework. TUTOR targets tabular datasets. It synthesizes accurate DNN models with limited available data and reduced memorycomputational requirements. It consists of three sequential steps. The first step involves generation, verification, and labeling of synthetic data. The synthetic data generation module targets both the categorical and continuous features. TUTOR generates the synthetic data from the same probability distribution as the real data. It then verifies the integrity of the generated synthetic data using a semantic integrity classifier module. It labels the synthetic data based on a set of rules extracted from the real dataset. Next, TUTOR uses two training schemes that combine synthetic and training data to learn the parameters of the DNN model. These two schemes focus on two different ways in which synthetic data can be used to derive a prior on the model parameters and, hence, provide a better DNN initialization for training with real data. In the third step, TUTOR employs a growandprune synthesis paradigm to learn both the weights and the architecture of the DNN to reduce model size while ensuring its accuracy. We evaluate the performance of TUTOR on nine datasets of various sizes. We show that in comparison to fully connected DNNs, TUTOR, on average, reduces the need for data by 5.9x, improves accuracy by 3.4%, and reduces the number of parameters (FLOPs) by 4.7x (4.3x). Thus, TUTOR enables a less datahungry, more accurate, and more compact DNN synthesis.
Sparse generative modeling via parameterreduction of Boltzmann machines application to proteinsequence families ; Boltzmann machines BM are widely used as generative models. For example, pairwise Potts models PM, which are instances of the BM class, provide accurate statistical models of families of evolutionarily related protein sequences. Their parameters are the local fields, which describe sitespecific patterns of aminoacid conservation, and the twosite couplings, which mirror the coevolution between pairs of sites. This coevolution reflects structural and functional constraints acting on protein sequences during evolution. The most conservative choice to describe the coevolution signal is to include all possible twosite couplings into the PM. This choice, typical of what is known as Direct Coupling Analysis, has been successful for predicting residue contacts in the threedimensional structure, mutational effects, and in generating new functional sequences. However, the resulting PM suffers from important overfitting effects: many couplings are small, noisy and hardly interpretable; the PM is close to a critical point, meaning that it is highly sensitive to small parameter perturbations. In this work, we introduce a general parameterreduction procedure for BMs, via a controlled iterative decimation of the less statistically significant couplings, identified by an informationbased criterion that selects either weak or statistically unsupported couplings. For several protein families, our procedure allows one to remove more than 90% of the PM couplings, while preserving the predictive and generative properties of the original dense PM, and the resulting model is far away from criticality, hence more robust to noise.
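A schematic of the decimation loop described above; the information-based significance criterion and the refitting of the Boltzmann machine between decimation steps are abstracted into a user-supplied score routine, which is an assumption of this sketch:

```python
import numpy as np

def decimate_couplings(J, score, keep_fraction=0.1, step=0.05):
    """Sketch of iterative parameter reduction for a pairwise model: repeatedly zero out
    the couplings with the lowest significance score, refitting in between.
    `J` is the coupling tensor (first two axes index site pairs); `score(mask)` is an
    assumed callable that refits the model under the current mask and returns a
    per-coupling significance score of the same shape as `mask`."""
    mask = np.ones(J.shape[:2], dtype=bool)        # which (i, j) couplings are still active
    n_total = mask.sum()
    while mask.sum() > keep_fraction * n_total:
        s = score(mask)                            # refit + per-coupling significance
        s = np.where(mask, s, np.inf)              # never re-rank already removed couplings
        n_drop = max(1, int(step * mask.sum()))
        drop = np.argsort(s, axis=None)[:n_drop]   # least significant couplings first
        mask[np.unravel_index(drop, mask.shape)] = False
    return mask                                    # sparse support of the reduced model
```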
DiffSinger Singing Voice Synthesis via Shallow Diffusion Mechanism ; Singing voice synthesis SVS systems are built to synthesize highquality and expressive singing voice, in which the acoustic model generates the acoustic features e.g., melspectrogram given a music score. Previous singing acoustic models adopt a simple loss e.g., L1 and L2 or generative adversarial network GAN to reconstruct the acoustic features, while they suffer from oversmoothing and unstable training issues respectively, which hinder the naturalness of synthesized singing. In this work, we propose DiffSinger, an acoustic model for SVS based on the diffusion probabilistic model. DiffSinger is a parameterized Markov chain that iteratively converts the noise into melspectrogram conditioned on the music score. By implicitly optimizing variational bound, DiffSinger can be stably trained and generate realistic outputs. To further improve the voice quality and speed up inference, we introduce a shallow diffusion mechanism to make better use of the prior knowledge learned by the simple loss. Specifically, DiffSinger starts generation at a shallow step smaller than the total number of diffusion steps, according to the intersection of the diffusion trajectories of the groundtruth melspectrogram and the one predicted by a simple melspectrogram decoder. Besides, we propose boundary prediction methods to locate the intersection and determine the shallow step adaptively. The evaluations conducted on a Chinese singing dataset demonstrate that DiffSinger outperforms stateoftheart SVS work. Extensional experiments also prove the generalization of our methods on texttospeech task DiffSpeech. Audio samples: https://diffsinger.github.io. Code: https://github.com/MoonInTheRiver/DiffSinger. The old title of this work: DiffSinger: Diffusion acoustic model for singing voice synthesis.
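The shallow diffusion mechanism can be sketched as follows: instead of denoising from pure noise at step T, the reverse process starts from the simple decoder's melspectrogram noised up to an intermediate step k. This is a generic DDPM-style sketch rather than the DiffSinger code; the denoiser interface and the noise-schedule indexing (with alphas_cumprod[0] = 1) are assumptions:

```python
import torch

def shallow_diffusion_inference(denoiser, simple_mel, alphas_cumprod, k):
    """Noise the simple decoder's output up to step k, then run the standard DDPM
    reverse updates from k down to 1. `denoiser(x_t, t)` is assumed to predict the
    noise component (epsilon) added at step t."""
    a_k = alphas_cumprod[k]
    x = torch.sqrt(a_k) * simple_mel + torch.sqrt(1 - a_k) * torch.randn_like(simple_mel)
    for t in range(k, 0, -1):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        beta_t = 1 - a_t / a_prev                              # per-step noise level
        eps = denoiser(x, torch.full((x.shape[0],), t, device=x.device, dtype=torch.long))
        mean = (x - beta_t / torch.sqrt(1 - a_t) * eps) / torch.sqrt(1 - beta_t)
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        x = mean + torch.sqrt(beta_t) * noise                  # sample x_{t-1}
    return x
```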
Full counting statistics for interacting trapped fermions ; We study N spinless fermions in their ground state confined by an external potential in one dimension with long range interactions of the general CalogeroSutherland type. For some choices of the potential this system maps to standard random matrix ensembles for general values of the Dyson index $\beta$. In the fermion model $\beta$ controls the strength of the interaction, with $\beta=2$ corresponding to the noninteracting case. We study the quantum fluctuations of the number of fermions ${\cal N}_{\cal D}$ in a domain ${\cal D}$ of macroscopic size in the bulk of the Fermi gas. We predict that for general $\beta$ the variance of ${\cal N}_{\cal D}$ grows as $A_\beta \log N + B_\beta$ for $N \gg 1$ and we obtain a formula for $A_\beta$ and $B_\beta$. This is based on an explicit calculation for $\beta\in\left\{1,2,4\right\}$ and on a conjecture that we formulate for general $\beta$. This conjecture further allows us to obtain a universal formula for the higher cumulants of ${\cal N}_{\cal D}$. Our results for the variance in the microscopic regime are found to be consistent with the predictions of the Luttinger liquid theory with parameter $K = 2/\beta$, and allow us to go beyond. In addition we present families of interacting fermion models in one dimension which, in their ground states, can be mapped onto random matrix models. We obtain the mean fermion density for these models for general interaction parameter $\beta$. In some cases the fermion density exhibits interesting transitions, for example we obtain a noninteracting fermion formulation of the GrossWittenWadia model.
ContextNER Contextual Phrase Generation at Scale ; Named Entity Recognition NER has seen significant progress in recent years, with numerous stateoftheart SOTA models achieving high performance. However, very few studies have focused on the generation of entities' context. In this paper, we introduce CONTEXTNER, a task that aims to generate the relevant context for entities in a sentence, where the context is a phrase describing the entity but not necessarily present in the sentence. To facilitate research in this task, we also present the EDGAR10Q dataset, which consists of annual and quarterly reports from the top 1500 publicly traded companies. The dataset is the largest of its kind, containing 1M sentences, 2.8M entities, and an average of 35 tokens per sentence, making it a challenging dataset. We propose a baseline approach that combines a phrase generation algorithm with inferencing using a 220M language model, achieving a ROUGEL score of 27 on the test split. Additionally, we perform a oneshot inference with ChatGPT, which obtains a 30 ROUGEL, highlighting the difficulty of the dataset. We also evaluate models such as T5 and BART, which achieve a maximum ROUGEL of 49 after supervised finetuning on EDGAR10Q. We also find that T5large, when prefinetuned on EDGAR10Q, achieves SOTA results on downstream finance tasks such as Headline, FPB, and FiQA SA, outperforming the vanilla version by 10.81 points. To our surprise, this 66x smaller prefinetuned model also surpasses the financespecific LLM BloombergGPT50B by 15 points. We hope that our dataset and generated artifacts will encourage further research in this direction, leading to the development of more sophisticated language models for financial text analysis.
Improving Stack Overflow question title generation with copying enhanced CodeBERT model and bimodal information ; Context: Stack Overflow is very helpful for software developers who are seeking answers to programming problems. Previous studies have shown that a growing number of questions are of low quality and thus obtain less attention from potential answerers. Gao et al. proposed an LSTMbased model (i.e., BiLSTMCC) to automatically generate question titles from the code snippets to improve the question quality. However, only using the code snippets in the question body cannot provide sufficient information for title generation, and LSTMs cannot capture the longrange dependencies between tokens. Objective: This paper proposes CCBERT, a deep learning based novel model to enhance the performance of question title generation by making full use of the bimodal information of the entire question body. Method: CCBERT follows the encoderdecoder paradigm and uses CodeBERT to encode the question body into hidden representations, a stacked Transformer decoder to generate predicted tokens, and an additional copy attention layer to refine the output distribution. Both the encoder and decoder perform the multihead selfattention operation to better capture the longrange dependencies. This paper builds a dataset containing around 200,000 highquality questions filtered from the data officially published by Stack Overflow to verify the effectiveness of the CCBERT model. Results: CCBERT outperforms all the baseline models on the dataset. Experiments on both codeonly and lowresource datasets show the superiority of CCBERT with less performance degradation. The human evaluation also shows the excellent performance of CCBERT concerning both readability and correlation criteria.
LHC Signatures of Flavoured Vector Leptoquarks ; We consider the phenomenological signatures of Simplified Models of Flavourful Leptoquarks, whose BeyondtheStandard Model SM couplings to fermion generations occur via textures that are well motivated from a broad class of ultraviolet flavour models which we briefly review. We place particular emphasis on the study of the vector leptoquark $\Delta_\mu$ with assignments $\left(\mathbf{3}, \mathbf{1}, 2/3\right)$ under the SM's gauge symmetry, $SU(3)_C \times SU(2)_L \times U(1)_Y$, which has the tantalising possibility of explaining both $\mathcal{R}_{K^{(*)}}$ and $\mathcal{R}_{D^{(*)}}$ anomalies. Upon performing global likelihood scans of the leptoquark's coupling parameter space, focusing in particular on models with treelevel couplings to a single charged lepton species, we then provide confidence intervals and benchmark points preferred by lowerenergy flavour data. Finally, we use these constraints to further evaluate the promising Large Hadron Collider LHC detection prospects of pairs of tauflavoured $\Delta_\mu$, through their distinct asymmetric decay channels. Namely, we consider direct thirdgeneration leptoquark and jets plus missingenergy searches at the LHC, which we find to be complementary. Depending on the simplified model under consideration, the direct searches constrain the $\Delta_\mu$ mass up to 1500-1770 GeV when the branching fraction of $\Delta_\mu$ is entirely to thirdgeneration quarks but are significantly reduced with decreased branching ratios to the third generation, whereas the missingenergy searches constrain the mass up to 1150-1700 GeV while being largely insensitive to the thirdgeneration branching fraction.
What is redundant and what is not Computational tradeoffs in modelling to generate alternatives for energy infrastructure deployment ; Given the urgent need to devise credible, deep strategies for carbon neutrality, approaches for modelling to generate alternatives' MGA are gaining popularity in the energy sector. Yet, MGA faces limitations when applied to stateoftheart energy system models the number of alternatives that can be generated is virtually infinite; no realistic computational effort can discover the complete technology and spatial diversity. Here, based on our own SPORES method, a highly customisable and spatiallyexplicit advancement of MGA, we empirically test different search strategies including some adapted from other MGA approaches with the aim of identifying how to minimise redundant computation. With application to a model of the European power system, we show that, for a fixed number of generated alternatives, there is a clear tradeoff in making use of the available computational power to unveil technology versus spatial diversity of system configurations. Moreover, we show that focussing on technology diversity may fail to identify system configurations that appeal to realworld stakeholders, such as those in which capacity is more spread out at the local scale. Based on this evidence that no feasible alternative can be deemed redundant a priori, we propose to initially search for options in a way that balances spatial and technology diversity; this can be achieved by combining the strengths of two different strategies. The resulting solution space can then be refined based on the feedback of stakeholders. More generally, we propose the adoption of adhoc MGA sensitivity analyses, targeted at testing a study's central claims, as a computationally inexpensive standard to improve the quality of energy modelling analyses.
Morphologypreserving Autoregressive 3D Generative Modelling of the Brain ; Human anatomy, morphology, and associated diseases can be studied using medical imaging data. However, access to medical imaging data is restricted by governance and privacy concerns, data ownership, and the cost of acquisition, thus limiting our ability to understand the human body. A possible solution to this issue is the creation of a model able to learn and then generate synthetic images of the human body conditioned on specific characteristics of relevance e.g., age, sex, and disease status. Deep generative models, in the form of neural networks, have been recently used to create synthetic 2D images of natural scenes. Still, the ability to produce highresolution 3D volumetric imaging data with correct anatomical morphology has been hampered by data scarcity and algorithmic and computational limitations. This work proposes a generative model that can be scaled to produce anatomically correct, highresolution, and realistic images of the human brain, with the necessary quality to allow further downstream analyses. The ability to generate a potentially unlimited amount of data not only enables largescale studies of human anatomy and pathology without jeopardizing patient privacy, but also significantly advances research in the field of anomaly detection, modality synthesis, learning under limited data, and fair and ethical AI. Code and trained models are available at https://github.com/AmigoLab/SynthAnatomy.
Quantification of CO2 generation in sedimentary basins through Carbonate Clays Reactions with uncertain thermodynamic parameters ; We develop a methodological framework and mathematical formulation which yields estimates of the uncertainty associated with the amounts of CO2 generated by carbonateclays reactions CCR in largescale subsurface systems to assist characterization of the main features of this geochemical process. Our approach couples a onedimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from minerals and pore water interaction. The modeling framework we propose allows i estimating the depth at which the source of gases is located and ii quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered as key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and typically used geochemical software. We also analyze the impact of the relative abundancy of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formation. Our results are conducive to the probabilistic assessment of i the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, ii the order of magnitude of the CO2 generation rate that can be associated with CCR processes.
GLFF Global and Local Feature Fusion for AIsynthesized Image Detection ; With the rapid development of deep generative models such as Generative Adversarial Networks and Diffusion models, AIsynthesized images are now of such high quality that humans can hardly distinguish them from pristine ones. Although existing detection methods have shown high performance in specific evaluation settings, e.g., on images from seen models or on images without realworld postprocessing, they tend to suffer serious performance degradation in realworld scenarios where testing images can be generated by more powerful generation models or combined with various postprocessing operations. To address this issue, we propose a Global and Local Feature Fusion GLFF framework to learn rich and discriminative representations by combining multiscale global features from the whole image with refined local features from informative patches for AIsynthesized image detection. GLFF fuses information from two branches: the global branch to extract multiscale semantic features and the local branch to select informative patches for detailed local artifacts extraction. Due to the lack of a synthesized image dataset simulating realworld applications for evaluation, we further create a challenging fake image dataset, named DeepFakeFaceForensics (DF^3), which contains 6 stateoftheart generation models and a variety of postprocessing techniques to approach the realworld scenarios. Experimental results demonstrate the superiority of our method to the stateoftheart methods on the proposed DF^3 dataset and three other opensource datasets.
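A sketch of the two-branch global/local fusion idea; the backbones, the patch-selection heuristic (highest-variance crops) and all names are assumptions standing in for the paper's informative-patch selection:

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Sketch of a global/local detector in the spirit of GLFF: one branch encodes the
    whole image, the other encodes an informative patch, and the two feature vectors
    are fused for real-vs-synthetic classification."""

    def __init__(self, backbone_global, backbone_local, feat_dim=512, n_candidates=4):
        super().__init__()
        self.g, self.l, self.n_candidates = backbone_global, backbone_local, n_candidates
        self.head = nn.Linear(2 * feat_dim, 2)          # fused features -> {real, fake}

    def select_patch(self, x, size=64):
        # crude stand-in for "informative patch" selection: sample crops, keep the
        # highest-variance (most textured) one per image
        b, _, h, w = x.shape
        crops = []
        for _ in range(self.n_candidates):
            i = torch.randint(0, h - size, (1,)).item()
            j = torch.randint(0, w - size, (1,)).item()
            crops.append(x[:, :, i:i + size, j:j + size])
        crops = torch.stack(crops, dim=1)               # (b, n, c, size, size)
        best = crops.flatten(2).var(dim=2).argmax(dim=1)
        return crops[torch.arange(b), best]

    def forward(self, x):
        f_global = self.g(x)                            # (b, feat_dim)
        f_local = self.l(self.select_patch(x))          # (b, feat_dim)
        return self.head(torch.cat([f_global, f_local], dim=1))
```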
Finetuning language models to find agreement among humans with diverse preferences ; Recent work in large language modeling LLMs has used finetuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single generic user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We finetune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues e.g., should we raise taxes on the rich, and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight finetuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best humangenerated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
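The ranking step can be sketched as follows, assuming a trained per-member reward model; the welfare function shown (the minimum over members) is one possible aggregation and not necessarily the paper's choice:

```python
import numpy as np

def rank_consensus(candidates, members, reward, welfare=np.min):
    """Sketch: a reward model predicts each member's approval of each candidate consensus
    statement, a social welfare function aggregates the per-member rewards, and the
    candidates are ranked by aggregate welfare. `reward(member, text)` is an assumed
    callable returning a scalar approval score."""
    scores = []
    for text in candidates:
        per_member = np.array([reward(m, text) for m in members])
        scores.append(welfare(per_member))           # e.g. Rawlsian minimum over the group
    order = np.argsort(scores)[::-1]                  # best consensus statement first
    return [candidates[i] for i in order]
```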
Foresight Generative Pretrained Transformer GPT for Modelling of Patient Timelines using EHRs ; Background Electronic Health Records hold detailed longitudinal information about each patient's health status and general clinical history, a large portion of which is stored within the unstructured text. Existing approaches focus mostly on structured data and a subset of singledomain outcomes. We explore how temporal modelling of patients from free text and structured data, using deep generative transformers, can be used to forecast a wide range of future disorders, substances, procedures or findings. Methods We present Foresight, a novel transformerbased pipeline that uses named entity recognition and linking tools to convert document text into structured, coded concepts, followed by providing probabilistic forecasts for future medical events such as disorders, substances, procedures and findings. We processed the entire freetext portion from three different hospital datasets totalling 811336 patients covering both physical and mental health. Findings On tests in two UK hospitals King's College Hospital, South London and Maudsley and the US MIMICIII dataset, precision@10 of 0.68, 0.76 and 0.88 was achieved for forecasting the next disorder in a patient timeline, while precision@10 of 0.80, 0.81 and 0.91 was achieved for forecasting the next biomedical concept. Foresight was also validated on 34 synthetic patient timelines by five clinicians and achieved a relevancy of 97% for the top forecasted candidate disorder. As a generative model, it can forecast followon biomedical concepts for as many steps as required. Interpretation Foresight is a generalpurpose model for biomedical concept modelling that can be used for realworld risk forecasting, virtual trials and clinical research to study the progression of disorders, simulate interventions and counterfactuals, and educational purposes.
Strong CP Problem and Symmetric Mass Solution ; We propose a novel solution to the Strong CP problem to explain why the $SU(3)$ strong force has a nearly zero theta angle $\bar\theta_3 \simeq 0$ for the 4d Standard Model SM. The new ingredient is Symmetric Mass Generation SMG: a symmetrypreserving mass or energy gap can be generated without breaking any symmetry G and without any quadratic meanfield mass deformation, as long as G is free of all perturbative local and nonperturbative global anomalies. In our first model, we propose a disordered nonmeanfield SMG gap, instead of the ordered AndersonHiggs induced mass gap, for the u quark or, generally, a set of quarks and leptons totally anomalyfree in G, generated by multifermion interactions or by dynamical disordered mass fields, absorbing $\bar\theta_3$. Another variant of this first model is the SMG gapping a hypothetical hidden full fourth family of SM fermions. In our second model, we have a chiral SM and mirror SM together to respect the NielsenNinomiya fermiondoubling and a parityreflection $\mathbb{Z}_2^{\rm PR}$ symmetry at high energy, so that $\bar\theta_3 = 0$. Then the SMG lifts only the mirror SM with a large energy gap but leaves the chiral SM at lower energy, which not only spontaneously breaks the parityreflection symmetry maximally but also relates our solution to solve another nonperturbative chiral fermion regularization problem by removing the fermion doubling. The predictive signature of both SMGbased models is that some SM fermions or mirror fermions are highly interacting beyond the conventional SM Higgs or SM gauge interactions, mediated through hypothetical direct multifermion or disordered massfield interactions.
Variational Information Pursuit for Interpretable Predictions ; There is a growing interest in the machine learning community in developing predictive algorithms that are interpretable by design. Towards this end, recent work proposes to make interpretable decisions by sequentially asking interpretable queries about data until a prediction can be made with high confidence based on the answers obtained (the history). To promote short queryanswer chains, a greedy procedure called Information Pursuit IP is used, which adaptively chooses queries in order of information gain. Generative models are employed to learn the distribution of queryanswers and labels, which is in turn used to estimate the most informative query. However, learning and inference with a full generative model of the data is often intractable for complex tasks. In this work, we propose Variational Information Pursuit VIP, a variational characterization of IP which bypasses the need for learning generative models. VIP is based on finding a query selection strategy and a classifier that minimizes the expected crossentropy between true and predicted labels. We then demonstrate that the IP strategy is the optimal solution to this problem. Therefore, instead of learning generative models, we can use our optimal strategy to directly pick the most informative query given any history. We then develop a practical algorithm by defining a finitedimensional parameterization of our strategy and classifier using deep networks and train them endtoend using our objective. Empirically, VIP is 10-100x faster than IP on different Vision and NLP tasks with competitive performance. Moreover, VIP finds much shorter query chains when compared to reinforcement learning which is typically used in sequentialdecisionmaking problems. Finally, we demonstrate the utility of VIP on challenging tasks like medical diagnosis where the performance is far superior to the generative modelling approach.
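A sketch of the resulting inference loop: a learned querier picks the next interpretable query given the answer history, and a classifier predicts from the history until it is confident. The querier/classifier interfaces and the confidence-based stopping rule are assumptions of this sketch:

```python
import torch

def sequential_inference(querier, classifier, x, queries, max_steps=10, threshold=0.9):
    """Sketch of sequential interpretable prediction: `queries` is a list of callables
    answering an interpretable question on the input x; `querier(history)` scores the
    candidate queries given the answers so far; `classifier(history)` returns class logits."""
    history = torch.zeros(len(queries))               # 0 = unanswered; otherwise the answer
    asked = torch.zeros(len(queries), dtype=torch.bool)
    probs = None
    for _ in range(max_steps):
        scores = querier(history).clone()             # one score per candidate query
        scores[asked] = -float("inf")                 # never repeat a query
        q = scores.argmax().item()
        history[q] = queries[q](x)                    # answer the chosen query on x
        asked[q] = True
        probs = classifier(history).softmax(dim=-1)
        if probs.max() >= threshold:                  # confident enough: stop early
            break
    return probs, asked                               # prediction + the short query chain used
```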
Target Specific De Novo Design of Drug Candidate Molecules with Graph Transformerbased Generative Adversarial Networks ; Discovering novel drug candidate molecules is one of the most fundamental and critical steps in drug development. Generative deep learning models, which create synthetic data given a probability distribution, have been developed with the purpose of picking completely new samples from a partially known space. Generative models offer high potential for designing de novo molecules; however, in order for them to be useful in reallife drug development pipelines, these models should be able to design targetspecific molecules, which is the next step in this field. In this study, we propose DrugGEN, for the de novo design of drug candidate molecules that interact with selected target proteins. The proposed system represents compounds and protein structures as graphs and processes them via two serially connected generative adversarial networks comprising graph transformers. DrugGEN is trained using a large dataset of compounds from ChEMBL and targetspecific bioactive molecules, to design effective and specific inhibitory molecules against the AKT1 protein, which has critical importance for developing treatments against various types of cancer. On fundamental benchmarks, DrugGEN models have either competitive or better performance against other methods. To assess the targetspecific generation performance, we conducted further in silico analysis with molecular docking and deep learningbased bioactivity prediction. Results indicate that de novo molecules have high potential for interacting with the AKT1 protein structure in the level of its native ligand. DrugGEN can be used to design completely novel and effective targetspecific drug candidate molecules for any druggable protein, given target features and a dataset of experimental bioactivities. Code base, datasets, results and trained models of DrugGEN are available at https://github.com/HUBioDataLab/DrugGEN
PatchZero ZeroShot Automatic Patch Correctness Assessment ; Automated Program Repair APR techniques have shown more and more promising results in fixing realworld bugs. Despite the effectiveness, APR techniques still face an overfitting problem a generated patch can be incorrect although it passes all tests. It is timeconsuming to manually evaluate the correctness of generated patches that can pass all tests. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. However, existing approaches require a large set of manually labeled patches as the training data. To mitigate the issue, in this study, we propose PatchZero, the patch correctness assessment by adopting large pretrained models. Specifically, for patches generated by a new or unseen APR tool, PatchZero does not need labeled patches of this new or unseen APR tool for training i.e., zeroshot but directly queries the large pretrained model to get predictions on the correctness labels without training. In this way, PatchZero can reduce the manual labeling effort when building a model to automatically assess the correctness of generated patches of new APR tools. To provide knowledge regarding the automatic patch correctness assessment APCA task to the large pretrained models, we also design an instancewise demonstration formation strategy by using contrastive learning. Specifically, PatchZero selects semantically similar patches to help the large pretrained model to give more accurate predictions on the unlabeled patches. Our experimental results showed that PatchZero can achieve an accuracy of 82.7 and an F1score of 86.0 on average although no labeled patch of the new or unseen APR tool is available. In addition, our proposed technique outperformed the prior stateoftheart by a large margin.
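The instance-wise demonstration formation idea can be sketched as follows: embed the unlabeled patch, retrieve the most similar labeled patches from other APR tools, and lay them out as in-context examples before the query. The embedding function and the prompt wording are assumptions of this sketch, not the paper's template:

```python
import numpy as np

def build_prompt(target_patch, labeled_patches, embed, k=4):
    """Select the k labeled patches most similar to the target patch (cosine similarity
    in an assumed code-embedding space) and format them as demonstrations, ending with
    the unlabeled patch for the model to complete with yes/no."""
    q = embed(target_patch["diff"])
    sims = []
    for p in labeled_patches:
        e = embed(p["diff"])
        sims.append(float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-8)))
    demos = [labeled_patches[i] for i in np.argsort(sims)[::-1][:k]]

    lines = []
    for d in demos:
        lines.append(f"Patch:\n{d['diff']}\nCorrect: {'yes' if d['label'] else 'no'}\n")
    lines.append(f"Patch:\n{target_patch['diff']}\nCorrect:")   # the model completes yes/no
    return "\n".join(lines)
```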
Signal identification without signal formulation ; When there are signals and noises, physicists try to identify signals by modeling them, whereas statisticians oppositely try to model noise to identify signals. In this study, we applied the statisticians' concept of signal detection to physics data with smallsize samples and high dimensions, without modeling the signals. Most of the data in nature, whether noises or signals, are assumed to be generated by dynamical systems; thus, there is essentially no distinction between these generating processes. We propose that the correlation length of a dynamical system and the number of samples are crucial for the practical definition of noise variables among the signal variables generated by such a system. Since variables with shortterm correlations reach normal distributions faster as the number of samples decreases, they are regarded to be "noiselike" variables, whereas variables with opposite properties are "signallike" variables. Normality tests are not effective for data of smallsize samples with high dimensions. Therefore, we modeled noises on the basis of the property of a noise variable, that is, the uniformity of the histogram of the probability that a variable is a noise. We devised a method of detecting signal variables from the structural change of the histogram according to the decrease in the number of samples. We applied our method to the data generated by a globally coupled map, which can produce time series data with different correlation lengths, and also applied it to gene expression data, which are typical static data of smallsize samples with high dimensions, and we successfully detected signal variables from them. Moreover, we verified the assumption that the gene expression data also potentially have a dynamical system as their generation model, and found that the assumption is compatible with the results of signal extraction.
Synthetic DOmainTargeted Augmentation SDOTA Improves Model Generalization in Digital Pathology ; Machine learning algorithms have the potential to improve patient outcomes in digital pathology. However, generalization of these tools is currently limited by sensitivity to variations in tissue preparation, staining procedures and scanning equipment that lead to domain shift in digitized slides. To overcome this limitation and improve model generalization, we studied the effectiveness of two Synthetic DOmainTargeted Augmentation SDOTA methods, namely CycleGANenabled Scanner Transform ST and targeted Stain Vector Augmentation SVA, and compared them against the International Color Consortium ICC profilebased color calibration ICC Cal method and a baseline method using traditional brightness, color and noise augmentations. We evaluated the ability of these techniques to improve model generalization to various tasks and settings four models, two model types tissue segmentation and cell classification, two loss functions, six labs, six scanners, and three indications hepatocellular carcinoma HCC, nonalcoholic steatohepatitis NASH, prostate adenocarcinoma. We compared these methods based on the macroaveraged F1 scores on indistribution ID and outofdistribution OOD test sets across multiple domains, and found that SDOTA methods i.e., ST and SVA led to significant improvements over ICC Cal and baseline on OOD data while maintaining comparable performance on ID data. Thus, we demonstrate that SDOTA may help address generalization due to domain shift in real world applications.
Professional Basketball Player Behavior Synthesis via Planning with Diffusion ; Dynamically planning in multiagent systems has been explored to improve decisionmaking in various domains. Professional basketball serves as a compelling example of a dynamic spatiotemporal game, encompassing both concealed strategic policies and decisionmaking. However, processing the diverse oncourt signals and navigating the vast space of potential actions and outcomes makes it difficult for existing approaches to swiftly identify optimal strategies in response to evolving circumstances. In this study, we first formulate the sequential decisionmaking process as a conditional trajectory generation process. We further introduce PLAYBEST PLAYer BEhavior SynThesis, a method for enhancing player decisionmaking. We extend the stateoftheart generative model, diffusion probabilistic model, to learn challenging multiagent environmental dynamics from historical National Basketball Association NBA player motion tracking data. To incorporate datadriven strategies, an auxiliary value function is trained using the playbyplay data with corresponding rewards acting as the plan guidance. To accomplish rewardguided trajectory generation, conditional sampling is introduced to condition the diffusion model on the value function and conduct classifierguided sampling. We validate the effectiveness of PLAYBEST via comprehensive simulation studies from realworld data, contrasting the generated trajectories and play strategies with those employed by professional basketball teams. Our results reveal that the model excels at generating highquality basketball trajectories that yield efficient plays, surpassing conventional planning techniques in terms of adaptability, flexibility, and overall performance. Moreover, the synthesized play strategies exhibit a remarkable alignment with professional tactics, highlighting the model's capacity to capture the intricate dynamics of basketball games.
Studying Large Language Model Generalization with Influence Functions ; When trying to gain better visibility into a machine learning model in order to understand and mitigate the associated risks, a potentially valuable source of evidence is: which training examples most contribute to a given behavior? Influence functions aim to answer a counterfactual: how would the model's parameters (and hence its outputs) change if a given sequence were added to the training set? While influence functions have produced insights for small models, they are difficult to scale to large language models LLMs due to the difficulty of computing an inverseHessianvector product IHVP. We use the Eigenvaluecorrected KroneckerFactored Approximate Curvature EKFAC approximation to scale influence functions up to LLMs with up to 52 billion parameters. In our experiments, EKFAC achieves similar accuracy to traditional influence function estimators despite the IHVP computation being orders of magnitude faster. We investigate two algorithmic techniques to reduce the cost of computing gradients of candidate training sequences: TFIDF filtering and query batching. We use influence functions to investigate the generalization patterns of LLMs, including the sparsity of the influence patterns, increasing abstraction with scale, math and programming abilities, crosslingual generalization, and roleplaying behavior. Despite many apparently sophisticated forms of generalization, we identify a surprising limitation: influences decay to nearzero when the order of key phrases is flipped. Overall, influence functions give us a powerful new tool for studying the generalization properties of LLMs.
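Schematically, the quantity being scaled up is an inverse-Hessian-vector product between a query gradient and a candidate training-sequence gradient; in the paper the IHVP is approximated with EK-FAC, which is abstracted here into an assumed callable:

```python
import torch

def influence_score(model, loss_fn, query_batch, train_seq, ihvp_fn):
    """Sketch of the influence-function quantity used to rank candidate training sequences:
    score = -grad_query^T H^{-1} grad_train. `ihvp_fn` is an assumed callable mapping a
    flattened gradient vector v to (an approximation of) H^{-1} v, e.g. via EK-FAC."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(batch):
        loss = loss_fn(model, batch)
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    g_query = flat_grad(query_batch)              # gradient of the query-completion loss
    g_train = flat_grad(train_seq)                # gradient of the candidate training sequence
    return -torch.dot(ihvp_fn(g_query), g_train)  # ranking score for the candidate sequence
```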
SAM Meets Robotic Surgery An Empirical Study on Generalization, Robustness and Adaptation ; The Segment Anything Model SAM serves as a fundamental model for semantic segmentation and demonstrates remarkable generalization capabilities across a wide range of downstream scenarios. In this empirical study, we examine SAM's robustness and zeroshot generalizability in the field of robotic surgery. We comprehensively explore different scenarios, including prompted and unprompted situations, bounding box and pointsbased prompt approaches, as well as the ability to generalize under corruptions and perturbations at five severity levels. Additionally, we compare the performance of SAM with stateoftheart supervised models. We conduct all the experiments with two wellknown robotic instrument segmentation datasets from MICCAI EndoVis 2017 and 2018 challenges. Our extensive evaluation results reveal that although SAM shows remarkable zeroshot generalization ability with bounding box prompts, it struggles to segment the whole instrument with pointbased prompts and unprompted settings. Furthermore, our qualitative figures demonstrate that the model either failed to predict certain parts of the instrument mask e.g., jaws, wrist or predicted parts of the instrument as wrong classes in the scenario of overlapping instruments within the same bounding box or with the pointbased prompt. In fact, SAM struggles to identify instruments in complex surgical scenarios characterized by the presence of blood, reflection, blur, and shade. Additionally, SAM is insufficiently robust to maintain high performance when subjected to various forms of data corruption. We also attempt to finetune SAM using Lowrank Adaptation LoRA and propose SurgicalSAM, which shows the capability in classwise mask prediction without prompt. Therefore, we can argue that, without further domainspecific finetuning, SAM is not ready for downstream surgical tasks.
The Theoretical MassMagnitude Relation of LowMass Stars and its Metallicity Dependence ; We investigate the dependence of theoretically generated mass-absolute magnitude relations on stellar models. Using up to date physics we compute models in the mass range $0.1 \le m \le 1\,M_\odot$. We compare the solarmetallicity models with our older models, with recent models computed by others, and also with an empirical mass-absolute magnitude relation that best fits the observed data. At a given mass below $0.6\,M_\odot$ the effective temperatures differ substantially from model to model. However, taken individually, each set of models is in good agreement with observations in the mass-luminosity plane. A minimum in the derivative $dm/dM_V$ at $M_V \approx 11.5$, which is due to $\rm H_2$ formation and the establishment of a fully convective stellar interior, is present in all photometric bands, for all models. This minimum leads to a maximum in the stellar luminosity function for Galactic disk stars at $M_V \approx 11.5$, $M_{\rm bol} \approx 9.8$. Stellar models should locate this maximum in the stellar luminosity function at the same magnitude as observations. Models which incorporate the most realistic theoretical atmospheres and the most recent equation of state and opacities can satisfy this constraint. These models are also in best agreement with the most recent luminosity-effective temperature and mass-luminosity data. Each set of our models of a given metallicity, with [Fe/H] between 0.2 and -2.3, shows a maximum in $dm/dM_{\rm bol}$, which moves to brighter bolometric magnitudes with decreasing metallicity. The change in location of the maximum, as a function of [Fe/H], follows the location of structure in luminosity functions for stellar populations with different metal abundances. This structure, seen in all observed stellar populations, can be accounted for by the mass-luminosity relation.
Standard Models and Split Supersymmetry from Intersecting Brane Orbifolds ; We construct four dimensional three generation nonsupersymmetric $SU(3)_c \times SU(2)_L \times U(1)_Y$ intersecting D6-brane models with $\nu_R$'s. At three stacks we find exactly the MSSM chiral fermion matter spectrum. At 4 and 5 stacks we find models with the massless fermion spectrum of the $N=1$ Standard Model and massive exotic nonchiral matter; these models also flow to only the SM. At 8 stacks we find MSSM-like models, with minimal massless exotics, made from two different $N=1$ sectors. Exotic triplet masses put a lower bound on the string scale of $2.79$-$2.89 \times 10^{6}$ GeV for a Higgs of 124-126 GeV. It's the first appearance of $N=0$ stringy quivers with the MSSM and matter in antisymmetric representations and perturbatively missing Yukawa couplings. The present models are based on orientifolds of $T^6/(\mathbb{Z}_3 \times \mathbb{Z}_3)$ compactifications of IIA theory based on the torus lattice AAA; all complex moduli are fixed by the orbifold symmetry. We also present the spectrum rules and GS anomaly cancellation for the ABB lattice. Moreover, we point out the relevance of intersecting D6-brane constructions, such as those presented here, to ideas related to the existence of split supersymmetry in nature. In this context we present nonsusy models with only the SM matter and also MSSM-matter dominated models, with massive gauginos and light higgsinos, that achieve the correct supersymmetric GUT value for the Weinberg angle $\sin^2\theta_W = \frac{3}{8}$ at a string scale in the range $5 \cdot 10^{13}\,{\rm GeV} \le M_S \le 1.4 \cdot 10^{17}\,{\rm GeV}$. It appears that if only the SM survives at low energy the unification scale is preserved at $5.03 \times 10^{13}$ GeV when $n_H = 1, 3, 6$. These models support the existence of the split supersymmetry scenario in string theory.
Methodological Issues in Building, Training, and Testing Artificial Neural Networks ; We review the use of artificial neural networks, particularly the feedforward multilayer perceptron with backpropagation for training MLP, in ecological modelling. Overtraining on data or giving vague references to how it was avoided is the major problem. Various methods can be used to determine when to stop training in artificial neural networks: (1) early stopping based on crossvalidation, (2) stopping after an analystdefined error is reached or after the error levels off, (3) use of a test data set. We do not recommend the third method as the test data set is then not independent of model development. Many studies used the testing data to optimize the model and training. Although this method may give the best model for that set of data, it does not give generalizability or improve understanding of the study system. The importance of an independent data set cannot be overemphasized, as we found dramatic differences in model accuracy assessed with prediction accuracy on the training data set, as estimated with bootstrapping, and from use of an independent data set. The comparison of the artificial neural network with a general linear model GLM as a standard procedure is recommended because a GLM may perform as well as or better than the MLP. MLP models should not be treated as black box models; instead techniques such as sensitivity analyses, input variable relevances, neural interpretation diagrams, randomization tests, and partial derivatives should be used to make the model more transparent, and further our ecological understanding, which is an important goal of the modelling process. Based on our experience we discuss how to build a MLP model and how to optimize the parameters and architecture.
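A minimal sketch of method (1), early stopping based on crossvalidation, with the test set kept aside for the final, independent assessment; the callables for a training step and for the validation error are assumptions of this sketch:

```python
import numpy as np

def train_with_early_stopping(init_weights, grad_step, val_error, max_epochs=500, patience=20):
    """Keep the weights that minimize the error on a held-out validation set and stop
    when it has not improved for `patience` epochs. `grad_step(weights)` performs one
    backpropagation pass; `val_error(weights)` returns the error on data not used for
    weight updates. The untouched test set is used only once, after training."""
    weights = init_weights
    best_w, best_err, since_best = weights, np.inf, 0
    for epoch in range(max_epochs):
        weights = grad_step(weights)            # one pass of backpropagation training
        err = val_error(weights)                # validation error of the current weights
        if err < best_err:
            best_w, best_err, since_best = weights, err, 0
        else:
            since_best += 1
        if since_best >= patience:              # validation error stopped improving
            break
    return best_w
```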
Effective growth of matter density fluctuations in the running LCDM and LXCDM models ; We investigate the matter density fluctuations $\delta\rho/\rho$ for two dark energy DE models in the literature in which the cosmological term $\Lambda$ is a running parameter. In the first model, the running LCDM model, matter and DE exchange energy, whereas in the second model, the LXCDM model, the total DE and matter components are conserved separately. The LXCDM model was proposed as an interesting solution to the cosmic coincidence problem. It includes an extra dynamical component, the cosmon X, which interacts with the running $\Lambda$, but not with matter. In our analysis we make use of the current value of the linear bias parameter, $b^2(0) \equiv P_{GG}/P_{MM}$, where $P_{MM} \propto \langle(\delta\rho/\rho)^2\rangle$ is the present matter power spectrum and $P_{GG}$ is the galaxy fluctuation power spectrum. The former can be computed within a given model, and the latter is found from the observed LSS data at small z obtained by the 2dF galaxy redshift survey. It is found that $b^2(0) \simeq 1$ within a 10% accuracy for the standard LCDM model. Adopting this limit for any DE model and using a method based on the effective equation of state for the DE, we can set a limit on the growth of matter density perturbations for the running LCDM model, the solution of which is known. This provides a good test of the procedure, which we then apply to the LXCDM model in order to determine the physical region of parameter space, compatible with the LSS data. In this region, the LXCDM model is consistent with known observations and provides at the same time a viable solution to the cosmic coincidence problem.
Modelling the spring ozone maximum and the interhemispheric asymmetry in the remote marine boundary layer 1. Comparison with surface and ozonesonde measurements ; Here we report a modelling study of the spring ozone maximum and its interhemispheric asymmetry in the remote marine boundary layer MBL. The modelled results are examined at the surface and on a series of timeheight cross sections at several locations spread over the Atlantic, the Indian, and the Pacific Oceans. Comparison of model with surface measurements at remote MBL stations indicate a close agreement. The most striking feature of the hemispheric spring ozone maximum in the MBL can be most easily identified at the NH sites of Westman Island, Bermuda, and Mauna Loa, and at the SH site of Samoa. Modelled ozone vertical distributions in the troposphere are compared with ozone profiles. For the Atlantic and the Indian sites, the model generally produces a hemispheric spring ozone maximum close to those of the measurements. The model also produces a spring ozone maximum in the northeastern and tropical north Pacific close to those measurements, and at sites in the NH high latitudes. The good agreement between model and measurements indicate that the model can reproduce the proposed mechanisms responsible for producing the spring ozone maximum in these regions of the MBL, lending confidence in the use of the model to investigate MBL ozone chemistry see part 2 and part 3. The spring ozone maximum in the tropical central south Pacific and eastern equatorial Pacific are less well reproduced by the model, indicating that both the transport of O3 precursors from biomass burning emissions taking place in southeastern Asia, Australia, Oceania, southern Africa, and South America are not well represented in the model in these regions. Overall, the model produces a better simulation at sites where the stratosphere and biomass burning emissions are the major contributors.
Nearinfrared integrated spectra of Galactic globular clusters testing simple stellar population models ; We present SOAROSIRIS crossdispersed NIR integrated spectra of 12 Galactic globular clusters that are employed to test Maraston 2005, M05 NIR EPS models, and to provide spectral observational constraints to calibrate future models. We measured the equivalent widths EWs of the most prominent NIR absorption features. Optical EWs were also measured. The globular cluster EWs were compared with model predictions with ages within 4-15 Gyr, and metallicities between 1/200 and 2 Zsun. Observed integrated colours were also compared with models. The NIR integrated spectra among our sample appear qualitatively similar in most of the absorption features. The M05 models can properly predict the optical EWs observed in globular clusters. Regarding the NIR, they do underestimate the strength of Mg I 1.49 μm, but they can reproduce the observed EWs of Fe I 1.58 μm, Si I 1.59 μm, and CO 2.29 μm, in about half of our sample. The remaining objects require the inclusion of intermediateage populations. Thus, we suggest that the presence of C and Orich stars in models is important to reproduce the observed strengths of metallic lines. Another possibility is the lack of alphaenhancement in the models. In the case of the optical and NIR Fe I lines, standard models and those that include blue horizontal branch stars produce similar results. A similar trend is observed for Na I 5895 Å, while in the case of the Gband, the models with blue horizontal branch do describe better the observations. For most of the sample the optical to NIR colours are well described by the M05 models. In general, M05 models can provide reliable information on the NIR stellar population of galaxies, but only when EWs and colours are taken together; in other words, EWs and continuum fluxes should be simultaneously fitted. However, the results should be taken with caution, since the models tend to predict results biased towards young ages.
Quantitative Genetics and FunctionalStructural Plant Growth Models Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization ; Background and Aims Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype x environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functionalstructural growth model, which gives access to more fundamental traits for quantitative trait loci QTL detection and thus to promising tools for yield optimization. Methods The GreenLab model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the speciesspecific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. Key Results and Conclusions By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared in the case of phenotypic traits such as cob weight and when traits were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters and the corresponding genotype associated with yield optimization of a GreenLab maize model. The paper discusses the potentials of GreenLab to represent environment x genotype interactions, in particular through its main state variable, the ratio of biomass supply over demand.
When the optimal is not the best parameter estimation in complex biological models ; Background The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under datapoor conditions may result in biologically implausible values. Results We discuss different methodological approaches to estimate parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. Conclusions The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid forcefitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system, and point to the need of a theory that addresses this problem more generally.
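Because the abstract's central point (a local optimizer can land in different, equally plausible minima of a rugged cost surface) is easy to demonstrate in miniature, a toy sketch is given below. It uses a synthetic two-frequency fitting problem and scipy, and is purely illustrative; it has nothing to do with the actual avascular tumor model or the Blue Gene study.

```python
# Toy illustration of a rugged least-squares landscape: the fitted parameters depend
# strongly on the starting guess, so the fit that minimizes the cost function need not
# be the meaningful one. The model and data below are synthetic placeholders.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 60)
true = (1.0, 5.0)
data = (np.sin(true[0] * t) + 0.3 * np.sin(true[1] * t)
        + np.random.default_rng(2).normal(0, 0.05, t.size))

def residuals(p):
    return np.sin(p[0] * t) + 0.3 * np.sin(p[1] * t) - data

# Different starting points converge to different local minima.
for start in [(0.8, 4.5), (2.0, 7.0), (0.2, 1.0)]:
    fit = least_squares(residuals, start)
    print(start, "->", np.round(fit.x, 3), "cost =", round(float(fit.cost), 3))
```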
Towards a Better Understanding of Large Scale Network Models ; Connectivity and capacity are two fundamental properties of wireless multihop networks. The scalability of these properties has been a primary concern for which asymptotic analysis is a useful tool. Three related but logically distinct network models are often considered in asymptotic analyses, viz. the dense network model, the extended network model and the infinite network model, which consider respectively a network deployed in a fixed finite area with a sufficiently large node density, a network deployed in a sufficiently large area with a fixed node density, and a network deployed in $\mathbb{R}^2$ with a sufficiently large node density. The infinite network model originated from continuum percolation theory and asymptotic results obtained from the infinite network model have often been applied to the dense and extended networks. In this paper, through two case studies related to network connectivity on the expected number of isolated nodes and on the vanishing of components of finite order k > 1 respectively, we demonstrate some subtle but important differences between the infinite network model and the dense and extended network models. Therefore extra scrutiny has to be used in order for the results obtained from the infinite network model to be applicable to the dense and extended network models. Asymptotic results are also obtained on the expected number of isolated nodes, the vanishingly small impact of the boundary effect on the number of isolated nodes and the vanishing of components of finite order k > 1 in the dense and extended network models using a generic random connection model.
Multicanonical simulation of the DombJoyce model and the Gō model: new enumeration methods for selfavoiding walks ; We develop statistical enumeration methods for selfavoiding walks using a powerful sampling technique called the multicanonical Monte Carlo method. Using these methods, we estimate the numbers of the two dimensional N-step selfavoiding walks up to N = 256 with statistical errors. The developed methods are based on statistical mechanical models of paths which include selfavoiding walks. The criterion for selecting a suitable model for enumerating selfavoiding walks is whether or not the configuration space of the model includes a set for which the number of the elements can be exactly counted. We call this set a scale fixing set. We selected the following two models which satisfy the criterion: the Gō model for lattice proteins and the DombJoyce model for generalized random walks. There is a contrast between these two models in the structures of the configuration space. The configuration space of the Gō model is defined as the universal set of selfavoiding walks, and the set of the ground state conformation provides a scale fixing set. On the other hand, the configuration space of the DombJoyce model is defined as the universal set of random walks which can be used as a scale fixing set, and the set of the ground state conformation is the same as the universal set of selfavoiding walks. From the perspective of enumeration performance, we conclude that the DombJoyce model is the better of the two. The reason for the performance difference is partly explained by the existence of the firstorder phase transition of the Gō model.
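For reference, the quantity being estimated, the number of N-step self-avoiding walks, can be counted exactly by brute-force backtracking for small N. The sketch below is only that illustrative exact count, not the multicanonical Monte Carlo procedure described above, and it becomes infeasible long before N = 256.

```python
# Brute-force exact count of N-step self-avoiding walks (SAWs) on the square lattice.
# Illustrative only: cost grows roughly as 2.64**N, so this is feasible only for small N.

def count_saws(n_steps):
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    visited = {(0, 0)}

    def extend(x, y, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:        # self-avoidance constraint
                visited.add(nxt)
                total += extend(nxt[0], nxt[1], remaining - 1)
                visited.remove(nxt)
        return total

    return extend(0, 0, n_steps)

if __name__ == "__main__":
    # Known exact values start 4, 12, 36, 100, 284, ...
    for n in range(1, 9):
        print(n, count_saws(n))
```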
The Myogenic Response in Isolated Rat Cerebrovascular Arteries: Smooth Muscle Cell Model ; Previous models of the cerebrovascular smooth muscle cell have not addressed the interaction between the electrical, chemical and mechanical components of cell function during the development of active tension. These models are primarily electrical, biochemical or mechanical in their orientation, and do not permit a full exploration of how the smooth muscle responds to electrical or mechanical forcing. To address this issue, we have developed a new model that consists of two major components: electrochemical and chemomechanical subsystems of the cell. Included in the electrochemical model are models of the electrophysiological behavior of the cell membrane, fluid compartments, Ca2+ release and uptake by the sarcoplasmic reticulum, and cytosolic Ca2+ buffering, particularly by calmodulin. With this subsystem model, we can study the mechanics of the production of the intracellular Ca2+ transient in response to membrane voltage clamp pulses. The chemomechanical model includes models of (a) the chemical kinetics of myosin phosphorylation, and the formation of phosphorylated myosin crossbridges with actin, as well as attached latchtype crossbridges; and (b) a model of force generation and mechanical coupling to the contractile filaments and their attachments to protein structures and the skeletal framework of the cell. The two subsystem models are tested independently and compared with data. Likewise, the complete combined cell model responses to voltage pulse stimulation under isometric and isotonic conditions are calculated and compared with measured single cell lengthforce and forcevelocity data obtained from literature. This integrated cell model provides biophysicallybased explanations of electrical, chemical and mechanical phenomena in cerebrovascular smooth muscle, and has considerable utility as an adjunct to laboratory research and experimental design.
Dynamical Modeling of NGC 6809: Selecting the best model using Bayesian Inference ; The precise cosmological origin of globular clusters remains uncertain, a situation hampered by the struggle of observational approaches in conclusively identifying the presence, or not, of dark matter in these systems. In this paper, we address this question through an analysis of the particular case of NGC 6809. While previous studies have performed dynamical modeling of this globular cluster using a small number of available kinematic data, they did not perform appropriate statistical inference tests for the choice of best model description; such statistical inference for model selection is important since, in general, different models can result in significantly different inferred quantities. With the latest kinematic data, we use Bayesian inference tests for model selection and thus obtain the best fitting models, as well as mass and dynamic masstolight ratio estimates. For this, we introduce a new likelihood function that provides more constrained distributions for the defining parameters of dynamical models. Initially we consider models with a known distribution function, and then model the cluster using solutions of the spherically symmetric Jeans equation; this latter approach depends upon the mass density profile and the anisotropy parameter $\beta$. In order to find the best description for the cluster we compare these models by calculating their Bayesian evidence. We find smaller mass and dynamic masstolight ratio values than previous studies, with the best fitting Michie model for a constant masstolight ratio of $\Upsilon = 0.90^{+0.14}_{-0.14}$ and $M_{\rm dyn} = 6.10^{+0.51}_{-0.88} \times 10^{4}\, M_\odot$. We exclude the significant presence of dark matter throughout the cluster, showing that no physically motivated distribution of dark matter can be present away from the cluster core.
Modelling galaxy clustering: halo occupation distribution versus subhalo matching ; We model the luminositydependent projected and redshiftspace twopoint correlation functions 2PCFs of the Sloan Digital Sky Survey SDSS DR7 Main galaxy sample, using the halo occupation distribution HOD model and the subhalo abundance matching SHAM model and its extension. All the models are built on the same highresolution Nbody simulations. We find that the HOD model generally provides the best performance in reproducing the clustering measurements in both projected and redshift spaces. The SHAM model with the same halogalaxy relation for central and satellite galaxies (or distinct haloes and subhaloes), when including scatters, has a bestfitting $\chi^2/{\rm dof}$ around 2-3. We therefore extend the SHAM model to the subhalo clustering and abundance matching SCAM by allowing the central and satellite galaxies to have different galaxyhalo relations. We infer the corresponding halosubhalo parameters by jointly fitting the galaxy 2PCFs and abundances and consider subhaloes selected based on three properties, the mass $M_{\rm acc}$ at the time of accretion, the maximum circular velocity $V_{\rm acc}$ at the time of accretion, and the peak maximum circular velocity $V_{\rm peak}$ over the history of the subhaloes. The three subhalo models work well for luminous galaxy samples with luminosity above $L^*$. For lowluminosity samples, the $V_{\rm acc}$ model stands out in reproducing the data, with the $V_{\rm peak}$ model slightly worse, while the $M_{\rm acc}$ model fails to fit the data. We discuss the implications of the modeling results.
INFaaS A Modelless and Managed Inference Serving System ; Despite existing work in machine learning inference serving, easeofuse and cost efficiency remain challenges at large scales. Developers must manually search through thousands of modelvariants versions of alreadytrained models that differ in hardware, resource footprints, latencies, costs, and accuracies to meet the diverse application requirements. Since requirements, query load, and applications themselves evolve over time, these decisions need to be made dynamically for each inference query to avoid excessive costs through naive autoscaling. To avoid navigating through the large and complex tradeoff space of modelvariants, developers often fix a variant across queries, and replicate it when load increases. However, given the diversity across variants and hardware platforms in the cloud, a lack of understanding of the tradeoff space can incur significant costs to developers. This paper introduces INFaaS, a managed and modelless system for distributed inference serving, where developers simply specify the performance and accuracy requirements for their applications without needing to specify a specific modelvariant for each query. INFaaS generates modelvariants, and efficiently navigates the large tradeoff space of modelvariants on behalf of developers to meet applicationspecific objectives a for each query, it selects a model, hardware architecture, and model optimizations, b it combines VMlevel horizontal autoscaling with modellevel autoscaling, where multiple, different modelvariants are used to serve queries within each machine. By leveraging diverse variants and sharing hardware resources across models, INFaaS achieves 1.3x higher throughput, violates latency objectives 1.6x less often, and saves up to 21.6x in cost 8.5x on average compared to stateoftheart inference serving systems on AWS EC2.
Machine Learning Models to Predict Inhibition of the Bile Salt Export Pump ; Druginduced liver injury DILI is the most common cause of acute liver failure and a frequent reason for withdrawal of candidate drugs during preclinical and clinical testing. An important type of DILI is cholestatic liver injury, caused by buildup of bile salts within hepatocytes; it is frequently associated with inhibition of bile salt transporters, such as the bile salt export pump BSEP. Reliable in silico models to predict BSEP inhibition directly from chemical structures would significantly reduce costs during drug discovery and could help avoid injury to patients. Unfortunately, models published to date have been insufficiently accurate to encourage wide adoption. We report our development of classification and regression models for BSEP inhibition with substantially improved performance over previously published models. Our model development leveraged the ATOM Modeling PipeLine AMPL developed by the ATOM Consortium, which enabled us to train and evaluate thousands of candidate models. In the course of model development, we assessed a variety of schemes for chemical featurization, dataset partitioning and class labeling, and identified those producing models that generalized best to novel chemical entities. Our best performing classification model was a neural network with ROC AUC 0.88 on our internal test dataset and 0.89 on an independent external compound set. Our best regression model, the first ever reported for predicting BSEP IC50s, yielded a test set R2 0.56 and mean absolute error 0.37, corresponding to a mean 2.3fold error in predicted IC50s, comparable to experimental variation. These models will thus be useful as inputs to mechanistic predictions of DILI and as part of computational pipelines for drug discovery.
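The headline classification metric can be reproduced in outline with standard tooling. The snippet below is a generic sketch of training and scoring a binary inhibitor/non-inhibitor classifier by ROC AUC; the features, labels and model choice are placeholders, not the BSEP dataset or the AMPL pipeline.

```python
# Generic evaluation sketch: ROC AUC for a binary inhibition classifier.
# All data below are synthetic stand-ins for chemical feature vectors and labels.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))                       # stand-in featurized compounds
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]               # predicted probability of "inhibitor"
print("test ROC AUC:", roc_auc_score(y_te, scores))
```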
Generalization of the Elastic Network model for the study of large conformational changes in biomolecules ; The elastic network EN is a prime model that describes the longtime dynamics of biomolecules. However, the use of harmonic potentials renders this model insufficient for studying large conformational changes of proteins e.g. stretching of proteins, folding and thermal unfolding. Here, we extend the capabilities of the EN model by using a harmonic approximation described by LennardJones LJ interactions for far contacts and native contacts obtained from the standard overlap criterion as in the case of Golike models. While our model is validated against the EN model by reproducing the equilibrium properties for a number of proteins, we also show that the model is suitable for the study of large conformation changes by providing various examples. In particular, this is illustrated on the basis of pulling simulations that predict with high accuracy the experimental data on the rupture force of the studied proteins. Furthermore, in the case of DDFLN4 protein, our pulling simulations highlight the advantages of our model with respect to Golike approaches, where the latter fail to reproduce previous results obtained by allatom simulations that predict an additional characteristic peak for this protein. In addition, folding simulations of small peptides yield different folding times for alphahelix and betahairpin, in agreement with experiment, in this way providing further opportunities for the application of our model in studying large conformational changes of proteins. In contrast to the EN model, our model is suitable for both normal mode analysis and molecular dynamics simulation. We anticipate that the proposed model will find applications in a broad range of problems in biology, including, among others, protein folding and thermal unfolding.
HumanLike Autonomous CarFollowing Model with Deep Reinforcement Learning ; This study proposes a framework for humanlike autonomous carfollowing planning based on deep reinforcement learning deep RL. Historical driving data are fed into a simulation environment where an RL agent learns from trial and error interactions based on a reward function that signals how much the agent deviates from the empirical data. Through these interactions, an optimal policy, or carfollowing model that maps in a humanlike way from speed, relative speed between a lead and following vehicle, and intervehicle spacing to acceleration of a following vehicle is finally obtained. The model can be continuously updated when more data are fed in. Two thousand carfollowing periods extracted from the 2015 Shanghai Naturalistic Driving Study were used to train the model and compare its performance with that of traditional and recent datadriven carfollowing models. As shown by this study's results, a deep deterministic policy gradient carfollowing model that uses disparity between simulated and observed speed as the reward function and considers a reaction delay of 1 s, denoted as DDPGvRT, can reproduce humanlike carfollowing behavior with higher accuracy than traditional and recent datadriven carfollowing models. Specifically, the DDPGvRT model has a spacing validation error of 18% and a speed validation error of 5%, which are less than those of other models, including the intelligent driver model, models based on locally weighted regression, and conventional neural networkbased models. Moreover, the DDPGvRT demonstrates good capability of generalization to various driving situations and can adapt to different drivers by continuously learning. This study demonstrates that reinforcement learning methodology can offer insight into driver behavior and can contribute to the development of humanlike autonomous driving algorithms and trafficflow models.
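The reward signal described, which penalises the disparity between simulated and empirically observed following-vehicle speed, can be stated concretely. The functions below are a hedged illustration of that idea; the names, normalisation constant and time step are assumptions, not the paper's exact formulation.

```python
# Illustrative reward for a speed-disparity car-following RL agent (DDPG-style).
# The exact functional form and scaling used in the study are not reproduced here.

def speed_disparity_reward(v_sim, v_obs, v_scale=10.0):
    """Reward is highest when simulated speed matches the observed speed.

    v_sim   : following-vehicle speed produced by the simulation (m/s)
    v_obs   : empirically observed speed at the same timestamp (m/s)
    v_scale : assumed normalisation constant (m/s)
    """
    return -abs(v_sim - v_obs) / v_scale

def step_following_vehicle(v, a, dt=0.1):
    """Minimal kinematic speed update; a 1 s reaction delay at dt = 0.1 s would
    correspond to acting on the observation from 10 steps earlier (handled by the caller)."""
    return max(0.0, v + a * dt)
```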
The Quality of the Covariance Selection Through Detection Problem and AUC Bounds ; We consider the problem of quantifying the quality of a model selection problem for a graphical model. We discuss this by formulating the problem as a detection problem. Model selection problems usually minimize a distance between the original distribution and the model distribution. For the special case of Gaussian distributions, the model selection problem simplifies to the covariance selection problem which is widely discussed in literature by Dempster [2], where the likelihood criterion is maximized or equivalently the KullbackLeibler KL divergence is minimized to compute the model covariance matrix. While this solution is optimal for Gaussian distributions in the sense of the KL divergence, it is not optimal when compared with other information divergences and criteria such as Area Under the Curve AUC. In this paper, we analytically compute upper and lower bounds for the AUC and discuss the quality of model selection problem using the AUC and its bounds as an accuracy measure in detection problem. We define the correlation approximation matrix CAM and show that analytical computation of the KL divergence, the AUC and its bounds only depend on the eigenvalues of CAM. We also show the relationship between the AUC, the KL divergence and the ROC curve by optimizing with respect to the ROC curve. In the examples provided, we pick tree structures as the simplest graphical models. We perform simulations on fullyconnected graphs and compute the tree structured models by applying the widely used ChowLiu algorithm [3]. Examples show that the quality of tree approximation models is not good in general based on information divergences, the AUC and its bounds when the number of nodes in the graphical model is large. We show both analytically and by simulations that 1 - AUC for the tree approximation model decays exponentially as the dimension of the graphical model increases.
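For the Gaussian case discussed here, the KL criterion has a closed form that depends only on the spectrum of the ratio of the two covariance matrices, in the spirit of the eigenvalue-based analysis of the correlation approximation matrix. The snippet below evaluates that standard zero-mean Gaussian KL divergence; it is not a reconstruction of the paper's CAM or AUC bounds.

```python
# KL divergence between two zero-mean Gaussians N(0, S_p) and N(0, S_q):
#   KL(p || q) = 0.5 * ( tr(S_q^{-1} S_p) - d + ln det(S_q) - ln det(S_p) )
# which depends only on the eigenvalues of S_q^{-1} S_p.
import numpy as np

def gaussian_kl(S_p, S_q):
    d = S_p.shape[0]
    M = np.linalg.solve(S_q, S_p)                 # S_q^{-1} S_p
    eig = np.linalg.eigvals(M).real               # real and positive for PSD inputs
    return 0.5 * (eig.sum() - d - np.log(eig).sum())

# Toy usage: correlated 2x2 covariance against the identity.
S_p = np.array([[1.0, 0.5], [0.5, 1.0]])
print(gaussian_kl(S_p, np.eye(2)))                # ~0.144
```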
HighPerformance and Distributed Computing in a Probabilistic Finite Element Comparison Study of the Human Lower Leg Model with Total Knee Replacement ; Reliability theory is used to assess the sensitivity of a passive flexion and active flexion of the human lower leg Finite Element FE models with Total Knee Replacement TKR to the variability in the input parameters of the respective FE models. The sensitivity of the active flexion simulating the stair ascent of the human lower leg FE model with TKR was presented before in [1,2], whereas now in this paper a comparison is made with the passive flexion of the human lower leg FE model with TKR. First, with the Monte Carlo Simulation Technique MCST, a number of randomly generated input data of the FE models are obtained based on the normal standard deviations of the respective input parameters. Then a series of FE simulations are done and the output kinematics and peak contact pressures are obtained for the respective FE models (passive flexion and/or active flexion models). Seven output performance measures are reported for the passive flexion model and one more parameter was reported for the active flexion FE model (patellofemoral peak contact pressure) in [1]. A sensitivity study will be performed based on the Response Surface Method RSM to identify the key parameters that influence the kinematics and peak contact pressures of the passive flexion FE model. Another two MCST and RSMbased probabilistic FE analyses will be performed based on a reduced list of 19 key input parameters. In total 4 probabilistic FE analyses will be performed: 2 probabilistic FE analyses (MCST and RSM) based on an extended set of 78 input variables and another 2 probabilistic FE analyses (MCST and RSM) based on a reduced set of 19 input variables. Due to the likely computation cost in order to make hundreds of FE simulations with MCST, a highperformance and distributed computing system will be used for the passive flexion FE model, the same as it was used for the active flexion FE model in [1].
Hybrid reactiondiffusion and clockandwavefront model for the arrest of oscillations in the somitogenesis segmentation clock ; The clock and wavefront paradigm is arguably the most widely accepted model for explaining the embryonic process of somitogenesis. According to this model, somitogenesis is based upon the interaction between a genetic oscillator, known as segmentation clock, and a differentiation wavefront, which provides the positional information indicating where each pair of somites is formed. Shortly after the clock and wavefront paradigm was introduced, Meinhardt presented a conceptually different mathematical model for morphogenesis in general, and somitogenesis in particular. Recently, Cotterell et al. rediscovered an equivalent model by systematically enumerating and studying small networks performing segmentation. Cotterell et al. called it a progressive oscillatory reactiondiffusion PORD model. In the Meinhardt PORD model, somitogenesis is driven by shortrange interactions and the posterior movement of the front is a local, emergent phenomenon, which is not controlled by global positional information. With this model, it is possible to explain some experimental observations that are incompatible with the clock and wavefront model. However the MeinhardtPORD model has some important disadvantages of its own. Namely, it is quite sensitive to fluctuations and depends on very specific initial conditions which are not biologically realistic. In this work, we propose an equivalent MeinhardtPORD model, and then amend it to couple it with a wavefront consisting of a receding morphogen gradient. By doing so, we get a hybrid model between the MeinhardtPORD and the clockandwavefront ones, which overcomes most of the deficiencies of the two originating models.
$U_{\mathfrak{q}}(\mathfrak{sl}_3)$ web models: Locality, phase diagram and geometrical defects ; We continue investigating the generalisations of geometrical statistical models introduced in [13], in the form of models of webs on the hexagonal lattice H having a $U_q(\mathfrak{sl}_n)$ quantum group symmetry. We focus here on the $n=3$ case of cubic webs, based on the Kuperberg $A_2$ spider, and illustrate its properties by comparisons with the wellknown dilute loop model (the $n=2$ case) throughout. A local vertexmodel reformulation is exhibited, analogous to the correspondence between the loop model and a threestate vertex model. The $n=3$ representation uses seven states per link of H, displays explicitly the geometrical content of the webs and their $U_q(\mathfrak{sl}_3)$ symmetry, and permits us to study the model on a cylinder via a local transfer matrix. A numerical study of the central charge reveals that for each $q \in \mathbb{C}$ in the critical regime, $|q| = 1$, the web model possesses a dense and a dilute critical point, just like its loop model counterpart. In the dense $q = e^{i\pi/4}$ case, the $n=3$ webs can be identified with spin interfaces of the critical threestate Potts model defined on the triangular lattice dual to H. We also provide another mapping to a $\mathbb{Z}_3$ spin model on H itself, using a hightemperature expansion. We then discuss the sector structure of the transfer matrix, for generic q, and its relation to defect configurations in both the strip and the cylinder geometries. These defects define the finitesize precursors of electromagnetic operators. This discussion paves the road for a Coulomb gas description of the conformal properties of defect webs, which will form the object of a subsequent paper. Finally, we identify the fractal dimension of critical webs in the $q = e^{i\pi/3}$ case, which is the $n=3$ analogue of the polymer limit in the loop model.
Protein Models Comparator: Scalable Bioinformatics Computing on the Google App Engine Platform ; The comparison of computer generated protein structural models is an important element of protein structure prediction. It has many uses including model quality evaluation, selection of the final models from a large set of candidates or optimisation of parameters of energy functions used in templatefree modelling and refinement. Although many protein comparison methods are available online on numerous web servers, they are not well suited for large scale model comparison: (1) they operate with methods designed to compare actual proteins, not the models of the same protein; (2) the majority of them offer only a single pairwise structural comparison and are unable to scale up to a required order of thousands of comparisons. To bridge the gap between the protein and model structure comparison we have developed the Protein Models Comparator pmcmp. To be able to deliver the scalability on demand and handle large comparison experiments the pmcmp was implemented in the cloud. Protein Models Comparator is a scalable web application for a fast distributed comparison of protein models with RMSD, GDT TS, TMscore and Qscore measures. It runs on the Google App Engine GAE cloud platform and is a showcase of how the emerging PaaS Platform as a Service technology could be used to simplify the development of scalable bioinformatics services. The functionality of pmcmp is accessible through an API which allows a full automation of the experiment submission and results retrieval. Protein Models Comparator is free software released on the Affero GNU Public Licence and is available with its source code at http://www.infobiotics.org/pmcmp. This article presents a new web application addressing the need for a largescale modelspecific protein structure comparison and provides an insight into the GAE Google App Engine platform and its usefulness in scientific computing.
Supersymmetric Dark Matter candidates in light of constraints from collider and astroparticle observables ; The Standard Model of particle physics has been strengthened by the recent discovery of the longawaited Higgs boson. The standard cosmological model has met the challenge of the high precision observations in cosmology and astroparticle physics. However, these two standard models both face several theoretical issues, such as the naturalness problem in the Higgs sector of the Standard Model, as well as observational issues, in particular the fact that an unknown kind of matter called Dark Matter accounts for the majority of the matter content in our Universe. Attempts to solve such problems have led to the development of New Physics models during the last decades. Supersymmetry is one such model which addresses the finetuning problem in the Higgs sector and provides viable Dark Matter candidates. Current high energy and high precision experiments give many new opportunities to probe the supersymmetric models. It is in this context that this thesis is written. Considering the Minimal Supersymmetric Standard Model MSSM, the simplest supersymmetric extension of the Standard Model of particle physics, and its conventional Dark Matter candidate, the neutralino, it is shown that collider constraints could provide information on the very early Universe at the inflation era. It is also demonstrated that the Indirect Detection of Dark Matter, despite several drawbacks, can be a powerful technique to probe supersymmetric Dark Matter models. Beyond the MSSM it is shown that unique characteristics of the Dark Matter candidate in the NMSSM could be probed at colliders. The study of a supersymmetric model with an extended gauge symmetry, the UMSSM, is also developed. The features of another Dark Matter candidate of this model, the RightHanded sneutrino, are analysed. More general constraints such as those coming from low energy observables are finally considered in this model.
Impact of Accretion Flow Dynamics on Gasdynamical Black Hole Mass Estimates ; At low redshift, the majority of supermassive black hole SMBH mass estimates are obtained from modeling stellar kinematics or ionized gas dynamics in the vicinity of the galaxy nucleus. For large early type galaxies, stellar kinematics models predict higher masses than gasdynamical models. In the case of M87, this discrepancy is larger than 2 sigma. Critical to gasdynamical modeling is the assumed underlying dynamical state of the gas that it lies on circular Keplerian orbits, potentially with some additional turbulent pressure support. This is inconsistent with models of the gas flow about lowaccretionrate SMBHs and at odds with observations of the Galactic Center. We present a simple model for nonKeplerian gas disks and explore their implications for SMBH mass measurements. We show that a larger central black hole with gas experiencing small amounts of subKeplerian motion can produce velocity curves similar to models that just contain circular Keplerian motions and a lower black hole mass. However, these nonKeplerian models are distinguishable from lowmass Keplerian models primarily through measurements of the velocity dispersion, wherein nonKeplerian models produce higher and narrower peak dispersions. Away from the galaxy center, but still within the circumnuclear gas disk, nonKeplerian models also become distinguishable from Keplerian models via a shift in the velocity curve. The velocity model presented in this paper is capable of resolving the discrepancy between the ionized gas dynamics and stellar kinematics mass estimates, and is applicable to gasdynamical mass estimates of SMBHs in general.
Axisymmetric lattice Boltzmann model for multiphase flows with large density ratio ; In this paper, a novel lattice Boltzmann LB model based on the AllenCahn phasefield theory is proposed for simulating axisymmetric multiphase flows. The most striking feature of the model is that it is able to handle multiphase flows with large density ratio, which is unavailable in all previous axisymmetric LB models. The present model utilizes two LB evolution equations, one of which is used to solve the fluid interface, and another is adopted to solve hydrodynamic properties. To simulate axisymmetric multiphase flows effectively, the appropriate source term and equilibrium distribution function are introduced into the LB equation for interface tracking, and simultaneously, a simple and efficient forcing distribution function is also delicately designed in the LB equation for hydrodynamic properties. Unlike many existing LB models, the source and forcing terms of the model arising from the axisymmetric effect include no additional gradients, and consequently, the present model contains only one nonlocal phase field variable, which in this regard is much simpler. We further conducted the ChapmanEnskog analysis to demonstrate the consistency of our present MRTLB model with the axisymmetric AllenCahn equation and hydrodynamic equations. A series of numerical examples, including a static droplet, the oscillation of a viscous droplet, the breakup of a liquid thread, and a bubble rising in a continuous phase, are used to test the performance of the proposed model. It is found that the present model can generate relatively small spurious velocities and can capture interfacial dynamics with higher accuracy than the previously improved axisymmetric LB model. Besides, it is also found that our present numerical results show excellent agreement with analytical solutions or available experimental data for a wide range of density ratios, which highlights the strengths of the proposed model.
Large Deviations in Renewal Models of Statistical Mechanics ; In Ref. [1] the author has recently established sharp large deviation principles for cumulative rewards associated with a discretetime renewal model, supposing that each renewal involves a broadsense reward taking values in a separable Banach space. The renewal model has been there identified with constrained and nonconstrained pinning models of polymers, which amount to Gibbs changes of measure of a classical renewal process. In this paper we show that the constrained pinning model is the common mathematical structure to the PolandScheraga model of DNA denaturation and to some relevant onedimensional lattice models of Statistical Mechanics, such as the FisherFelderhof model of fluids, the WakoSaitoMuñozEaton model of protein folding, and the TokarDreyssé model of strained epitaxy. Then, in the framework of the constrained pinning model, we develop an analytical characterization of the large deviation principles for cumulative rewards corresponding to multivariate deterministic rewards that are uniquely determined by, and at most of the order of magnitude of, the time elapsed between consecutive renewals. In particular, we outline the explicit calculation of the rate functions and successively we identify the conditions that prevent them from being analytic and that underlie affine stretches in their graphs. Finally, we apply the general theory to the number of renewals. From the point of view of Equilibrium Statistical Physics and Statistical Mechanics, cumulative rewards of the above type are the extensive observables that enter the thermodynamic description of the system. The number of renewals, which turns out to be the commonly adopted order parameter for the PolandScheraga model and also for the renewal models of Statistical Mechanics, is one of these observables.
A hybrid gravity and route choice model to assess vector traffic in largescale road networks ; Human traffic along roads can be a major vector for infectious diseases and invasive species. Though most road traffic is local, a small number of longdistance trips can suffice to move an invasion or disease front forward. Therefore, understanding how many agents travel over long distances and which routes they choose is key to successful management of diseases and invasions. Stochastic gravity models have been used to estimate the distribution of trips between origins and destinations of agents. However, in largescale systems it is hard to collect the data required to fit these models, as the number of longdistance travellers is small, and origins and destinations can have multiple access points. Therefore, gravity models often provide only relative measures of the agent flow. Furthermore, gravity models yield no insights into which roads agents use. We resolve these issues by combining a stochastic gravity model with a stochastic route choice model. Our hybrid model can be fitted to survey data collected at roads that are used by many longdistance travellers. This decreases the sampling effort, allows us to obtain absolute predictions of both vector pressure and pathways, and permits rigorous model validation. After introducing our approach in general terms, we demonstrate its benefits by applying it to the potential invasion of zebra and quagga mussels Dreissena spp. to the Canadian province British Columbia. The model yields an Rsquared value of 0.73 for variancecorrected agent counts at survey locations.
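A gravity kernel of the kind used as the first component of such hybrid models can be written very compactly. The sketch below is a generic, deterministic illustration with an assumed power-law distance deterrence; the weights, parameters and toy numbers are placeholders, and neither the stochastic fitting nor the route-choice component of the paper is reproduced here.

```python
# Generic gravity-model sketch: expected trips T_ij between origin i and destination j,
#   T_ij = k * O_i * A_j * d_ij**(-gamma)
# O_i: origin weight (e.g. resident boater population), A_j: destination attractiveness,
# d_ij: travel distance, k and gamma: parameters that would be fitted to survey counts.
import numpy as np

def gravity_trips(origin_weight, dest_weight, dist, k=1.0, gamma=2.0):
    O = np.asarray(origin_weight, dtype=float)[:, None]
    A = np.asarray(dest_weight, dtype=float)[None, :]
    return k * O * A * np.asarray(dist, dtype=float) ** (-gamma)

# Toy usage: 3 origins, 2 destinations (distances in km).
d = np.array([[10.0, 200.0], [50.0, 80.0], [300.0, 20.0]])
print(gravity_trips([100, 40, 70], [1.0, 0.3], d))
```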
Kalibre: Knowledgebased Neural Surrogate Model Calibration for Data Center Digital Twins ; Computational fluid dynamics CFD models have been widely used for prototyping data centers. Evolving them to highfidelity digital twins is desirable for the management and operations of largescale data centers. Manually calibrating CFD model parameters to achieve twinclass fidelity by a specially trained domain expert is tedious and laborintensive. To reduce manual efforts, existing automatic calibration approaches developed for various computational models apply heuristics to search model configurations within an empirically defined parameter bound. However, in the context of CFD, each search step requires the longlasting iterated solving of the CFD model, rendering these approaches impractical with increased model complexity. This paper presents Kalibre, a knowledgebased neural surrogate approach that performs CFD model calibration by iterating four key steps of (i) training a neural surrogate model based on CFDgenerated data, (ii) finding the optimal parameters at the moment through neural surrogate retraining based on sensormeasured data, (iii) configuring the found parameters back to the CFD model, and (iv) validating the CFD model using sensormeasured data as the ground truth. Thus, the parameter search is offloaded to the neural surrogate, which is far faster than the CFD model's iterated solving. To speed up the convergence of Kalibre, we integrate prior knowledge of the twinned data center's thermophysics into the neural surrogate design to improve its learning efficiency. With about five hours of computation on a 32core processor, Kalibre achieves mean absolute errors MAEs of 0.81°C and 0.75°C in calibrating two CFD models for two production data halls hosting thousands of servers each, while requiring fewer CFD solving processes than existing baseline approaches.
Multilabel learning for dynamic model type recommendation ; Dynamic selection techniques aim at selecting the local experts around each test sample in particular for performing its classification. While generating the classifier on a local scope may make it easier for singling out the locally competent ones, as in the online local pool OLP technique, using the same baseclassifier model in uneven distributions may restrict the local level of competence, since each region may have a data distribution that favors one model over the others. Thus, we propose in this work a problemindependent dynamic baseclassifier model recommendation for the OLP technique, which uses information regarding the behavior of a portfolio of models over the samples of different problems to recommend one or several of them on a perinstance manner. Our proposed framework builds a multilabel metaclassifier responsible for recommending a set of relevant model types based on the local data complexity of the region surrounding each test sample. The OLP technique then produces a local pool with the model that yields the highest probability score of the metaclassifier. Experimental results show that different data distributions favored different model types on a local scope. Moreover, based on the performance of an ideal model type selector, it was observed that there is a clear advantage in choosing a relevant model type for each test instance. Overall, the proposed model type recommender system yielded a statistically similar performance to the original OLP with fixed baseclassifier model. Given the novelty of the approach and the gap in performance between the proposed framework and the ideal selector, we regard this as a promising research direction. Code available at github.commarianaasouzadynamicmodelrecommender.
Estimating required 'lockdown' cycles before immunity to SARSCoV2: Modelbased analyses of susceptible population sizes, 'S0', in seven European countries including the UK and Ireland ; We used Bayesian model inversion to estimate epidemic parameters from the reported case and death rates from seven countries using data from late January 2020 to April 5th 2020. Two distinct generative model types were employed: first, a continuous time dynamicalsystems implementation of a SusceptibleExposedInfectiousRecovered SEIR model; and second, a partially observable Markov Decision Process MDP or hidden Markov model HMM implementation of an SEIR model. Both models parameterise the size of the initial susceptible population S0, as well as epidemic parameters. Parameter estimation (data fitting) was performed using a standard Bayesian scheme (variational Laplace) designed to allow for latent unobservable states and uncertainty in model parameters. Both models recapitulated the dynamics of transmissions and disease as given by case and death rates. The peaks of the current waves were predicted to be in the past for four countries (Italy, Spain, Germany and Switzerland) and to emerge in 0.5-2 weeks in Ireland and 1-3 weeks in the UK. For France one model estimated the peak within the past week and the other in the future in two weeks. Crucially, Maximum a posteriori MAP estimates of S0 for each country indicated effective population sizes of below 20% of total population size, under both the continuous time and HMM models. With a Bayesian weighted average across all seven countries and both models, we estimated that 6.4% of the total population would be immune. From the two models the maximum percentage of the effective population was estimated at 19.6% of the total population for the UK, 16.7% for Ireland, 11.4% for Italy, 12.8% for Spain, 18.8% for France, 4.7% for Germany and 12.9% for Switzerland. Our results indicate that after the current wave, a large proportion of the total population will remain without immunity.
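The first of the two generative models is a continuous-time SEIR system; a minimal deterministic sketch of such a model is shown below, with illustrative parameter values and initial fractions. The study instead infers these quantities, together with S0, by Bayesian (variational Laplace) inversion, which is not shown here.

```python
# Minimal deterministic SEIR sketch; state variables are population fractions.
# Parameter values and initial conditions are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    S, E, I, R = y
    return [-beta * S * I,                 # new exposures
            beta * S * I - sigma * E,      # exposed -> infectious
            sigma * E - gamma * I,         # infectious -> recovered
            gamma * I]

beta, sigma, gamma = 0.35, 1 / 5.0, 1 / 7.0   # assumed transmission / latency / recovery rates
y0 = [0.999, 0.0, 0.001, 0.0]                  # assumed initial fractions
sol = solve_ivp(seir, (0, 180), y0, args=(beta, sigma, gamma), dense_output=True)

t = np.linspace(0, 180, 181)
S, E, I, R = sol.sol(t)
print("peak infectious fraction:", I.max(), "at day", t[I.argmax()])
```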
Prospective Prediction of Future SARSCoV2 Infections Using Empirical Data on a National Level to Gauge Response Effectiveness ; Predicting an accurate expected number of future COVID19 cases is essential to properly evaluate the effectiveness of any treatment or preventive measure. This study aimed to identify the most appropriate mathematical model to prospectively predict the expected number of cases without any intervention. The total number of cases for the COVID19 epidemic in 28 countries was analyzed and fitted to several simple rate models including the logistic, Gompertz, quadratic, simple square, and simple exponential growth models. The resulting model parameters were used to extrapolate predictions for more recent data. While the Gompertz growth model (mean $R^2 = 0.998$) best fitted the current data, uncertainties in the eventual case limit made future predictions with logistic models prone to errors. Of the other models, the quadratic rate model (mean $R^2 = 0.992$) fitted the current data best for 25 (89%) of the countries, as determined by $R^2$ values. The simple square and quadratic models accurately predicted the number of future total cases 37 and 36 days in advance respectively, compared to only 15 days for the simple exponential model. The simple exponential model significantly overpredicted the total number of future cases while the quadratic and simple square models did not. These results demonstrated that accurate future predictions of the case load in a given country can be made significantly in advance without the need for complicated models of population behavior and generate a reliable assessment of the efficacy of current prescriptive measures against disease spread.
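Fitting the named growth curves to a cumulative case series is a routine nonlinear least-squares exercise; the sketch below fits Gompertz and quadratic forms to a synthetic series with scipy. The synthetic data, initial guesses and the simple R-squared computation are assumptions made for illustration only, not the study's data or procedure.

```python
# Sketch: fit Gompertz and quadratic growth curves to a cumulative case series.
# The series below is synthetic; the study instead uses reported totals per country.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, K, b, c):          # eventual limit K, shape b, rate c
    return K * np.exp(-b * np.exp(-c * t))

def quadratic(t, a, b, c):
    return a * t**2 + b * t + c

t = np.arange(60, dtype=float)
cases = gompertz(t, 50000, 8.0, 0.08) + np.random.default_rng(1).normal(0, 300, t.size)

p_gomp, _ = curve_fit(gompertz, t, cases, p0=[cases[-1] * 2, 5.0, 0.05], maxfev=20000)
p_quad, _ = curve_fit(quadratic, t, cases)

for name, model, p in [("Gompertz", gompertz, p_gomp), ("quadratic", quadratic, p_quad)]:
    resid = cases - model(t, *p)
    r2 = 1 - resid.var() / cases.var()
    print(f"{name}: R^2 = {r2:.4f}")
```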
SUBPLEX Towards a Better Understanding of Black Box Model Explanations at the Subpopulation Level ; Understanding the interpretation of machine learning ML models has been of paramount importance when making decisions with societal impacts such as transport control, financial activities, and medical diagnosis. While current model interpretation methodologies focus on using locally linear functions to approximate the models or creating selfexplanatory models that give explanations to each input instance, they do not focus on model interpretation at the subpopulation level, which is the understanding of model interpretations across different subset aggregations in a dataset. To address the challenges of providing explanations of an ML model across the whole dataset, we propose SUBPLEX, a visual analytics system to help users understand blackbox model explanations with subpopulation visual analysis. SUBPLEX is designed through an iterative design process with machine learning researchers to address three usage scenarios of reallife machine learning tasks model debugging, feature selection, and bias detection. The system applies novel subpopulation analysis on ML model explanations and interactive visualization to explore the explanations on a dataset with different levels of granularity. Based on the system, we conduct user evaluation to assess how understanding the interpretation at a subpopulation level influences the sensemaking process of interpreting ML models from a user's perspective. Our results suggest that by providing model explanations for different groups of data, SUBPLEX encourages users to generate more ingenious ideas to enrich the interpretations. It also helps users to acquire a tight integration between programming workflow and visual analytics workflow. Last but not least, we summarize the considerations observed in applying visualization to machine learning interpretations.
Evolution of dissipative and nondissipative universes in holographic cosmological models with a power-law term ; Density perturbations related to structure formations are expected to be different in dissipative and nondissipative universes, even if the background evolution of the two universes is the same. To clarify the difference between the two universes, firstorder density perturbations are studied, using two types of holographic cosmological models. The first type is a $\Lambda(t)$ model similar to a time-varying $\Lambda(t)$ cosmology for the nondissipative universe. The second type is a BV model similar to a bulk viscous cosmology for the dissipative universe. To systematically examine the two different universes, a power-law term proportional to $H^\alpha$ is applied to the $\Lambda(t)$ and BV (bulk-viscous-cosmology-like) models, assuming a flat FriedmannRobertsonWalker model for the late universe. Here, H is the Hubble parameter and $\alpha$ is a free parameter whose value is a real number. The $\Lambda(t)$-$H^\alpha$ and BV-$H^\alpha$ models are used to examine firstorder density perturbations for matter, in which the background evolution of the two models is equivalent. In addition, thermodynamic constraints on the two models are discussed, with a focus on the maximization of entropy on the horizon of the universe, extending previous analyses [Phys. Rev. D 100, 123545 (2019), arXiv:1911.08306; 102, 063512 (2020), arXiv:2006.09650]. Consequently, the $\Lambda(t)$-$H^\alpha$ model for small $\alpha$ values is found to be consistent with observations and satisfies the thermodynamic constraints, compared with the BV-$H^\alpha$ model. The results show that the nondissipative universe described by the $\Lambda(t)$-$H^\alpha$ model (similar to $\Lambda$ cold dark matter models) is likely favored.
A cardiac electromechanics model coupled with a lumped parameters model for closedloop blood circulation. Part I model derivation ; We propose an integrated electromechanical model of the human heart, with focus on the left ventricle, wherein biophysically detailed models describe the different physical phenomena concurring to the cardiac function. We model the subcellular generation of active force by means of an Artificial Neural Network, which is trained by a suitable Machine Learning algorithm from a collection of precomputed numerical simulations of a biophysically detailed, yet computational demanding, highfidelity model. To provide physiologically meaningful results, we couple the 3D electromechanical model with a closedloop 0D lumped parameters model describing the blood circulation in the whole cardiovascular network. We prove that the 3D0D coupling of the two models is compliant with the principle of energy conservation, which is achieved in virtue of energyconsistent boundary conditions that account for the interaction among cardiac chambers within the computational domain, pericardium and surrounding tissue. We thus derive an overall balance of mechanical energy for the 3D0D model. This provides a quantitative insight into the energy utilization, dissipation and transfer among the different compartments of the cardiovascular network and during different stages of the heartbeat. In virtue of this new model and the energy balance, we propose a new validation tool of heart energy usage against relationships used in the daily clinical practice. Finally, we provide a mathematical formulation of an inverse problem aimed at recovering the reference configuration of one or multiple cardiac chambers, starting from the stressed configuration acquired from medical imaging. This is fundamental to correctly initialize electromechanical simulations. Numerical methods and simulations of the 3D0D model will be detailed in Part II.
On the potential of sequential and nonsequential regression models for Sentinel1based biomass prediction in Tanzanian miombo forests ; This study derives regression models for aboveground biomass AGB estimation in miombo woodlands of Tanzania that utilise the high availability and low cost of Sentinel1 data. The limited forest canopy penetration of Cband SAR sensors along with the sparseness of available ground truth restrict their usefulness in traditional AGB regression models. Therefore, we propose to use AGB predictions based on airborne laser scanning ALS data as a surrogate response variable for SAR data. This dramatically increases the available training data and opens for flexible regression models that capture finescale AGB dynamics. This becomes a sequential modelling approach, where the first regression stage has linked in situ data to ALS data and produced the AGB prediction map; We perform the subsequent stage, where this map is related to Sentinel1 data. We develop a traditional, parametric regression model and alternative nonparametric models for this stage. The latter uses a conditional generative adversarial network cGAN to translate Sentinel1 images into ALSbased AGB prediction maps. The convolution filters in the neural networks make them contextual. We compare the sequential models to traditional, nonsequential regression models, all trained on limited AGB ground reference data. Results show that our newly proposed nonsequential Sentinel1based regression model performs better quantitatively than the sequential models, but achieves less sensitivity to finescale AGB dynamics. The contextual cGANbased sequential models best reproduce the distribution of ALSbased AGB predictions. They also reach a lower RMSE against in situ AGB data than the parametric sequential model, indicating a potential for further development.
Fast and accurate waveform modeling of longhaul multichannel optical fiber transmission using a hybrid modeldata driven scheme ; The modeling of optical wave propagation in optical fiber is a task of fast and accurate solving of the nonlinear Schrödinger equation NLSE, and can enable the optical system design, digital signal processing verification and fast waveform calculation. Traditional waveform modeling of fulltime and fullfrequency information is the splitstep Fourier method SSFM, which has long been regarded as challenging in longhaul wavelength division multiplexing WDM optical fiber communication systems because it is extremely timeconsuming. Here we propose a linearnonlinear feature decoupling distributed FDD waveform modeling scheme to model the longhaul WDM fiber channel, where the channel linear effects are modelled by the NLSEderived modeldriven methods and the nonlinear effects are modelled by the datadriven deep learning methods. Meanwhile, the proposed scheme only focuses on onespan fiber distance fitting, and then recursively transmits the model to achieve the required transmission distance. The proposed modeling scheme is demonstrated to have high accuracy, high computing speeds, and robust generalization abilities for different optical launch powers, modulation formats, channel numbers and transmission distances. The total running time of the FDD waveform modeling scheme for 41-channel 1040 km fiber transmission is only 3 minutes, versus more than 2 hours using the SSFM for each input condition, which achieves a 98% reduction in computing time. Considering the multiround optimization by adjusting system parameters, the complexity reduction is significant. The results represent a remarkable improvement in nonlinear fiber modeling and open up novel perspectives for solution of NLSElike partial differential equations and optical fiber physics problems.
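The baseline being accelerated is the split-step Fourier method for the scalar NLSE; a minimal single-channel, single-polarisation, first-order SSFM sketch is given below, with roughly standard single-mode-fiber parameter values assumed for illustration. The FDD learned-nonlinearity scheme itself is not reproduced.

```python
# Minimal split-step Fourier method (SSFM) sketch for the scalar NLSE
#   dA/dz = -(alpha/2) A - i (beta2/2) d^2A/dT^2 + i gamma |A|^2 A
# Single channel, single polarisation, simple first-order splitting; the parameter
# values are illustrative assumptions, not the paper's system setup.
import numpy as np

def ssfm(A, dt_ps, dz_km, n_steps, alpha=0.046, beta2=-21.7, gamma=1.3):
    """A: complex field envelope [sqrt(W)]; alpha [1/km], beta2 [ps^2/km],
    gamma [1/(W km)], dt_ps [ps], dz_km [km]."""
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt_ps)             # angular frequency [rad/ps]
    lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz_km)   # dispersion + loss per step
    for _ in range(n_steps):
        A = np.fft.ifft(np.fft.fft(A) * lin)                    # linear step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz_km)       # Kerr nonlinearity step
    return A

# Toy usage: propagate a weak Gaussian pulse over 80 km in 0.1 km steps.
t = np.arange(-512, 512) * 1.0                                  # time grid [ps]
pulse = np.sqrt(1e-3) * np.exp(-t**2 / (2 * 25.0**2))           # ~1 mW peak power
out = ssfm(pulse.astype(complex), dt_ps=1.0, dz_km=0.1, n_steps=800)
```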
Shuffle Private Linear Contextual Bandits ; Differential privacy (DP) has recently been introduced to linear contextual bandits to formally address the privacy concerns of the associated personalized services for participating users (e.g., recommendations). Prior work largely focuses on two trust models of DP: the central model, where a central server is responsible for protecting users' sensitive data, and the stronger local model, where information needs to be protected directly on the user side. However, there remains a fundamental gap in the utility achieved by learning algorithms under these two privacy models, e.g., $\tilde{O}(\sqrt{T})$ regret in the central model as compared to $\tilde{O}(T^{3/4})$ regret in the local model, if all users are unique within a learning horizon $T$. In this work, we aim to achieve a stronger model of trust than the central model, while suffering a smaller regret than the local model, by considering the recently popular shuffle model of privacy. We propose a general algorithmic framework for linear contextual bandits under the shuffle trust model, in which there exists a trusted shuffler between the users and the central server that randomly permutes a batch of users' data before sending it to the server. We then instantiate this framework with two specific shuffle protocols: one relying on privacy amplification of local mechanisms, and another incorporating a protocol for summing vectors and matrices of bounded norms. We prove that both instantiations lead to regret guarantees that significantly improve on those of the local model, and can potentially be of the order $\tilde{O}(T^{3/5})$ if all users are unique. We also verify this regret behavior with simulations on synthetic data. Finally, under the practical scenario of non-unique users, we show that the regret of our shuffle private algorithm scales as $\tilde{O}(T^{2/3})$, which matches what the central model could achieve in this case.
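A toy sketch of the shuffle trust model for one batch of bandit users: each user locally randomizes its sufficient statistics, a trusted shuffler uniformly permutes the batch, and the server aggregates the permuted reports into a regularized least-squares estimate. The Gaussian local mechanism and the noise scale are placeholders, not the paper's amplification or bounded-norm vector-sum protocols.

```python
# Shuffle-model pipeline for a batch of linear-bandit users: randomize -> shuffle -> aggregate.
import numpy as np

rng = np.random.default_rng(1)
d, batch_size, sigma = 5, 100, 0.5

def local_randomizer(x, reward):
    """Each user perturbs its statistics (feature outer product and reward-weighted feature)."""
    noisy_cov = np.outer(x, x) + rng.normal(scale=sigma, size=(d, d))
    noisy_xy = reward * x + rng.normal(scale=sigma, size=d)
    return noisy_cov, noisy_xy

def shuffler(reports):
    """Trusted shuffler: uniformly permutes the batch so the server cannot link reports to users."""
    return [reports[i] for i in rng.permutation(len(reports))]

# One batch of users interacting with the bandit (synthetic features and rewards).
users = [(rng.normal(size=d), rng.normal()) for _ in range(batch_size)]
reports = [local_randomizer(x, r) for x, r in users]
shuffled = shuffler(reports)

# Server-side aggregation: regularized least-squares estimate from the shuffled batch.
V = np.eye(d) + sum(c for c, _ in shuffled)
b = sum(u for _, u in shuffled)
theta_hat = np.linalg.solve(V, b)
```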
Modeling the propagation of tumor fronts with shortest path and diffusion models: implications for the definition of the clinical target volume ; Objective: The overarching objective is to make the definition of the clinical target volume (CTV) in radiation oncology less subjective and more scientifically based. The specific objective of this study is to investigate similarities and differences between two methods that model tumor spread beyond the visible gross tumor volume (GTV): 1. the shortest path model, which is the standard method of adding a geometric GTV-CTV margin, and 2. the reaction-diffusion model. Approach: These two models of the invisible tumor "fire front" are defined and compared in mathematical terms. The models are applied to example cases that represent tumor spread in non-uniform and anisotropic media with anatomical barriers. Main Results: The two seemingly disparate models bring forth traveling waves that can be associated with the front of tumor growth outward from the GTV. The shape of the fronts is similar for both models. Differences are seen in cases where the diffusive flow is reduced due to anatomical barriers, and in complex, spatially non-uniform cases. The diffusion model generally leads to smoother fronts. The smoothness can be controlled with a parameter defined by the ratio of the diffusion coefficient to the proliferation rate. Significance: Defining the CTV has been described as the weakest link of the radiotherapy chain. There are many similarities between the mathematical description and behavior of the common geometric GTV-CTV expansion method and the definition of the CTV tumor front via the reaction-diffusion model. Its mechanistic basis and controllable smoothness make the diffusion model an attractive alternative to the standard GTV-CTV margin model.
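For concreteness, a compact statement of the reaction-diffusion (Fisher-KPP-type) model referred to above, together with the classical traveling-wave relations that motivate the smoothness parameter; the notation is generic and may differ from the paper's.

```latex
% Reaction-diffusion model of normalized tumor cell density u(x,t):
% time evolution = diffusive spread + logistic proliferation
\frac{\partial u}{\partial t} \;=\; \nabla \cdot \big( D(\mathbf{x})\, \nabla u \big) \;+\; \rho\, u\,(1 - u)
% In a uniform, isotropic medium this admits traveling-wave fronts with asymptotic speed
v \;=\; 2\sqrt{D\rho}
% and a characteristic front width (the smoothness scale of the invisible tumor front)
\ell \;\sim\; \sqrt{D/\rho}
```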
Model Joins: Enabling Analytics Over Joins of Absent Big Tables ; This work is motivated by two key facts. First, it is highly desirable to be able to learn and perform knowledge discovery and analytics (LKD) tasks without the need to access raw-data tables. This may be because organizations find it increasingly frustrating and costly to manage and maintain ever-growing tables, or for privacy reasons. Hence, compact models can be derived from the raw data and used in place of the tables. Second, oftentimes LKD tasks are to be performed on a potentially very large table which is itself the result of joining separate, potentially very large relational tables. But how can one do this when the individual to-be-joined tables are absent? Here, we pose the following fundamental questions. Q1: How can one join models of absent/deleted tables, or join models with other tables, in a way that enables LKD as if it were performed on the join of the actual raw tables? Q2: What are appropriate models to use per table? Q3: As the model join would be an approximation of the actual data join, how can one evaluate the quality of the model join result? This work puts forth a framework, Model Join, addressing these challenges. The framework integrates and joins the per-table models of the absent tables and generates a uniform and independent sample that is a high-quality approximation of a uniform and independent sample of the actual raw-data join. The approximation stems from the models, not from the Model Join framework itself. The sample obtained by the Model Join can be used to perform downstream LKD tasks, such as approximate query processing, classification, clustering, regression, association rule mining, visualization, and so on. To our knowledge, this is the first work with this agenda and these solutions. Detailed experiments with TPC-DS data and synthetic data showcase Model Join's usefulness.
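A toy illustration of the Model Join idea: per-table models (here, compact per-key summaries assumed to have been fitted before the raw tables were deleted) are sampled and their samples combined on the shared key, yielding an approximate sample of the raw-data join for downstream LKD tasks. The per-table model choice, table names R and S, and the per-key Gaussian attributes are illustrative assumptions, not the framework's actual model classes.

```python
# Toy Model Join: sample the per-table models and join the samples on the shared key.
import numpy as np

rng = np.random.default_rng(2)

# Per-table models: for each join-key value, a (count, mean, std) summary of one attribute.
model_R = {k: {"count": c, "mean": m, "std": s}
           for k, c, m, s in [(1, 500, 10.0, 2.0), (2, 300, 20.0, 3.0)]}
model_S = {k: {"count": c, "mean": m, "std": s}
           for k, c, m, s in [(1, 200, 1.0, 0.5), (2, 400, 5.0, 1.0)]}

def model_join_sample(model_R, model_S, n):
    """Draw an approximately uniform, independent sample of the join of R and S from the models."""
    keys = sorted(set(model_R) & set(model_S))
    # Each key contributes count_R * count_S tuples to the join; sample keys proportionally.
    weights = np.array([model_R[k]["count"] * model_S[k]["count"] for k in keys], dtype=float)
    drawn = rng.choice(keys, size=n, p=weights / weights.sum())
    rows = []
    for k in drawn:
        a = rng.normal(model_R[k]["mean"], model_R[k]["std"])   # sampled R-attribute
        b = rng.normal(model_S[k]["mean"], model_S[k]["std"])   # sampled S-attribute
        rows.append((k, a, b))
    return rows

approx_join_sample = model_join_sample(model_R, model_S, n=1000)
# Downstream LKD tasks (approximate query processing, regression, clustering, ...) run on this sample.
```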
Guaranteed Conformance of Neurosymbolic Models to Natural Constraints ; Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. They are particularly useful in modeling medical systems, where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model conforms to established knowledge from the natural sciences. Such knowledge is often available, or can be distilled into a (possibly black-box) model. For instance, an F1 racing car should conform to Newton's laws, which are encoded within a unicycle model. In this light, we consider the following problem: given a model M and a state-transition dataset, we wish to best approximate the system model while remaining within a bounded distance of M. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples called memories, using the idea of a growing neural gas. Next, using these memories, we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network in each subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this leads only to a bounded increase in approximation error, which can be controlled by increasing the number of memories. We show experimentally that, on three case studies (car model, drones, and artificial pancreas), our constrained neurosymbolic models conform to the specified models (each encoding various constraints) with order-of-magnitude improvements compared to the augmented Lagrangian and vanilla training methods. Our code can be found at https://github.com/kaustubhsridhar/ConstrainedModels
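A minimal sketch of the symbolic-wrapper idea: assign each state to its nearest memory, precompute a per-partition interval bound from the trusted reference model M, and clip the network's prediction into that interval. Randomly selected prototypes stand in for growing-neural-gas memories, and the reference model, the epsilon tolerance, and all names are illustrative assumptions, not the paper's exact construction.

```python
# Conformance wrapper: nearest-memory partition + per-partition clipping to bounds derived from M.
import numpy as np

rng = np.random.default_rng(3)

def reference_model_M(x):
    """Trusted (possibly black-box) prior-knowledge model, e.g., a simple physics model."""
    return 0.5 * x

# 1) Distill the dataset into a few representative memories (stand-in for growing neural gas).
states = rng.uniform(-1.0, 1.0, size=(5000, 1))
n_memories = 16
memories = np.sort(rng.choice(states[:, 0], size=n_memories, replace=False)).reshape(-1, 1)

# 2) Per-partition bounds: reference prediction at the memory, widened by a tolerance epsilon.
epsilon = 0.1
bounds = {i: (reference_model_M(m[0]) - epsilon, reference_model_M(m[0]) + epsilon)
          for i, m in enumerate(memories)}

def conformant_predict(x, raw_network_prediction):
    """Clip the network's output into the bound of the partition (nearest memory) containing x."""
    idx = int(np.argmin(np.abs(memories[:, 0] - x)))
    lo, hi = bounds[idx]
    return float(np.clip(raw_network_prediction, lo, hi))

# An unconstrained network output is forced back to within epsilon of the reference model.
print(conformant_predict(0.3, raw_network_prediction=5.0))
```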
Cosmological Tests of $f(R,G,\mathcal{T})$ Dark Energy Model in FRW Universe ; This research article presents a new cosmological model formulated within the $f(R,G,\mathcal{T})$ framework, focusing on the observational signatures and parameter constraints of the model. The Markov Chain Monte Carlo (MCMC) technique is employed to effectively explore the parameter space using data from 36 Cosmic Chronometers and 1701 Pantheon Plus data points. A comparative analysis is conducted between the proposed $f(R,G,\mathcal{T})$ model and the widely accepted $\Lambda$CDM model, considering various cosmological parameters such as the deceleration, snap, and jerk parameters. By evaluating these parameters, valuable insights into the dynamics and evolution of the universe within the context of the new model are obtained. Diagnostic tests, including the Statefinder and Om diagnostics, are performed to further investigate the behavior and consistency of the $f(R,G,\mathcal{T})$ model. These tests provide deeper insight into the properties of the model and its compatibility with observational data. The model is subjected to statistical analysis using information criteria to rigorously assess its goodness of fit to the data. This analysis helps determine the level of agreement between the $f(R,G,\mathcal{T})$ model and the observational data, establishing the viability and reliability of the proposed cosmological framework. The results highlight the potential of the $f(R,G,\mathcal{T})$ framework for understanding the fundamental aspects of the universe's evolution and dynamics. The comparative analysis with the $\Lambda$CDM model, along with the comprehensive diagnostic tests performed, demonstrates the efficacy and validity of the $f(R,G,\mathcal{T})$ model in explaining observed cosmological phenomena. These findings contribute to the ongoing pursuit of accurate and comprehensive models that provide a deeper understanding of the nature of our universe.
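A schematic of the MCMC parameter-estimation step against cosmic-chronometer H(z) data using the emcee sampler. The flat $\Lambda$CDM-like expansion history, the flat priors, and the three mock data points below are stand-ins for the paper's $f(R,G,\mathcal{T})$ expansion history and its 36-point compilation.

```python
# Schematic MCMC fit of an expansion history H(z) to cosmic-chronometer data with emcee.
import numpy as np
import emcee

# Mock cosmic-chronometer data: (z, H(z) in km/s/Mpc, sigma_H). Illustrative values only.
z_obs = np.array([0.17, 0.40, 0.90])
H_obs = np.array([83.0, 95.0, 117.0])
sig_H = np.array([8.0, 17.0, 23.0])

def H_model(z, H0, Om):
    """Placeholder flat LambdaCDM-like H(z); the paper's model would replace this."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def log_prob(theta):
    H0, Om = theta
    if not (50 < H0 < 100 and 0.05 < Om < 0.6):     # flat priors
        return -np.inf
    chi2 = np.sum(((H_obs - H_model(z_obs, H0, Om)) / sig_H) ** 2)
    return -0.5 * chi2

ndim, nwalkers = 2, 32
p0 = np.array([70.0, 0.3]) + 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(samples.mean(axis=0))     # posterior means of (H0, Om)
```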
Cosmic Microwave Anisotropies from Topological Defects in an Open Universe ; We present a general formalism for computing Cosmic Background Radiation (CBR) and density fluctuations in open models with stiff sources. We find analytic Green's functions for the linearized Einstein equations in the presence of stiff sources and use this formalism to estimate the amplitude and harmonic spectrum of microwave background fluctuations produced by topological defects in an open universe. Unlike inflationary models, which predict a flat universe and a spectrum of CBR fluctuations that is enhanced on large angular scales, defect models predict that CBR fluctuations are suppressed on angular scales larger than that subtended by the curvature scale. In an $\Omega = 0.2$-$0.4$ universe, these models, when normalized to the amplitude of CBR fluctuations observed by COBE, require a moderate bias factor, 2-3, to be compatible with the observed fluctuations in galaxy counts. In these models, accurate predictions can be made that are testable through CBR experiments in the near future. A CBR measurement of $\Omega$ would then be possible, up to the limit imposed by cosmic variance. We discuss some of the philosophical implications of an open model and propose a solution to the flatness problem.
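A schematic of the stiff-source Green's-function approach referred to above, in which the linearized field equations are solved by convolving the prescribed source stress-energy with the Green's function of the linear operator; the notation is generic and may differ from the paper's open-universe formalism.

```latex
% Linearized field equations with a stiff (externally prescribed) source \Theta_{\mu\nu}:
\mathcal{D}\, h_{\mu\nu}(\mathbf{x},\tau) \;=\; 16\pi G\, \Theta_{\mu\nu}(\mathbf{x},\tau)
% Formal solution by convolution with the Green's function of the linear operator \mathcal{D}:
h_{\mu\nu}(\mathbf{x},\tau) \;=\; 16\pi G \int d\tau'\, d^3x'\;
    G(\mathbf{x},\tau;\mathbf{x}',\tau')\, \Theta_{\mu\nu}(\mathbf{x}',\tau')
% In an open universe, modes on comoving scales larger than the curvature radius are suppressed,
% which cuts off the CBR fluctuations at large angular scales.
```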