Latent Diffusion Models for Structural Component Design ; Recent advances in generative modeling, namely diffusion models, have revolutionized the field, enabling high-quality image generation tailored to user needs. This paper proposes a framework for the generative design of structural components. Specifically, we employ a latent diffusion model to generate potential designs of a component that can satisfy a set of problem-specific loading conditions. One of the distinct advantages our approach offers over other generative approaches, such as generative adversarial networks (GANs), is that it permits the editing of existing designs. We train our model using a dataset of geometries obtained from structural topology optimization utilizing the SIMP algorithm. Consequently, our framework generates inherently near-optimal designs. Our work presents quantitative results that support the structural performance of the generated designs and the variability in potential candidate designs. Furthermore, we provide evidence of the scalability of our framework by operating over voxel domains with resolutions varying from 32³ to 128³. Our framework can be used as a starting point for generating novel near-optimal designs similar to topology-optimized designs.
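The sampling loop below is a minimal, generic sketch of how a latent diffusion model of the kind described above could produce a candidate design: it reverse-diffuses a latent code conditioned on the loading conditions and then decodes it into a voxel grid. The `unet` and `decoder` callables, their signatures, and the DDPM-style linear noise schedule are assumptions for illustration, not the paper's implementation.

```python
import torch

def sample_latent_diffusion(unet, decoder, cond, latent_shape, T=1000, device="cpu"):
    """Reverse-diffuse a latent conditioned on loading conditions, then decode to a voxel grid."""
    betas = torch.linspace(1e-4, 0.02, T, device=device)           # assumed linear beta schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    z = torch.randn(latent_shape, device=device)                    # start from pure noise
    for t in reversed(range(T)):
        eps_hat = unet(z, torch.tensor([t], device=device), cond)   # predicted noise (hypothetical signature)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (z - coef * eps_hat) / torch.sqrt(alphas[t])
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[t]) * noise                      # standard DDPM reverse step
    return decoder(z)                                                # e.g. a 32³..128³ voxel design
```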
EvoText: Enhancing Natural Language Generation Models via Self-Escalation Learning for Up-to-Date Knowledge and Improved Performance ; In recent years, pretrained models have been widely used in various fields, including natural language understanding, computer vision, and natural language generation. However, the performance of these language generation models is highly dependent on the model size and the dataset size. While larger models excel in some aspects, they cannot learn up-to-date knowledge and are relatively difficult to relearn. In this paper, we introduce EvoText, a novel training method that enhances the performance of any natural language generation model without requiring additional datasets during the entire training process (although a prior dataset is necessary for pretraining). EvoText employs two models: G, a text generation model, and D, a model that can determine whether the data generated by G is legitimate. Initially, the fine-tuned D model serves as the knowledge base. The text generated by G is then input to D to determine whether it is legitimate. Finally, G is fine-tuned based on D's output. EvoText enables the model to learn up-to-date knowledge through a self-escalation process that builds on a priori knowledge. When EvoText needs to learn something new, it simply fine-tunes the D model. Our approach applies to autoregressive language modeling for all Transformer classes. With EvoText, eight models achieved stable improvements in seven natural language processing tasks without any changes to the model structure.
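A minimal sketch of the self-escalation round described above, with placeholder method names (`generate`, `is_legitimate`, `finetune`) rather than the paper's API: G proposes text, D filters it, and G is fine-tuned on what D accepts.

```python
def evotext_round(G, D, prompts, finetune):
    generated = [G.generate(p) for p in prompts]              # step 1: G produces candidate text
    accepted = [t for t in generated if D.is_legitimate(t)]   # step 2: D acts as the knowledge base
    finetune(G, accepted)                                     # step 3: G is updated on D-approved text
    return accepted
```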
Modelling and analysis of rank-ordered data with ties via a generalized Plackett-Luce model ; A simple generative model for rank-ordered data with ties is presented. The model is based on ordering geometric latent variables and can be seen as the discrete counterpart of the Plackett-Luce (PL) model, a popular, relatively tractable model for permutations. The model, which will be referred to as the GPL model (for generalized, or geometric, Plackett-Luce model), contains the PL model as a limiting special case. A closed-form expression for the likelihood is derived. With a focus on Bayesian inference via data augmentation, simple Gibbs sampling and EM algorithms are derived for both the general case of multiple comparisons and the special case of paired comparisons. The methodology is applied to several real data examples. The examples highlight the flexibility of the GPL model to cope with a range of data types, the simplicity and efficiency of the inferential algorithms, and the ability of the GPL model to naturally facilitate predictive inference due to its simple generative construction.
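The snippet below illustrates the generative construction described above: draw one geometric latent per item and rank items by those latents, so that equal latent values produce ties. The link from item worths to geometric parameters (p_i = theta_i / (1 + theta_i)) is an assumption for illustration, not necessarily the paper's parameterisation.

```python
import numpy as np

def sample_gpl_ranking(theta, rng=None):
    rng = rng or np.random.default_rng()
    p = theta / (1.0 + theta)                    # assumed link from item worths to geometric parameters
    z = rng.geometric(p)                         # one discrete latent per item; equal values yield ties
    _, ranks = np.unique(z, return_inverse=True) # dense ranks: items sharing a latent share a rank
    return ranks + 1                             # 1 = most preferred group

# Example: three items with worths 2.0, 1.0, 0.5
print(sample_gpl_ranking(np.array([2.0, 1.0, 0.5])))
```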
Knowledge Distillation of Large Language Models ; Knowledge distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are primarily applied to white-box classification models or to training small models to imitate black-box model APIs like ChatGPT. How to effectively distill the knowledge from white-box generative LLMs is still underexplored, which becomes more and more important with the prosperity of LLMs. In this work, we propose MiniLLM, which distills smaller language models from generative larger language models. We first replace the forward Kullback-Leibler divergence (KLD) objective in the standard KD approaches with reverse KLD, which is more suitable for KD on generative language models, to prevent the student model from overestimating the low-probability regions of the teacher distribution. Then, we derive an effective optimization approach to learn this objective. Extensive experiments in the instruction-following setting show that the MiniLLM models generate more precise responses with higher overall quality, lower exposure bias, better calibration, and higher long-text generation performance. Our method is also scalable for different model families with 120M to 13B parameters. We will release our code and model checkpoints at https://aka.ms/MiniLLM.
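As a reference point, the reverse KLD objective named above can be written as KL(q_student || p_teacher) over a shared vocabulary. The sketch below is the generic token-level estimator computed from logits, not MiniLLM's full policy-gradient optimization.

```python
import torch
import torch.nn.functional as F

def reverse_kl(student_logits, teacher_logits):
    log_q = F.log_softmax(student_logits, dim=-1)    # student distribution (the one being trained)
    log_p = F.log_softmax(teacher_logits, dim=-1)    # teacher distribution (held fixed)
    q = log_q.exp()
    return (q * (log_q - log_p)).sum(dim=-1).mean()  # E_q[log q - log p], averaged over positions
```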
Unlocking Model Insights: A Dataset for Automated Model Card Generation ; Language models (LMs) are no longer restricted to the ML community, and instruction-tuned LMs have led to a rise in autonomous AI agents. As the accessibility of LMs grows, it is imperative that an understanding of their capabilities, intended usage, and development cycle also improves. Model cards are a popular practice for documenting detailed information about an ML model. To automate model card generation, we introduce a dataset of 500 question-answer pairs for 25 ML models that cover crucial aspects of the model, such as its training configurations, datasets, biases, architecture details, and training resources. We employ annotators to extract the answers from the original paper. Further, we explore the capabilities of LMs in generating model cards by answering questions. Our initial experiments with ChatGPT-3.5, LLaMa, and Galactica showcase a significant gap in these LMs' understanding of research papers as well as in generating factual textual responses. We posit that our dataset can be used to train models to automate the generation of model cards from paper text and reduce human effort in the model card curation process. The complete dataset is available at https://osf.io/hqt7p/?view_only=3b9114e3904c4443bcd9f5c270158d37
Continual Learning of Generative Models with Limited Data: From Wasserstein-1 Barycenter to Adaptive Coalescence ; Learning generative models is challenging for a network edge node with limited data and computing power. Since tasks in similar environments share model similarity, it is plausible to leverage pretrained generative models from the cloud or other edge nodes. Appealing to optimal transport theory tailored towards Wasserstein-1 generative adversarial networks (WGAN), this study aims to develop a framework which systematically optimizes continual learning of generative models using local data at the edge node while exploiting adaptive coalescence of pretrained generative models. Specifically, by treating the knowledge transfer from other nodes as Wasserstein balls centered around their pretrained models, continual learning of generative models is cast as a constrained optimization problem, which is further reduced to a Wasserstein-1 barycenter problem. A two-stage approach is devised accordingly: (1) the barycenters among the pretrained models are computed offline, where displacement interpolation is used as the theoretic foundation for finding adaptive barycenters via a recursive WGAN configuration; (2) the barycenter computed offline is used as meta-model initialization for continual learning, and fast adaptation is then carried out to find the generative model using the local samples at the target edge node. Finally, a weight ternarization method, based on joint optimization of weights and thresholds for quantization, is developed to further compress the generative model.
ProphetNet-X: Large-Scale Pretraining Models for English, Chinese, Multilingual, Dialog, and Code Generation ; Now, the pretraining technique is ubiquitous in the natural language processing field. ProphetNet is a pretraining-based natural language generation method which shows powerful performance on English text summarization and question generation tasks. In this paper, we extend ProphetNet into other domains and languages, and present the ProphetNet family of pretraining models, named ProphetNet-X, where X can be English, Chinese, Multilingual, and so on. We pretrain a cross-lingual generation model ProphetNet-Multi, a Chinese generation model ProphetNet-Zh, and two open-domain dialog generation models, ProphetNet-Dialog-En and ProphetNet-Dialog-Zh. In addition, we provide a PLG (Programming Language Generation) model, ProphetNet-Code, to show the generation performance beyond NLG (Natural Language Generation) tasks. In our experiments, ProphetNet-X models achieve new state-of-the-art performance on 10 benchmarks. All the models of ProphetNet-X share the same model structure, which allows users to easily switch between different models. We make the code and models publicly available, and we will keep updating more pretraining models and fine-tuning scripts.
Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation ; The integration of the Vector Quantised Variational AutoEncoder (VQ-VAE) with autoregressive models as the generation part has yielded high-quality results in image generation. However, autoregressive models strictly follow a progressive scanning order during the sampling phase, which leaves the existing VQ-series models hardly able to escape the trap of lacking global information. Denoising Diffusion Probabilistic Models (DDPMs) in the continuous domain have shown the capability to capture global context while generating high-quality images. In the discrete state space, some works have demonstrated the potential to perform text generation and low-resolution image generation. We show that, with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high-fidelity images with global context, which compensates for the deficiency of the classical autoregressive model along pixel space. Meanwhile, the integration of the discrete VAE with the diffusion model resolves the drawbacks of conventional autoregressive models being oversized and of the diffusion model demanding excessive time in the sampling process when generating images. It is found that the quality of the generated images is heavily dependent on the discrete visual codebook. Extensive experiments demonstrate that the proposed Vector Quantised Discrete Diffusion Model (VQ-DDM) is able to achieve performance comparable to top-tier methods with low complexity. It also demonstrates outstanding advantages over other vector-quantised autoregressive models on image inpainting tasks without additional training.
Generative Noisy-Label Learning by Implicit Discriminative Approximation with Partial Label Prior ; The learning with noisy labels has been addressed with both discriminative and generative models. Although discriminative models have dominated the field due to their simpler modeling and more efficient computational training processes, generative models offer a more effective means of disentangling clean and noisy labels and improving the estimation of the label transition matrix. However, generative approaches maximize the joint likelihood of noisy labels and data using a complex formulation that only indirectly optimizes the model of interest associating data and clean labels. Additionally, these approaches rely on generative models that are challenging to train and tend to use uninformative clean label priors. In this paper, we propose a new generative noisy-label learning approach that addresses these three issues. First, we propose a new model optimisation that directly associates data and clean labels. Second, the generative model is implicitly estimated using a discriminative model, eliminating the inefficient training of a generative model. Third, we propose a new informative label prior inspired by partial label learning as the supervision signal for noisy-label learning. Extensive experiments on several noisy-label benchmarks demonstrate that our generative model provides state-of-the-art results while maintaining a similar computational complexity to discriminative models.
Linguistics Computation, Automatic Model Generation, and Intensions ; Techniques are presented for defining models of computational linguistics theories. The methods of generalized diagrams that were developed by this author for modeling artificial intelligence planning and reasoning are shown to be applicable to models of computation of linguistics theories. It is shown that for extensional and intensional interpretations, models can be generated automatically which assign meaning to computations of linguistics theories for natural languages. Keywords: Computational Linguistics, Reasoning Models, G-diagrams for Models, Dynamic Model Implementation, Linguistics and Logics for Artificial Intelligence
Improved Classification Based on Deep Belief Networks ; For better classification, generative models are used to initialize the model and model features before training a classifier. Typically, this requires solving separate unsupervised and supervised learning problems. Generative restricted Boltzmann machines and deep belief networks (DBNs) are widely used for unsupervised learning. We developed several supervised models based on DBNs in order to improve this two-phase strategy. Modifying the loss function to account for expectation with respect to the underlying generative model, introducing weight bounds, and multi-level programming are applied in model development. The proposed models capture both unsupervised and supervised objectives effectively. The computational study verifies that our models perform better than the two-phase training approach.
Analytical Formulation of the Block-Constrained Configuration Model ; We provide a novel family of generative blockmodels for random graphs that naturally incorporates degree distributions: the block-constrained configuration model. Block-constrained configuration models build on the generalised hypergeometric ensemble of random graphs and extend the well-known configuration model by enforcing block constraints on the edge generation process. The resulting models are analytically tractable and practical to fit even to large networks. These models provide a new, flexible tool for the study of community structure and for network science in general, where modelling networks with heterogeneous degree distributions is of central importance.
Application of the Generalized Linear Models in Actuarial Framework ; This paper aims to review the methodology behind the generalized linear models which are used in analyzing actuarial situations instead of ordinary multiple linear regression. We introduce how to assess the adequacy of the model, which includes comparing nested models using the deviance and the scaled deviance. The Akaike information criterion is proposed as a comprehensive tool for selecting the adequate model. We model a simple automobile portfolio using generalized linear models, and use the best chosen model to predict the number of claims made by the policyholders in the portfolio.
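A hedged illustration of the workflow described above using statsmodels (the paper's own portfolio data and software are not specified): fit Poisson GLMs for claim counts, compare nested models via the drop in deviance, and use AIC for model selection before predicting expected claim counts.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy automobile portfolio (illustrative data, not the paper's)
policies = pd.DataFrame({
    "claims":   [0, 1, 0, 2, 0, 1, 3, 0],
    "age":      [23, 45, 36, 52, 29, 41, 60, 33],
    "car_type": ["A", "B", "A", "B", "A", "B", "B", "A"],
})

small = smf.glm("claims ~ age", data=policies, family=sm.families.Poisson()).fit()
full  = smf.glm("claims ~ age + car_type", data=policies, family=sm.families.Poisson()).fit()

print(small.deviance - full.deviance)   # drop in deviance from the extra term (nested comparison)
print(small.aic, full.aic)              # Akaike information criterion for model selection
print(full.predict(policies).round(2))  # expected number of claims per policyholder
```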
A general approach to the exact localized transition points of 1D mosaic disorder models ; In this paper, we present a general correspondence between the mosaic and non-mosaic models, which can be used to obtain the exact solution for the mosaic ones. This relation holds not only for the quasicrystal models, but also for the Anderson models. Despite the different localization properties of the specific models, this relationship shares a unified form. Applying our method to the mosaic Anderson models, we find that there is a discrete set of extended states. Finally, we also give the general analytical mobility edge for the mosaic slowly varying potential models and the mosaic Ganeshan-Pixley-Das Sarma models.
Generalized models as a universal approach to the analysis of nonlinear dynamical systems ; We present a universal approach to the investigation of the dynamics in generalized models. In these models the processes that are taken into account are not restricted to specific functional forms. Therefore, a single generalized model can describe a class of systems which share a similar structure. Despite this generality, the proposed approach allows us to study the dynamical properties of generalized models efficiently in the framework of local bifurcation theory. The approach is based on a normalization procedure that is used to identify natural parameters of the system. The Jacobian in a steady state is then derived as a function of these parameters. The analytical computation of local bifurcations using computer algebra reveals conditions for the local asymptotic stability of steady states and provides certain insights on the global dynamics of the system. The proposed approach yields a close connection between modelling and nonlinear dynamics. We illustrate the investigation of generalized models by considering examples from three different disciplines of science: a socio-economic model of dynastic cycles in China, a model for a coupled laser system, and a general ecological food web.
PeriodNet: A non-autoregressive waveform generation model with a structure separating periodic and aperiodic components ; We propose PeriodNet, a non-autoregressive (non-AR) waveform generation model with a new model structure for modeling periodic and aperiodic components in speech waveforms. Non-AR waveform generation models can generate speech waveforms in parallel and can be used as a speech vocoder by conditioning on an acoustic feature. Since a speech waveform contains periodic and aperiodic components, both components should be appropriately modeled to generate a high-quality speech waveform. However, it is difficult to decompose the components from a natural speech waveform in advance. To address this issue, we propose a parallel model and a series model structure separating periodic and aperiodic components. The features of our proposed models are that explicit periodic and aperiodic signals are taken as input, and that external periodic/aperiodic decomposition is not needed in training. Experiments using a singing voice corpus show that our proposed structure improves the naturalness of the generated waveform. We also show that speech waveforms with a pitch outside the training data range can be generated with more naturalness.
GP-HD: Using Genetic Programming to Generate Dynamical Systems Models for Health Care ; The huge wealth of data in the health domain can be exploited to create models that predict development of health states over time. Temporal learning algorithms are well suited to learn relationships between health states and make predictions about their future developments. However, these algorithms (1) either focus on learning one generic model for all patients, providing general insights but often with limited predictive performance, or (2) learn individualized models from which it is hard to derive generic concepts. In this paper, we present a middle ground, namely parameterized dynamical systems models that are generated from data using a Genetic Programming (GP) framework. A fitness function suitable for the health domain is exploited. An evaluation of the approach in the mental health domain shows that the performance of the model generated by the GP is on par with a dynamical systems model developed based on domain knowledge, significantly outperforms a generic Long Short-Term Memory (LSTM) model, and in some cases also outperforms an individualized LSTM model.
Adversarial Robustness of Flow-Based Generative Models ; Flow-based generative models leverage invertible generator functions to fit a distribution to the training data using maximum likelihood. Despite their use in several application domains, the robustness of these models to adversarial attacks has hardly been explored. In this paper, we study adversarial robustness of flow-based generative models both theoretically, for some simple models, and empirically, for more complex ones. First, we consider a linear flow-based generative model and compute optimal sample-specific and universal adversarial perturbations that maximally decrease the likelihood scores. Using this result, we study the robustness of the well-known adversarial training procedure, where we characterize the fundamental trade-off between model robustness and accuracy. Next, we empirically study the robustness of two prominent deep, nonlinear, flow-based generative models, namely GLOW and RealNVP. We design two types of adversarial attacks: one that minimizes the likelihood scores of in-distribution samples, and one that maximizes the likelihood scores of out-of-distribution samples. We find that GLOW and RealNVP are extremely sensitive to both types of attacks. Finally, using a hybrid adversarial training procedure, we significantly boost the robustness of these generative models.
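The sketch below shows the generic form of a likelihood-decreasing attack of the kind described above: take signed-gradient steps on a small perturbation so that the flow's log-likelihood of an in-distribution sample drops. It assumes a `flow` object exposing `log_prob`, as common flow implementations do; it is not the paper's exact attack.

```python
import torch

def likelihood_attack(flow, x, eps=0.05, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = flow.log_prob(x + delta).sum()            # quantity the attacker wants to decrease
        loss.backward()
        with torch.no_grad():
            delta -= (eps / steps) * delta.grad.sign()   # signed gradient step down the likelihood
            delta.clamp_(-eps, eps)                       # keep the perturbation small
        delta.grad.zero_()
    return (x + delta).detach()
```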
Knowledge Injection into Dialogue Generation via Language Models ; Dialogue generation has been successfully learned from scratch by neural networks, but tends to produce the same general response, e.g., "what are you talking about?", in many conversations. To reduce this homogeneity, external knowledge such as the speaker's profile and domain knowledge is applied as an additional condition to diversify a model's output. The knowledge required to develop an effective conversation, however, is not always available, which differs from prior work's assumption that a model always has acquired sufficient knowledge before chatting. This problem can be detrimental when applying such a dialogue model to chat online with unconstrained people and topics, because the model does not have the needed knowledge. To address this problem, we propose InjK, a two-stage approach to inject knowledge into a dialogue generation model. First, we train a large-scale language model and query it as textual knowledge. Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response. Empirically, when a dialogue generation model can only access limited knowledge, our method outperforms prior work by producing more coherent and informative responses.
A General 3D Space-Time-Frequency Non-Stationary THz Channel Model for 6G Ultra-Massive MIMO Wireless Communication Systems ; In this paper, a novel three-dimensional (3D) space-time-frequency (STF) non-stationary geometry-based stochastic model (GBSM) is proposed for sixth-generation (6G) terahertz (THz) wireless communication systems. The proposed THz channel model is very general, having the capability to capture different channel characteristics in multiple THz application scenarios such as indoor scenarios, device-to-device (D2D) communications, ultra-massive multiple-input multiple-output (MIMO) communications, and long traveling paths of users. Also, the generality of the proposed channel model is demonstrated by the fact that it can easily be reduced to different simplified channel models to fit specific scenarios by properly adjusting model parameters. The proposed general channel model takes into consideration the non-stationarities in the space, time, and frequency domains caused by ultra-massive MIMO, long traveling paths, and the large bandwidths of THz communications, respectively. Statistical properties of the proposed general THz channel model are investigated. The accuracy and generality of the proposed channel model are verified by comparing the simulation results of the relative angle spread and root mean square (RMS) delay spread with corresponding channel measurements.
Learning an Adaptive Meta Model-Generator for Incrementally Updating Recommender Systems ; Recommender systems (RSs) in real-world applications often deal with billions of user interactions daily. To capture the most recent trends effectively, it is common to update the model incrementally using only the newly arrived data. However, this may impede the model's ability to retain long-term information due to potential overfitting and forgetting issues. To address this problem, we propose a novel Adaptive Sequential Model Generation (ASMG) framework, which generates a better serving model from a sequence of historical models via a meta generator. For the design of the meta generator, we propose to employ Gated Recurrent Units (GRUs) to leverage their ability to capture long-term dependencies. We further introduce some novel strategies to apply together with the GRU meta generator, which not only improve its computational efficiency but also enable more accurate sequential modeling. By instantiating the model-agnostic framework on a general deep-learning-based RS model, we demonstrate that our method achieves state-of-the-art performance on three public datasets and one industrial dataset.
Privacy-preserving Generative Framework Against Membership Inference Attacks ; Artificial intelligence and machine learning have been integrated into all aspects of our lives, and the privacy of personal data has attracted more and more attention. Since building a model requires extracting the effective information of the training data, the model risks leaking the privacy of the training data. Membership inference attacks can measure the model's leakage of source data to a certain degree. In this paper, we design a privacy-preserving generative framework against membership inference attacks, which uses the information extraction and data generation capabilities of a generative model, the variational autoencoder (VAE), to generate synthetic data that meets the needs of differential privacy. Instead of adding noise to the model output or tampering with the training process of the target model, we directly process the original data. We first map the source data to the latent space through the VAE model to get the latent code, then perform a noise process satisfying metric privacy on the latent code, and finally use the VAE model to reconstruct the synthetic data. Our experimental evaluation demonstrates that a machine learning model trained with the newly generated synthetic data can effectively resist membership inference attacks while still maintaining high utility.
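A minimal sketch of the encode-perturb-decode pipeline described above. The `vae.encode`/`vae.decode` interface (returning the latent mean and log-variance) is an assumption, and Laplace noise stands in for the paper's metric-privacy mechanism; calibrating the noise scale to a formal privacy guarantee is out of scope here.

```python
import torch

def privatize(vae, x, scale=1.0):
    with torch.no_grad():
        mu, _ = vae.encode(x)                                           # latent code of the source record
        noisy = mu + torch.distributions.Laplace(0.0, scale).sample(mu.shape)  # perturb the latent code
        return vae.decode(noisy)                                        # synthetic record released downstream
```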
Controlling the Focus of Pretrained Language Generation Models ; The fine-tuning of pretrained transformer-based language generation models is typically conducted in an end-to-end manner, where the model learns to attend to relevant parts of the input by itself. However, there does not exist a mechanism to directly control the model's focus. This work aims to develop a control mechanism by which a user can select spans of context as highlights for the model to focus on, and generate relevant output. To achieve this goal, we augment a pretrained model with trainable focus vectors that are directly applied to the model's embeddings, while the model itself is kept fixed. These vectors, trained on automatic annotations derived from attribution methods, act as indicators for context importance. We test our approach on two core generation tasks: dialogue response generation and abstractive summarization. We also collect evaluation data where the highlight-generation pairs are annotated by humans. Our experiments show that the trained focus vectors are effective in steering the model to generate outputs that are relevant to user-selected highlights.
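A minimal sketch of the focus-vector idea described above (class and argument names are illustrative, and the HuggingFace-style `inputs_embeds` keyword is an assumption): a single trainable vector is added to the embeddings at user-highlighted positions while the pretrained model's own parameters stay frozen.

```python
import torch
import torch.nn as nn

class FocusWrapper(nn.Module):
    def __init__(self, model, hidden_size):
        super().__init__()
        self.model = model
        for p in self.model.parameters():
            p.requires_grad = False                           # keep the pretrained model fixed
        self.focus = nn.Parameter(torch.zeros(hidden_size))   # the only trainable parameters

    def forward(self, inputs_embeds, highlight_mask, **kwargs):
        # highlight_mask: (batch, seq_len) with 1 at user-highlighted positions
        shifted = inputs_embeds + highlight_mask.unsqueeze(-1) * self.focus
        return self.model(inputs_embeds=shifted, **kwargs)
```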
On Provable Copyright Protection for Generative Models ; There is a growing concern that learned conditional generative models may output samples that are substantially similar to some copyrighted data C that was in their training set. We give a formal definition of near access-freeness (NAF) and prove bounds on the probability that a model satisfying this definition outputs a sample similar to C, even if C is included in its training set. Roughly speaking, a generative model p is k-NAF if for every potentially copyrighted data C, the output of p diverges by at most k bits from the output of a model q that did not access C at all. We also give generative model learning algorithms, which efficiently modify the original generative model learning algorithm in a black-box manner, that output generative models with strong bounds on the probability of sampling protected content. Furthermore, we provide promising experiments for both language transformers and image diffusion generative models, showing minimal degradation in output quality while ensuring strong protections against sampling protected content.
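One way to write the k-NAF condition paraphrased above, using a worst-case log-ratio measured in bits; the notation is ours, and the paper's formal definition may differ in its exact choice of divergence:

```latex
\[
  \forall C:\qquad
  D_{\max}\!\left(p \,\middle\|\, q_C\right)
  \;=\; \sup_{y}\, \log_2 \frac{p(y)}{q_C(y)}
  \;\le\; k,
\]
% where q_C denotes a model trained without access to the copyrighted datum C.
```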
TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models ; Factual consistency evaluation is often conducted using Natural Language Inference (NLI) models, yet these models exhibit limited success in evaluating summaries. Previous work improved such models with synthetic training data. However, the data is typically based on perturbed human-written summaries, which often differ in their characteristics from real model-generated summaries and have limited coverage of possible factual errors. Alternatively, large language models (LLMs) have recently shown promising results in directly evaluating generative tasks, but are too computationally expensive for practical use. Motivated by these limitations, we introduce TrueTeacher, a method for generating synthetic data by annotating diverse model-generated summaries using an LLM. Unlike prior work, TrueTeacher does not rely on human-written summaries, and is multilingual by nature. Experiments on the TRUE benchmark show that a student model trained using our data substantially outperforms both the state-of-the-art model with similar capacity and the LLM teacher. In a systematic study, we compare TrueTeacher to existing synthetic data generation methods and demonstrate its superiority and robustness to domain shift. Using the mFACE dataset, we also show that our method generalizes to multilingual scenarios. Finally, we release a large-scale synthetic dataset with 1.4M examples generated using TrueTeacher.
Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models ; Zero-shot referring image segmentation is a challenging task because it aims to find an instance segmentation mask based on the given referring descriptions, without training on this type of paired data. Current zero-shot methods mainly focus on using pretrained discriminative models (e.g., CLIP). However, we have observed that generative models (e.g., Stable Diffusion) have potentially understood the relationships between various visual elements and text descriptions, which are rarely investigated in this task. In this work, we introduce a novel Referring Diffusional segmentor (Ref-Diff) for this task, which leverages the fine-grained multi-modal information from generative models. We demonstrate that, without a proposal generator, a generative model alone can achieve comparable performance to existing SOTA weakly-supervised models. When we combine both generative and discriminative models, our Ref-Diff outperforms these competing methods by a significant margin. This indicates that generative models are also beneficial for this task and can complement discriminative models for better referring segmentation. Our code is publicly available at https://github.com/kodenii/Ref-Diff.
Understanding the Properties of Generated Corpora ; Models for text generation have become focal for many research tasks and especially for the generation of sentence corpora. However, understanding the properties of an automatically generated text corpus remains challenging. We propose a set of tools that examine the properties of generated text corpora. Applying these tools on various generated corpora allowed us to gain new insights into the properties of the generative models. As part of our characterization process, we found remarkable differences in the corpora generated by two leading generative technologies.
Generalized Metrical Multi-Time Lagrange Model for General Relativity and Electromagnetism ; The paper constructs a suitable generalized metrical multi-time Lagrange geometrical model for both gravitational and electromagnetic fields, in a general setting. In this construction, the gravitational potentials are described by a distinguished vertical metrical tensor of the form h_{αβ} e^{2σ} φ_{ij}.
An example of a non-cofibrantly generated model category ; We show that the model category of diagrams of spaces generated by a proper class of orbits is not cofibrantly generated. In particular, the category of maps between spaces may be given a non-cofibrantly generated model structure.
Generalized Whittaker functions for degenerate principal series of GL(4,R) ; We give a characterization of a generalized Whittaker model of a degenerate principal series representation of GL(n,R) as the kernel of some differential operators. By this characterization, we investigate some examples on GL(4,R). We obtain the dimensions of the generalized Whittaker models and give their bases in terms of hypergeometric functions of one and two variables. We show the multiplicity one of the generalized Whittaker models by using the theory of hypergeometric functions.
A novel repetition normalized adversarial reward for headline generation ; While reinforcement learning can effectively improve language generation models, it often suffers from generating incoherent and repetitive phrases (Paulus et al., 2017). In this paper, we propose a novel repetition normalized adversarial reward to mitigate these problems. Our repetition penalized reward can greatly reduce the repetition rate, and adversarial training mitigates generating incoherent phrases. Our model significantly outperforms the baseline model on ROUGE-1 (3.24) and ROUGE-L (2.25), and achieves a decreased repetition rate (4.98).
Are all cofibrantly generated model categories combinatorial? ; G. Raptis has recently proved that, assuming Vopěnka's principle, every cofibrantly generated model category is Quillen equivalent to a combinatorial one. His result remains true for a slightly more general concept of a cofibrantly generated model category. We show that Vopěnka's principle is equivalent to this claim. The set-theoretical status of Raptis' original result is open.
General Manipulability Theorem for a Matching Model ; In a many-to-many matching model in which agents' preferences satisfy substitutability and the law of aggregate demand, we prove the General Manipulability Theorem. Our result generalizes those presented in Sotomayor (1996, 2012) for the many-to-one model. In addition, we show that the General Manipulability Theorem fails when agents' preferences satisfy only substitutability.
Melody-conditioned lyrics generation via fine-tuning language model and its evaluation with ChatGPT ; We leverage character-level language models for syllable-level lyrics generation from symbolic melody. By fine-tuning a character-level pretrained model, we integrate language knowledge into the beam search of a syllable-level Transformer generator. Using ChatGPT-based evaluations, we demonstrate enhanced coherence and correctness in the generated lyrics.
Generation High resolution 3D model from natural language by Generative Adversarial Network ; We present a method of generating high-resolution 3D shapes from natural language descriptions. To achieve this goal, we propose two steps: generating low-resolution shapes which roughly reflect the texts, and generating high-resolution shapes which reflect the details of the texts. In a previous paper, the authors have shown a method of generating low-resolution shapes. We improve it to generate 3D shapes more faithful to natural language and test the effectiveness of the method. To generate high-resolution 3D shapes, we use the framework of the Conditional Wasserstein GAN. We propose two separate roles for the Critic, which calculates the Wasserstein distance between two probability distributions, so that we achieve either the generation of higher-quality shapes or acceleration of the model's learning speed. To evaluate our approach, we performed quantitative evaluation with several numerical metrics for the Critic models. Our method is the first to realize the generation of high-quality models by propagating text embedding information to the high-resolution task when generating 3D models.
WriterForcing: Generating more interesting story endings ; We study the problem of generating interesting endings for stories. Neural generative models have shown promising results for various text generation problems. Sequence-to-Sequence (Seq2Seq) models are typically trained to generate a single output sequence for a given input sequence. However, in the context of a story, multiple endings are possible. Seq2Seq models tend to ignore the context and generate generic and dull responses. Very few works have studied generating diverse and interesting story endings for a given story context. In this paper, we propose models which generate more diverse and interesting outputs by (1) training models to focus attention on important keyphrases of the story, and (2) promoting the generation of non-generic words. We show that the combination of the two leads to more diverse and interesting endings.
CatVRNN: Generating Category Texts via Multi-task Learning ; Controlling a model to generate texts of different categories is a challenging task that is receiving increasing attention. Recently, generative adversarial networks (GANs) have shown promising results for category text generation. However, the texts generated by GANs usually suffer from problems of mode collapse and training instability. To avoid these problems, in this study, inspired by multi-task learning, a novel model called category-aware variational recurrent neural network (CatVRNN) is proposed. In this model, generation and classification tasks are trained simultaneously to generate texts of different categories. The use of multi-task learning can improve the quality of the generated texts when the classification task is appropriate. In addition, a function is proposed to initialize the hidden state of the CatVRNN to force the model to generate texts of a specific category. Experimental results on three datasets demonstrate that the model can outperform state-of-the-art GAN-based text generation methods in terms of the diversity of generated texts.
Long-range Prediction of Vital Signs Using Generative Boosting via LSTM Networks ; Vital signs, including heart rate, respiratory rate, body temperature and blood pressure, are critical in the clinical decision making process. Effective early prediction of vital signs helps to alert medical practitioners ahead of time and may prevent adverse health outcomes. In this paper, we suggest a new approach called generative boosting, in order to effectively perform early prediction of vital signs. Generative boosting consists of a generative model, to generate synthetic data for the next few time steps, and several predictive models, to directly make long-range predictions based on observed and generated data. We explore generative boosting via long short-term memory (LSTM) networks for both the predictive and generative models, leading to a scheme called generative LSTM (GLSTM). Our experiments indicate that GLSTM outperforms a diverse range of strong benchmark models, with and without generative boosting. Finally, we use a mutual-information-based clustering algorithm to select a more representative dataset to train the generative model of GLSTM. This significantly improves the long-range predictive performance for high-variation vital signs such as heart rate and systolic blood pressure.
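A minimal sketch of generative boosting as described above: a generative LSTM rolls the vital-sign window forward a few synthetic steps, and a predictive model then makes the long-range prediction from the observed plus generated data. The model interfaces, tensor shapes, and number of synthetic steps are placeholders, not the paper's configuration.

```python
import torch

def generative_boosting_predict(gen_lstm, predictor, observed, n_synthetic=3):
    # observed: (batch, time, features); gen_lstm is assumed to return a tensor of the same layout
    window = observed
    for _ in range(n_synthetic):
        next_step = gen_lstm(window)[:, -1:, :]          # generate one synthetic future step
        window = torch.cat([window, next_step], dim=1)   # append it to the input window
    return predictor(window)                             # long-range prediction from observed + generated data
```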
SHADOWCAST: Controllable Graph Generation ; We introduce the controllable graph generation problem, formulated as controlling graph attributes during the generative process to produce desired graphs with understandable structures. Using a transparent and straightforward Markov model to guide this generative process, practitioners can shape and understand the generated graphs. We propose SHADOWCAST, a generative model capable of controlling graph generation while retaining the original graph's intrinsic properties. The proposed model is based on a conditional generative adversarial network. Given an observed graph and some user-specified Markov model parameters, SHADOWCAST controls the conditions to generate desired graphs. Comprehensive experiments on three real-world network datasets demonstrate our model's competitive performance in the graph generation task. Furthermore, we show its effective controllability by directing SHADOWCAST to generate hypothetical scenarios with different graph structures.
Asking Questions Like Educational Experts: Automatically Generating Question-Answer Pairs on Real-World Examination Data ; Generating high-quality question-answer pairs is a hard but meaningful task. Although previous works have achieved great results on answer-aware question generation, it is difficult to apply them to practical applications in the education field. This paper for the first time addresses the question-answer pair generation task on real-world examination data, and proposes a new unified framework on RACE. To capture the important information of the input passage, we first automatically generate, rather than extract, keyphrases, thus reducing this task to keyphrase-question-answer triplet joint generation. Accordingly, we propose a multi-agent communication model to generate and optimize the question and keyphrases iteratively, and then apply the generated question and keyphrases to guide the generation of answers. To establish a solid benchmark, we build our model on a strong generative pre-training model. Experimental results show that our model makes great breakthroughs in the question-answer pair generation task. Moreover, we make a comprehensive analysis of our model, suggesting new directions for this challenging task.
Recursive Decoding: A Situated Cognition Approach to Compositional Generation in Grounded Language Understanding ; Compositional generalization is a troubling blind spot for neural language models. Recent efforts have presented techniques for improving a model's ability to encode novel combinations of known inputs, but less work has focused on generating novel combinations of known outputs. Here we focus on this latter, decode-side form of generalization in the context of gSCAN, a synthetic benchmark for compositional generalization in grounded language understanding. We present Recursive Decoding (RD), a novel procedure for training and using seq2seq models, targeted towards decode-side generalization. Rather than generating an entire output sequence in one pass, models are trained to predict one token at a time. Inputs (i.e., the external gSCAN environment) are then incrementally updated based on predicted tokens, and re-encoded for the next decoder time step. RD thus decomposes a complex, out-of-distribution sequence generation task into a series of incremental predictions that each resemble what the model has already seen during training. RD yields dramatic improvement on two previously neglected generalization tasks in gSCAN. We provide analyses to elucidate these gains over the failure of a baseline, and then discuss implications for generalization in naturalistic grounded language understanding, and for seq2seq more generally.
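A minimal sketch of the Recursive Decoding loop described above, with placeholder functions for encoding and for applying a predicted token to the gSCAN-style environment: predict one token, update the world, re-encode, repeat.

```python
def recursive_decode(model, encode, apply_token, env, command, max_steps=50, eos="<eos>"):
    output = []
    for _ in range(max_steps):
        state = encode(env, command)        # re-encode the *updated* environment at every step
        token = model.predict_next(state)   # a single-token prediction (hypothetical interface)
        if token == eos:
            break
        output.append(token)
        env = apply_token(env, token)       # incremental world update based on the prediction
    return output
```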
Controllable Text Generation for Open-Domain Creativity and Fairness ; Recent advances in large pretrained language models have demonstrated strong results in generating natural languages and significantly improved performance for many natural language generation (NLG) applications such as machine translation and text summarization. However, when the generation tasks are more open-ended and the content is under-specified, existing techniques struggle to generate long-term coherent and creative content. Moreover, the models exhibit and even amplify social biases that are learned from the training corpora. This happens because the generation models are trained to capture surface patterns (i.e., sequences of words), instead of capturing underlying semantics and discourse structures, as well as background knowledge including social norms. In this paper, I introduce our recent works on controllable text generation to enhance the creativity and fairness of language generation models. We explore hierarchical generation and constrained decoding, with applications to creative language generation including story, poetry, and figurative languages, and to bias mitigation for generation models.
KD-DLGAN: Data Limited Image Generation via Knowledge Distillation ; Generative Adversarial Networks (GANs) rely heavily on large-scale training data for training high-quality image generation models. With limited training data, the GAN discriminator often suffers from severe overfitting, which directly leads to degraded generation, especially in generation diversity. Inspired by recent advances in knowledge distillation (KD), we propose KD-DLGAN, a knowledge-distillation-based generation framework that introduces pretrained vision-language models for training effective data-limited generation models. KD-DLGAN consists of two innovative designs. The first is aggregated generative KD, which mitigates discriminator overfitting by challenging the discriminator with harder learning tasks and distilling more generalizable knowledge from the pretrained models. The second is correlated generative KD, which improves generation diversity by distilling and preserving the diverse image-text correlation within the pretrained models. Extensive experiments over multiple benchmarks show that KD-DLGAN achieves superior image generation with limited training data. In addition, KD-DLGAN complements the state-of-the-art with consistent and substantial performance gains.
GD-VDM: Generated Depth for better Diffusion-based Video Generation ; The field of generative models has recently witnessed significant progress, with diffusion models showing remarkable performance in image generation. In light of this success, there is a growing interest in exploring the application of diffusion models to other modalities. One such challenge is the generation of coherent videos of complex scenes, which poses several technical difficulties, such as capturing temporal dependencies and generating long, high-resolution videos. This paper proposes GD-VDM, a novel diffusion model for video generation, demonstrating promising results. GD-VDM is based on a two-phase generation process involving generating depth videos followed by a novel diffusion Vid2Vid model that generates a coherent real-world video. We evaluated GD-VDM on the Cityscapes dataset and found that it generates more diverse and complex scenes compared to natural baselines, demonstrating the efficacy of our approach.
Trapezoidal Generalization over Linear Constraints ; We are developing a model-based fuzzing framework that employs mathematical models of system behavior to guide the fuzzing process. Whereas traditional fuzzing frameworks generate tests randomly, a model-based framework can deduce tests from a behavioral model using a constraint solver. Because the state space being explored by the fuzzer is often large, the rapid generation of test vectors is crucial. The need to generate tests quickly, however, is antithetical to the use of a constraint solver. Our solution to this problem is to use the constraint solver to generate an initial solution, to generalize that solution relative to the system model, and then to perform rapid, repeated, randomized sampling of the generalized solution space to generate fuzzing tests. Crucial to the success of this endeavor is a generalization procedure with reasonable size and performance costs that produces generalized solution spaces that can be sampled efficiently. This paper describes a generalization technique for logical formulae expressed in terms of Boolean combinations of linear constraints that meets the unique performance requirements of model-based fuzzing. The technique represents generalizations using trapezoidal solution sets consisting of ordered, hierarchical conjunctions of linear constraints that are more expressive than simple intervals but are more efficient to manipulate and sample than generic polytopes. Supporting materials contain an ACL2 proof that verifies the correctness of a low-level implementation of the generalization algorithm against a specification of generalization correctness. Finally, a post-processing procedure is described that results in a restricted trapezoidal solution that can be sampled (solved) rapidly and efficiently without backtracking, even for integer domains. While informal correctness arguments are provided, a formal proof of the correctness of the restriction algorithm remains as future work.
ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation ; Conventional methods for the image-text generation tasks mainly tackle the naturally bidirectional generation tasks separately, focusing on designing task-specific frameworks to improve the quality and fidelity of the generated samples. Recently, Vision-Language Pre-training models have greatly improved the performance of image-to-text generation tasks, but large-scale pre-training models for the text-to-image synthesis task are still under-developed. In this paper, we propose ERNIE-ViLG, a unified generative pre-training framework for bidirectional image-text generation with a transformer model. Based on image quantization models, we formulate both image generation and text generation as autoregressive generative tasks conditioned on the text/image input. The bidirectional image-text generative modeling eases the semantic alignment across vision and language. For the text-to-image generation process, we further propose an end-to-end training method to jointly learn the visual sequence generator and the image reconstructor. To explore the landscape of large-scale pre-training for bidirectional text-image generation, we train a 10-billion-parameter ERNIE-ViLG model on a large-scale dataset of 145 million Chinese image-text pairs, which achieves state-of-the-art performance for both text-to-image and image-to-text tasks, obtaining an FID of 7.9 on MS-COCO for text-to-image synthesis and best results on COCO-CN and AIC-ICC for image captioning.
The Good, the Bad, and the Missing: Neural Code Generation for Machine Learning Tasks ; Machine learning (ML) has been increasingly used in a variety of domains, while solving ML programming tasks poses unique challenges because of their fundamentally different nature and construction from general programming tasks, especially for developers who do not have ML backgrounds. Automatic code generation that produces a code snippet from a natural language description can be a promising technique to accelerate ML programming tasks. In recent years, although many deep-learning-based neural code generation models have been proposed with high accuracy, the fact that most of them are mainly evaluated on general programming tasks calls into question their effectiveness and usefulness in ML programming tasks. In this paper, we set out to investigate the effectiveness of existing neural code generation models on ML programming tasks. For our analysis, we select six state-of-the-art neural code generation models and evaluate their performance on four widely used ML libraries, with newly created 83K pairs of natural-language-described ML programming tasks. Our empirical study reveals some good, bad, and missing aspects of neural code generation models on ML tasks, with a few major ones listed below. Good: neural code generation models perform significantly better on ML tasks than on non-ML tasks. Bad: most of the generated code is semantically incorrect. Bad: code generation models cannot significantly improve developers' completion time. Good: the generated code can help developers write more correct code by providing developers with clues for using correct APIs. Missing: the observation from our user study reveals the missing aspects of code generation for ML tasks, e.g., decomposing code generation for divide-and-conquer into two tasks, API sequence identification and API usage generation.
Bayesian filtering for multi-object systems with independently generated observations ; A general approach for Bayesian filtering of multi-object systems is studied, with particular emphasis on the model where each object generates observations independently of other objects. The approach is based on variational calculus applied to generating functionals, using the general version of Faà di Bruno's formula for Gâteaux differentials. This result enables us to determine some general formulae for the updated generating functional after the application of a multi-object analogue of Bayes' rule.
Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing ; Despite their strong performance on many tasks, pretrained language models have been shown to struggle on out-of-distribution compositional generalization. Meanwhile, recent work has shown considerable improvements on many NLP tasks from model scaling. Can scaling up model size also improve compositional generalization in semantic parsing? We evaluate encoder-decoder models up to 11B parameters and decoder-only models up to 540B parameters, and compare model scaling curves for three different methods for applying a pretrained language model to a new task: fine-tuning all parameters, prompt tuning, and in-context learning. We observe that fine-tuning generally has flat or negative scaling curves on out-of-distribution compositional generalization in semantic parsing evaluations. In-context learning has positive scaling curves, but is generally outperformed by much smaller fine-tuned models. Prompt tuning can outperform fine-tuning, suggesting further potential improvements from scaling as it exhibits a more positive scaling curve. Additionally, we identify several error trends that vary with model scale. For example, larger models are generally better at modeling the syntax of the output space, but are also more prone to certain types of overfitting. Overall, our study highlights limitations of current techniques for effectively leveraging model scale for compositional generalization, while our analysis also suggests promising directions for future work.
Generating Gowdy cosmological models ; Using the analogy with stationary axisymmetric solutions, we present a method to generate new analytic cosmological solutions of Einstein's equation belonging to the class of T³ Gowdy cosmological models. We show that the solutions can be generated from their data at the initial singularity and present the formal general solution for arbitrary initial data. We exemplify the method by constructing the Kantowski-Sachs cosmological model and a generalization of it that corresponds to an unpolarized T³ Gowdy model.
Generalized Poisson sigma models ; A general master action in terms of superfields is given which generates generalized Poisson sigma models by means of a natural ghost number prescription. The simplest representation is the sigma model considered by Cattaneo and Felder. For Dirac brackets considerably more general models are generated.
Introspective Generative Modeling: Decide Discriminatively ; We study unsupervised learning by developing introspective generative modeling (IGM), which attains a generator using progressively learned deep convolutional neural networks. The generator is itself a discriminator, capable of introspection: it is able to self-evaluate the difference between its generated samples and the given training data. When followed by repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. IGM learns a cascade of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transferring, face modeling, and semi-supervised learning.
Generator Reversal ; We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we propose instead to use more flexible code distributions. These distributions are estimated nonparametrically by reversing the generator map during training. The benefits include more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.
MultiTask Learning of Generation and Classification for EmotionAware Dialogue Response Generation ; For a computer to naturally interact with a human, it needs to be humanlike. In this paper, we propose a neural response generation model with multitask learning of generation and classification, focusing on emotion. Our model based on BART Lewis et al., 2020, a pretrained transformer encoderdecoder model, is trained to generate responses and recognize emotions simultaneously. Furthermore, we weight the losses for the tasks to control the update of parameters. Automatic evaluations and crowdsourced manual evaluations show that the proposed model makes generated responses more emotionally aware.
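As a rough illustration of the weighted multi-task objective described in the abstract above, the sketch below combines a generation loss and an emotion-classification loss with hand-set weights; the tensors and the weights w_gen and w_cls are placeholders, not the authors' BART-based setup.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the decoder logits and the emotion-classifier logits; in the
# paper these come from a shared BART encoder-decoder, which is not reproduced here.
vocab_size, num_emotions = 100, 6
decoder_logits = torch.randn(4, 10, vocab_size, requires_grad=True)   # batch x seq_len x vocab
target_tokens = torch.randint(0, vocab_size, (4, 10))                 # gold response tokens
emotion_logits = torch.randn(4, num_emotions, requires_grad=True)     # classifier head output
emotion_labels = torch.randint(0, num_emotions, (4,))

gen_loss = nn.CrossEntropyLoss()(decoder_logits.reshape(-1, vocab_size),
                                 target_tokens.reshape(-1))
cls_loss = nn.CrossEntropyLoss()(emotion_logits, emotion_labels)

# Weight the task losses to control how strongly each one updates the shared parameters.
w_gen, w_cls = 1.0, 0.5          # hypothetical weights
total_loss = w_gen * gen_loss + w_cls * cls_loss
total_loss.backward()            # gradients flow to whatever produced the logits
print(gen_loss.item(), cls_loss.item(), total_loss.item())
```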
Deep Generative Modeling with Backward Stochastic Differential Equations ; This paper proposes a novel deep generative model, called BSDEGen, which combines the flexibility of backward stochastic differential equations BSDEs with the power of deep neural networks for generating highdimensional complex target data, particularly in the field of image generation. The incorporation of stochasticity and uncertainty in the generative modeling process makes BSDEGen an effective and natural approach for generating highdimensional data. The paper provides a theoretical framework for BSDEGen, describes its model architecture, presents the maximum mean discrepancy MMD loss function used for training, and reports experimental results.
A brief review of supersymmetric nonlinear sigma models and generalized complex geometry ; This is a review of the relation between supersymmetric nonlinear sigma models and target space geometry. In particular, we report on the derivation of generalized Kahler geometry from sigma models with additional spinorial superfields. Some of the results reviewed are: Generalized complex geometry from sigma models in the Lagrangian formulation; Coordinatization of generalized Kahler geometry in terms of chiral, twisted chiral and semichiral superfields; Generalized Kahler geometry from sigma models in the Hamiltonian formulation.
Dynamics of homogeneous scalar fields with general selfinteraction potentials cosmological and gravitational collapse models ; The general relativistic dynamics of a wide class of selfinteracting, selfgravitating homogeneous scalar field models is analyzed. The class is characterized by certain general conditions on the scalar field potential, which include both asymptotically polynomial and exponential behaviors. Within this class, we show that the generic evolution is always divergent in a finite time, and then make use of this result to construct cosmological models as well as radiating collapsing star models of the Vaidya type. It turns out that black holes are generically formed in such models.
Multipleevent probability in generalrelativistic quantum mechanics a discrete model ; We introduce a simple quantum mechanical model in which time and space are discrete and periodic. These features avoid the complications related to continuousspectrum operators and infinitenorm states. The model provides a tool for discussing the probabilistic interpretation of generallycovariant quantum systems, without the confusion generated by spurious infinities. We use the model to illustrate the formalism of generalrelativistic quantum mechanics, and to test the definition of multipleevent probability introduced in a companion paper. We consider a version of the model with unitary timeevolution and a version without unitary timeevolution.
Identifying Generalization Properties in Neural Networks ; While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian. We connect model generalization with the local property of a solution under the PACBayes paradigm. In particular, we prove that model generalization ability is related to the Hessian, the higherorder smoothness terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters. Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly.
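The abstract above relates generalization to local curvature; the sketch below computes a common perturbation-based sharpness proxy (average loss increase under random parameter noise). This is only loosely related to the paper's PAC-Bayes metric and is given for intuition; sigma and the toy model are arbitrary choices.

```python
import copy
import torch
import torch.nn as nn

# Rough sharpness proxy: how much does the loss rise when parameters are jittered?
# (Illustrative only; the paper derives a PAC-Bayes metric involving the Hessian,
# its Lipschitz constant, and parameter scales, not this exact quantity.)
def sharpness_proxy(model, loss_fn, x, y, sigma=0.01, n_samples=10):
    base_loss = loss_fn(model(x), y).item()
    increases = []
    for _ in range(n_samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            for p in perturbed.parameters():
                p.add_(sigma * torch.randn_like(p))   # random parameter perturbation
        increases.append(loss_fn(perturbed(x), y).item() - base_loss)
    return sum(increases) / n_samples

model = nn.Linear(5, 2)
x, y = torch.randn(32, 5), torch.randint(0, 2, (32,))
print(sharpness_proxy(model, nn.CrossEntropyLoss(), x, y))
```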
Entropyregularized Optimal Transport Generative Models ; We investigate the use of entropyregularized optimal transport EOT cost in developing generative models to learn implicit distributions. Two generative models are proposed. One uses EOT cost directly in a oneshot optimization problem and the other uses EOT cost iteratively in an adversarial game. The proposed generative models show improved performance over contemporary models for image generation on MNIST.
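To make the entropy-regularized optimal transport cost concrete, here is a minimal Sinkhorn iteration computing an approximate EOT cost between two empirical distributions; this is the standard construction, not the paper's full generative training loop, and eps and the iteration count are arbitrary.

```python
import numpy as np

# Entropy-regularized OT cost between two discrete distributions via Sinkhorn iterations.
def sinkhorn_cost(a, b, C, eps=0.1, n_iters=200):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)      # approximate optimal coupling
    return np.sum(P * C)

x = np.random.randn(50, 2)               # "generated" samples
y = np.random.randn(60, 2) + 1.0         # "data" samples
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost matrix
a = np.full(50, 1 / 50)
b = np.full(60, 1 / 60)
print(sinkhorn_cost(a, b, C, eps=0.05))
```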
Divide and Generate Neural Generation of Complex Sentences ; We propose a task to generate a complex sentence from a simple sentence in order to amplify various kinds of responses in the database. We first divide a complex sentence into a main clause and a subordinate clause to learn a generator model of modifiers, and then use the model to generate a modifier clause to create a complex sentence from a simple sentence. We present an automatic evaluation metric to estimate the quality of the models and show that a pipeline model outperforms an endtoend model.
Flexible Prior Distributions for Deep Generative Models ; We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we argue that it might be advantageous to use more flexible code distributions. We demonstrate how these distributions can be induced directly from the data. The benefits include more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.
Learning the Base Distribution in Implicit Generative Models ; Popular generative model learning methods such as Generative Adversarial Networks GANs, and Variational Autoencoders VAE enforce the latent representation to follow simple distributions such as isotropic Gaussian. In this paper, we argue that learning a complicated distribution over the latent space of an autoencoder enables more accurate modeling of complicated data distributions. Based on this observation, we propose a two stage optimization procedure which maximizes an approximate implicit density model. We experimentally verify that our method outperforms GANs and VAEs on two image datasets MNIST, CELEBA. We also show that our approach is amenable to learning generative model for sequential data, by learning to generate speech and music.
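A minimal sketch of the two-stage idea described above: train an autoencoder, then fit a more flexible distribution over its latent codes and sample from that instead of an isotropic Gaussian. A Gaussian mixture stands in for the paper's approximate implicit density model, and the tiny networks and random data are placeholders.

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# Stage 1: a tiny autoencoder (placeholder for the paper's architecture and data).
enc = nn.Sequential(nn.Linear(10, 2))
dec = nn.Sequential(nn.Linear(2, 10))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
data = torch.randn(500, 10)
for _ in range(200):
    opt.zero_grad()
    loss = ((dec(enc(data)) - data) ** 2).mean()   # reconstruction loss
    loss.backward()
    opt.step()

# Stage 2: learn a flexible base distribution over the latent codes instead of
# assuming an isotropic Gaussian, then sample new codes and decode them.
codes = enc(data).detach().numpy()
gmm = GaussianMixture(n_components=5).fit(codes)
new_codes, _ = gmm.sample(16)
samples = dec(torch.tensor(new_codes, dtype=torch.float32))
print(samples.shape)   # torch.Size([16, 10])
```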
Deformed general relativity and scalartensor models ; We calculate the most general action for a scalartensor model up to quadratic order in derivatives with deformed general covariance and nonminimal coupling. We demonstrate how different choices of the free functions recover specific well known scalartensor models. We look at the cosmological dynamics and find the general conditions for either inflation or a big bounce. Using this we present a novel nonminimally coupled scalar model which produces a bounce, and describe how to find similar models.
Latent Variable Modeling for Generative Concept Representations and Deep Generative Models ; Latent representations are the essence of deep generative models and determine their usefulness and power. For latent representations to be useful as generative concept representations, their latent space must support latent space interpolation, attribute vectors and concept vectors, among other things. We investigate and discuss latent variable modeling, including latent variable models, latent representations and latent spaces, particularly hierarchical latent representations and latent space vectors and geometry. Our focus is on that used in variational autoencoders and generative adversarial networks.
GoalEmbedded Dual Hierarchical Model for TaskOriented Dialogue Generation ; Hierarchical neural networks are often used to model inherent structures within dialogues. For goaloriented dialogues, these models miss a mechanism adhering to the goals and neglect the distinct conversational patterns between two interlocutors. In this work, we propose GoalEmbedded Dual Hierarchical Attentional EncoderDecoder GDuHA able to center around goals and capture interlocutorlevel disparity while modeling goaloriented dialogues. Experiments on dialogue generation, response generation, and human evaluations demonstrate that the proposed model successfully generates higherquality, more diverse and goalcentric dialogues. Moreover, we apply data augmentation via goaloriented dialogue generation for taskoriented dialog systems with better performance achieved.
Sequence Modeling with Unconstrained Generation Order ; The dominant approach to sequence generation is to produce a sequence in some predefined order, e.g. left to right. In contrast, we propose a more general model that can generate the output sequence by inserting tokens in any arbitrary order. Our model learns decoding order as a result of its training procedure. Our experiments show that this model is superior to fixed order models on a number of sequence generation tasks, such as Machine Translation, ImagetoLaTeX and Image Captioning.
Statistical guarantees for generative models without domination ; In this paper, we introduce a convenient framework for studying adversarial generative models from a statistical perspective. It consists in modeling the generative device as a smooth transformation of the unit hypercube of a dimension that is much smaller than that of the ambient space and measuring the quality of the generative model by means of an integral probability metric. In the particular case of integral probability metric defined through a smoothness class, we establish a risk bound quantifying the role of various parameters. In particular, it clearly shows the impact of dimension reduction on the error of the generative model.
Deep Generative Models for Drug Design and Response ; Designing new chemical compounds with desired pharmaceutical properties is a challenging task and takes years of development and testing. Still, a majority of new drugs fail to prove efficient. Recent success of deep generative modeling holds promises of generation and optimization of new molecules. In this review paper, we provide an overview of the current generative models, and describe necessary biological and chemical terminology, including molecular representations needed to understand the field of drug design and drug response. We present commonly used chemical and biological databases, and tools for generative modeling. Finally, we summarize the current state of generative modeling for drug design and drug response prediction, highlighting the stateoftheart approaches and limitations the field is currently facing.
Conformal Generative Modeling on Triangulated Surfaces ; We propose conformal generative modeling, a framework for generative modeling on 2D surfaces approximated by discrete triangle meshes. Our approach leverages advances in discrete conformal geometry to develop a map from a source triangle mesh to a target triangle mesh of a simple manifold such as a sphere. After accounting for errors due to the mesh discretization, we can use any generative modeling approach developed for simple manifolds as a plugandplay subroutine. We demonstrate our framework on multiple complicated manifolds and multiple generative modeling subroutines, where we show that our approach can learn good estimates of distributions on meshes from samples, and can also learn simultaneously from multiple distinct meshes of the same underlying manifold.
Generalized generalized linear models Convex estimation and online bounds ; We introduce a new computational framework for estimating parameters in generalized generalized linear models GGLM, a class of models that extends the popular generalized linear models GLM to account for dependencies among observations in spatiotemporal data. The proposed approach uses a monotone operatorbased variational inequality method to overcome nonconvexity in parameter estimation and provide guarantees for parameter recovery. The results can be applied to GLM and GGLM, focusing on spatiotemporal models. We also present online instancebased bounds using martingale concentration inequalities. Finally, we demonstrate the performance of the algorithm using numerical simulations and a real data example for wildfire incidents.
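For intuition about the estimation problem GGLM generalizes, the sketch below fits a plain Poisson GLM by iterating on its score operator, theta <- theta - eta * F(theta); with the canonical log link this reduces to gradient descent on a convex negative log-likelihood. It is not the paper's variational-inequality solver and ignores spatio-temporal dependence.

```python
import numpy as np

# Plain Poisson GLM estimated by iterating on the score-type operator F(theta).
# The GGLM paper replaces F with a more general monotone operator for
# spatio-temporal data; this is only the classical special case.
rng = np.random.default_rng(0)
n, d = 2000, 3
X = rng.normal(size=(n, d))
theta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ theta_true))

theta = np.zeros(d)
eta = 1e-4
for _ in range(5000):
    F = X.T @ (np.exp(X @ theta) - y)    # score of the Poisson log-likelihood
    theta -= eta * F
print(theta)   # should approach theta_true
```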
Generalization of nonlinear Murnaghan elastic model for viscoelastic materials ; This paper presents a generalization of the Murnaghan elastic material to viscoelastic behavior using the GreenRivlin multipleintegral approach. In the linear limit, the model coincides with the generalized Maxwell model. To create a nonlinear generalization, all possible secondorder corrections were included in the constitutive equations written in the internal strains representation. Using this approach, we obtained expressions for the time and frequencydependent nonlinear dynamic moduli. We applied the developed nonlinear viscoelastic model to the description of infinitesimal strain waves superposed on finite prestrain. Furthermore, we considered the generation of higher harmonics by the nonlinear interaction of two strain waves, which we showed can provide a method to measure all viscoelastic constants of the developed model.
One generalization of the Dicketype models ; We discuss one family of possible generalizations of the JaynesCummings and the TavisCummings models using the technique of algebraic Bethe ansatz related to the Gaudintype models. In particular, we present a family of generically nonHermitian Hamiltonians that generalize paradigmatic quantumoptical models. Further directions of our research include studying physical properties of the obtained generalized models.
Finiteness of a spinfoam model for euclidean quantum general relativity ; We prove that a certain spinfoam model for euclidean quantum general relativity, recently defined, is finite: all its Feynman diagrams converge. The model is a variant of the BarrettCrane model, and is defined in terms of a field theory over SO4 X SO4 X SO4 X SO4.
Some properties of metastable supersymmetrybreaking vacua in WessZumino models ; As a contribution to the current efforts to understand supersymmetrybreaking by metastable vacua, we study general properties of supersymmetrybreaking vacua in WessZumino models we show that treelevel degeneracy is generic, explore some constraints on the couplings and present a simple model with a longlived metastable vacuum, ending with some generalizations to nonrenormalizable models.
Results on modelling and products of singularities in Colombeau algebra GR ; Modelling of singularities given by discontinuous functions or distributions by means of generalized functions has proved useful in many problems posed by physical phenomena. We introduce in a systematic way generalized functions of Colombeau that model such singularities. Moreover, we evaluate some products of singularitymodelling generalized functions whenever the result admits an associated distribution.
On Generalized Rectangular Fuzzy Model for Assessment ; The article is dedicated to the analysis of the existing models for assessment based on the fuzzy logic centroid technique. A new Generalized Rectangular Model was developed. Some generalizations of the existing models are offered.
Synthetic Language Generation and Model Validation in BEAST2 ; Generating synthetic languages aids in the testing and validation of future computational linguistic models and methods. This thesis extends the BEAST2 phylogenetic framework to add linguistic sequence generation under multiple models. The new plugin is then used to test the effects of the phenomena of word borrowing on the inference process under two widely used phylolinguistic models.
Supersymmetric Extension of Preonic Models General Remarks ; We present some general remarks on supersymmetric extensions of fermionscalar and threefermion preonic models under the assumption that supersymmetry is realized at the preonic level. The motivation and the requirement of this assumption are briefly given. In general, supersymmetric extensions of fermionscalar and threefermion models predict 4 10 super partners for each Standard Model fermion.
Interpreting canonical tensor model in minisuperspace ; Canonical tensor model is a theory of dynamical fuzzy spaces in arbitrary spacetime dimensions. Examining its simplest case, we find a connection to a minisuperspace model of general relativity in arbitrary dimensions. This is a first step in interpreting variables in canonical tensor model based on the known language of general relativity.
Critical trees counterexamples in model checking of CSM systems using CBS algorithm ; The important feature of temporal model checking is the generation of counterexamples. In the report, the requirements for the generation of a counterexample, called a critical tree, in model checking of CSM systems are described. The output of the TempoRG model checker for QsCTL logic, a version of CTL, is presented. A contradiction between counterexample generation and state space reduction is commented on.
Generalized operatorscaling random ball model ; This article introduces the operatorscaling random ball model, generalizing the isotropic random ball models investigated recently in the literature to an anisotropic setup. The model is introduced as a generalized random field and results on weak convergence are established in the space of tempered distributions.
Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models ; Automated evaluation of open domain natural language generation NLG models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by finetuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the modellevel quality of NLG models with samplelevel comparison results and a skill rating system. While able to be trained in a fully selfsupervised fashion, our model can be further finetuned with a small amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and earlystopping. We evaluate our approach on both story generation and chitchat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model.
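A minimal sketch of the pairwise "learning to compare" setup: a comparator scores two candidate generations and a margin ranking loss pushes the preferred one higher. A random linear encoder stands in for the fine-tuned BERT model, and the feature vectors are synthetic.

```python
import torch
import torch.nn as nn

# Comparator that assigns a scalar quality score to a sentence representation.
class Comparator(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(300, dim), nn.ReLU())  # placeholder for BERT
        self.scorer = nn.Linear(dim, 1)

    def forward(self, sent_vec):
        return self.scorer(self.encoder(sent_vec)).squeeze(-1)

model = Comparator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MarginRankingLoss(margin=0.5)

better = torch.randn(32, 300)   # features of the sentence preferred in each pair
worse = torch.randn(32, 300)    # features of the other sentence in the pair
target = torch.ones(32)         # "first argument should score higher"
loss = loss_fn(model(better), model(worse), target)
loss.backward()
opt.step()
print(loss.item())
```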
Neural Network based Explicit Mixture Models and Expectationmaximization based Learning ; We propose two neural network based mixture models in this article. The proposed mixture models are explicit in nature. The explicit models have analytical forms with the advantages of computing likelihood and efficiency of generating samples. Computation of likelihood is an important aspect of our models. Expectationmaximization based algorithms are developed for learning parameters of the proposed models. We provide sufficient conditions to realize the expectationmaximization based learning. The main requirements are invertibility of neural networks that are used as generators and Jacobian computation of functional form of the neural networks. The requirements are practically realized using a flowbased neural network. In our first mixture model, we use multiple flowbased neural networks as generators. Naturally the model is complex. A single latent variable is used as the common input to all the neural networks. The second mixture model uses a single flowbased neural network as a generator to reduce complexity. The single generator has a latent variable input that follows a Gaussian mixture distribution. We demonstrate efficiency of proposed mixture models through extensive experiments for generating samples and maximum likelihood based classification.
Learning Generative Models with Visual Attention ; Attention has long been proposed by psychologists as important for effectively dealing with the enormous sensory stimulus available in the neocortex. Inspired by the visual attention models in computational neuroscience and the need for objectcentric data for generative models, we describe a generative learning framework using attentional mechanisms. Attentional mechanisms can propagate signals from a region of interest in a scene to an aligned canonical representation, where generative modeling takes place. By ignoring background clutter, generative models can concentrate their resources on the object of interest. Our model is a proper graphical model where the 2D Similarity transformation is a part of the topdown process. A ConvNet is employed to provide good initializations during posterior inference which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our model can robustly attend to face regions of novel test subjects. More importantly, our model can learn generative models of new faces from a novel dataset of large images where the face locations are not known.
Empirical Models for the Realistic Generation of Cooperative Awareness Messages in Vehicular Networks ; Most V2X VehicletoEverything applications rely on broadcasting awareness messages known as CAM Cooperative Awareness Messages in ETSI or BSM Basic Safety Message in SAE standards. A large number of studies have been devoted to guarantee their reliable transmission. However, to date, the studies are generally based on simplified data traffic models that generate awareness messages at periodic intervals or with a constant message size. These models do not accurately represent the real generation of CAM messages that follow specific mobilitybased rules. Using simplified and unrealistic traffic models can significantly impact the results and validity of the studies, and hence accurate models for the generation of awareness messages are necessary. This paper proposes the first set of models that can realistically generate CAM messages. The models have been created from real traces collected by two car manufacturers in urban, suburban and highway test drives. The models are based on mth order Markov sources, and model the size of CAMs and the time interval between CAMs. The models are openly provided to the community and can be easily integrated into any simulator.
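The sketch below shows the general shape of such a traffic model: a (here first-order) Markov source over inter-CAM generation intervals. The states and transition matrix are invented for illustration; the paper fits m-th order sources to real traces and also models message sizes.

```python
import numpy as np

# First-order Markov sampler for CAM inter-generation times.
states = [0.1, 0.2, 0.5, 1.0]          # candidate inter-CAM intervals in seconds (made up)
P = np.array([[0.7, 0.2, 0.1, 0.0],    # row i: transition probabilities from states[i]
              [0.2, 0.6, 0.2, 0.0],
              [0.1, 0.2, 0.6, 0.1],
              [0.0, 0.1, 0.3, 0.6]])

rng = np.random.default_rng(42)
idx = 0
intervals = []
for _ in range(20):
    idx = rng.choice(len(states), p=P[idx])   # next state depends on the current one
    intervals.append(states[idx])
print(intervals)
```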
Facilitating automated conversion of scientific knowledge into scientific simulation models with the Machine Assisted Generation, Calibration, and Comparison MAGCC Framework ; The Machine Assisted Generation, Comparison, and Calibration MAGCC framework provides machine assistance and automation of recurrent crucial steps and processes in the development, implementation, testing, and use of scientific simulation models. MAGCC bridges systems for knowledge extraction via natural language processing or from existing mathematical models and provides a comprehensive workflow encompassing the composition of scientific models and artificial intelligence AI assisted code generation. MAGCC accomplishes this through 1 the development of a comprehensively expressive formal knowledge representation knowledgebase, the Structured Scientific Knowledge Representation SSKR that encompasses all the types of information needed to make any simulation model, 2 the use of an artificially intelligent logic reasoning system, the Computational Modeling Assistant CMA, that takes information from the SSKR and generates, in a traceable fashion, model specifications across a range of simulation modeling methods, and 3 the use of the CMA to generate executable code for a simulation model from those model specifications. The MAGCC framework can be customized for any scientific domain, and future work will integrate newly developed codegenerating AI systems.
Discriminative Models Can Still Outperform Generative Models in Aspect Based Sentiment Analysis ; Aspectbased Sentiment Analysis ABSA helps to explain customers' opinions towards products and services. In the past, ABSA models were discriminative, but more recently generative models have been used to generate aspects and polarities directly from text. In contrast, discriminative models commonly first select aspects from the text, and then classify the aspect's polarity. Previous results showed that generative models outperform discriminative models on several English ABSA datasets. Here, we evaluate and contrast two stateoftheart discriminative and generative models in several settings crosslingual, crossdomain, and crosslingual and domain, to understand generalizability in settings other than English monolingual indomain. Our more thorough evaluation shows that, contrary to previous studies, discriminative models can still outperform generative models in almost all settings.
Which Kind Is Better in Opendomain Multiturn Dialog, Hierarchical or Nonhierarchical Models? An Empirical Study ; Currently, opendomain generative dialog systems have attracted considerable attention in academia and industry. Despite the success of singleturn dialog generation, multiturn dialog generation is still a big challenge. So far, there are two kinds of models for opendomain multiturn dialog generation: hierarchical and nonhierarchical models. Recently, some works have shown that the hierarchical models are better than nonhierarchical models under their experimental settings; meanwhile, some works also demonstrate the opposite conclusion. Due to the lack of adequate comparisons, it's not clear which kind of models are better in opendomain multiturn dialog generation. Thus, in this paper, we systematically measure nearly all representative hierarchical and nonhierarchical models over the same experimental settings to check which kind is better. Through extensive experiments, we have the following three important conclusions: 1 Nearly all hierarchical models are worse than nonhierarchical models in opendomain multiturn dialog generation, except for the HRAN model. Through further analysis, the excellent performance of HRAN mainly depends on its wordlevel attention mechanism; 2 The performance of other hierarchical models will also obtain a great improvement if integrating the wordlevel attention mechanism into these models. The modified hierarchical models even significantly outperform the nonhierarchical models; 3 The reason why the wordlevel attention mechanism is so powerful for hierarchical models is that it can leverage context information more effectively, especially the finegrained information. Besides, we have implemented all of the models and already released the codes.
Dynamical Systems Analysis of Various Dark Energy Models ; In this thesis, we used dynamical systems analysis to find the qualitative behaviour of some dark energy models. Specifically, dynamical systems analysis of quintessence scalar field models, chameleon scalar field models and holographic models of dark energy are discussed in this thesis.
Study of Thermodynamics in Generalized Holographic and Ricci Dark Energy Models ; We have considered the flat FRW model of the universe which is filled with the combination of dark matter and dark energy. Here we have considered two types of dark energy models i Generalized holographic and ii generalized Ricci dark energies. The general descriptions of the first law and generalized second law GSL of thermodynamics have been studied on the apparent horizon, particle horizon and event horizon of the universe. We have shown that the first law and GSL are always valid on the apparent horizon and the first law can not be satisfied on the particle and event horizons in Einstein's gravity. These results are always true for any type of dark energy model i.e., these results do not depend on the dark energy models in Einstein's gravity. But the GSL completely depends on the choices of dark energy models in Einstein's gravity. Here we have discussed the validity of GSL in Generalized holographic and generalized Ricci dark energy models. On the particle horizon GSL may be satisfied but on the event horizon the GSL can not be satisfied for both the dark energy models. Also we have considered the Generalized holographic dark energy and generalized Ricci dark energy as the original holographic dark energy, so in this situation we have calculated the expression of the radius of the horizon L. On this horizon, we have shown that the first law can not be satisfied. Finally, on the horizon of radius L, we have found that the GSL can not be satisfied for both the dark energy models.
MuseGAN Multitrack Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment ; Generating music has a few notable differences from generating images and videos. First, music is an art of time, necessitating a temporal model. Second, music is usually composed of multiple instruments/tracks with their own temporal dynamics, but collectively they unfold over time interdependently. Lastly, musical notes are often grouped into chords, arpeggios or melodies in polyphonic music, and thereby introducing a chronological ordering of notes is not naturally suitable. In this paper, we propose three models for symbolic multitrack music generation under the framework of generative adversarial networks GANs. The three models, which differ in the underlying assumptions and accordingly the network architectures, are referred to as the jamming model, the composer model and the hybrid model. We trained the proposed models on a dataset of over one hundred thousand bars of rock music and applied them to generate pianorolls of five tracks: bass, drums, guitar, piano and strings. A few intratrack and intertrack objective metrics are also proposed to evaluate the generative results, in addition to a subjective user study. We show that our models can generate coherent music of four bars right from scratch i.e. without human inputs. We also extend our models to humanAI cooperative music generation: given a specific track composed by a human, we can generate four additional tracks to accompany it. All code, the dataset and the rendered audio samples are available at https://salu133445.github.io/musegan .
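For readers unfamiliar with the data representation, the sketch below builds the kind of multi-track binary piano-roll tensor described above (tracks x bars x timesteps x pitches) and activates a chord on one track; the dimensions and the pitch offset are illustrative, not the exact configuration used in the paper.

```python
import numpy as np

# Multi-track piano-roll tensor: (tracks, bars, timesteps per bar, pitches),
# with binary note activations. Shapes here are illustrative placeholders.
n_tracks, n_bars, n_steps, n_pitches = 5, 4, 96, 84
pianoroll = np.zeros((n_tracks, n_bars, n_steps, n_pitches), dtype=np.uint8)

# Put a C major triad on the "piano" track (track index 3) for the first quarter of bar 0,
# assuming pitch index = MIDI note - 24 (an assumed convention for an 84-pitch range).
C, E, G = 60 - 24, 64 - 24, 67 - 24
pianoroll[3, 0, 0:24, [C, E, G]] = 1
print(pianoroll.shape, pianoroll.sum())
```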
Generalization in Automated Process Discovery A Framework based on Event Log Patterns ; The importance of quality measures in process mining has increased. One of the key quality aspects, generalization, is concerned with measuring the degree of overfitting of a process model w.r.t. an event log, since the recorded behavior is just an example of the true behavior of the underlying business process. Existing generalization measures exhibit several shortcomings that severely hinder their applicability in practice. For example, they assume the event log fully fits the discovered process model, and cannot deal with large reallife event logs and complex process models. More significantly, current measures neglect generalizations for clear patterns that demand a certain construct in the model. For example, a repeating sequence in an event log should be generalized with a loop structure in the model. We address these shortcomings by proposing a framework of measures that generalize a set of patterns discovered from an event log with representative traces and check the corresponding controlflow structures in the process model via their trace alignment. We instantiate the framework with a generalization measure that uses tandem repeats to identify repetitive patterns that are compared to the loop structures and a concurrency oracle to identify concurrent patterns that are compared to the parallel structures of the process model. In an extensive qualitative and quantitative evaluation using 74 logmodel pairs against two baseline generalization measures, we show that the proposed generalization measure consistently ranks process models that fulfil the observed patterns with generalizing controlflow structures higher than those which do not, while the baseline measures disregard those patterns. Further, we show that our measure can be efficiently computed for datasets two orders of magnitude larger than the largest dataset the baseline generalization measures can handle.
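A minimal sketch of the tandem-repeat idea used by the proposed measure: find subsequences of a trace that are immediately repeated, which a generalizing process model should cover with a loop. This brute-force detector is for illustration only and is not the paper's algorithm.

```python
def tandem_repeats(trace, max_len=None):
    """Find tandem repeats: subsequences that occur at least twice back to back.
    Brute force; returns (start index, repeated unit, repetition count)."""
    n = len(trace)
    max_len = max_len or n // 2
    found = []
    for start in range(n):
        for length in range(1, max_len + 1):
            unit = trace[start:start + length]
            reps = 1
            while trace[start + reps * length:start + (reps + 1) * length] == unit:
                reps += 1
            if reps >= 2:
                found.append((start, tuple(unit), reps))
    return found

trace = ["a", "b", "c", "b", "c", "b", "c", "d"]
print(tandem_repeats(trace))   # includes (1, ('b', 'c'), 3): "bc" repeated three times
```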
Incorporating Domain Knowledge through Task Augmentation for FrontEnd JavaScript Code Generation ; Code generation aims to generate a code snippet automatically from natural language descriptions. Generally, the mainstream code generation methods rely on a large amount of paired training data, including both the natural language description and the code. However, in some domainspecific scenarios, building such a large paired corpus for code generation is difficult because there is no directly available pairing data, and a lot of effort is required to manually write the code descriptions to construct a highquality training dataset. Due to the limited training data, the generation model cannot be well trained and is likely to be overfitting, making the model's performance unsatisfactory for realworld use. To this end, in this paper, we propose a task augmentation method that incorporates domain knowledge into code generation models through auxiliary tasks and a SubtokenTranX model by extending the original TranX model to support subtokenlevel code generation. To verify our proposed approach, we collect a realworld code generation dataset and conduct experiments on it. Our experimental results demonstrate that the subtokenlevel TranX model outperforms the original TranX model and the Transformer model on our dataset, and the exact match accuracy of SubtokenTranX improves significantly by 12.75 with the help of our task augmentation method. The model performance on several code categories has satisfied the requirements for application in industrial systems. Our proposed approach has been adopted by Alibaba's BizCook platform. To the best of our knowledge, this is the first domain code generation system adopted in industrial development environments.
Prompt Generate Train PGT Fewshot Domain Adaption of Retrieval Augmented Generation Models for Open Book QuestionAnswering ; We propose a framework Prompt, Generate, Train PGT to efficiently develop a generative questionanswering model for openbook questionanswering over a proprietary collection of text documents. The framework adapts a retriever augmented generation RAG model to the target domain using supervised finetuning and reinforcement learning with synthetic feedback in a fewshot setting. This, we hypothesize, will yield an aligned, uncertainty calibrated model that is competitive with GPT4 based incontext retrieval augmented generation in generating relevant answers at lower serving costs. The framework's synthetic generation pipeline will generate synthetic training data comprising passage, question, answer tuples using an opensource LLM and a novel consistency filtering scheme. The pipeline will be designed to generate both abstractive and extractive questions that span the entire corpus. The framework proposes to finetune a smaller RAG model comprising a dense retriever ColBERTv2 and a smaller sized LLM on the synthetic dataset. In parallel, the framework will train a Reward model to score domain grounded answers higher than hallucinated answers using an a priori relevance ordering of synthetically assembled samples. In the next phase, the framework will align the RAG model with the target domain using reinforcement learning Proximal Policy Optimization. This step may improve the RAG model's ability to generate grounded answers and ignore out of domain questions. In the final phase, the framework will calibrate the model's uncertainty for extractive questionanswers.
A Bayesian Nonparametric Approach to Generative Models Integrating Variational Autoencoder and Generative Adversarial Networks using Wasserstein and Maximum Mean Discrepancy ; Generative models have emerged as a promising technique for producing highquality images that are indistinguishable from real images. Generative adversarial networks GANs and variational autoencoders VAEs are two of the most prominent and widely studied generative models. GANs have demonstrated excellent performance in generating sharp realistic images and VAEs have shown strong abilities to generate diverse images. However, GANs suffer from ignoring a large portion of the possible output space which does not represent the full diversity of the target distribution, and VAEs tend to produce blurry images. To fully capitalize on the strengths of both models while mitigating their weaknesses, we employ a Bayesian nonparametric BNP approach to merge GANs and VAEs. Our procedure incorporates both Wasserstein and maximum mean discrepancy MMD measures in the loss function to enable effective learning of the latent space and generate diverse and highquality samples. By fusing the discriminative power of GANs with the reconstruction capabilities of VAEs, our novel model achieves superior performance in various generative tasks, such as anomaly detection and data augmentation. Furthermore, we enhance the model's capability by employing an extra generator in the code space, which enables us to explore areas of the code space that the VAE might have overlooked. With a BNP perspective, we can model the data distribution using an infinitedimensional space, which provides greater flexibility in the model and reduces the risk of overfitting. By utilizing this framework, we can enhance the performance of both GANs and VAEs to create a more robust generative model suitable for various applications.
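The maximum mean discrepancy term mentioned above can be written in a few lines; the sketch below computes a (biased) RBF-kernel MMD estimate between two batches. The Wasserstein term and the Bayesian nonparametric machinery are not shown, and the bandwidth is arbitrary.

```python
import torch

# Squared maximum mean discrepancy with an RBF kernel (biased V-statistic estimator).
def mmd_rbf(x, y, sigma=1.0):
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

real = torch.randn(128, 16)            # e.g. features of real images
fake = torch.randn(128, 16) + 0.5      # e.g. features of generator outputs
print(mmd_rbf(real, fake).item())
```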
Minisuperspace Approach of Generalized Gravitational Models ; Motivated by the dark energy issue, the minisuperspace approach for general relativistic cosmological theories is outlined.
Efficient Heuristic Generation for Robot Path Planning with Recurrent Generative Model ; Robot path planning is difficult to solve due to the contradiction between optimality of results and complexity of algorithms, even in 2D environments. To find an optimal path, the algorithm needs to search all the state space, which costs a lot of computation resource. To address this issue, we present a novel recurrent generative model RGM which generates efficient heuristic to reduce the search efforts of path planning algorithm. This RGM model adopts the framework of general generative adversarial networks GAN, which consists of a novel generator that can generate heuristic by refining the outputs recurrently and two discriminators that check the connectivity and safety properties of heuristic. We test the proposed RGM module in various 2D environments to demonstrate its effectiveness and efficiency. The results show that the RGM successfully generates appropriate heuristic in both seen and new unseen maps with a high accuracy, demonstrating the good generalization ability of this model. We also compare the rapidlyexploring random tree star RRT with generated heuristic and the conventional RRT in four different maps, showing that the generated heuristic can guide the algorithm to find both initial and optimal solution in a faster and more efficient way.
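One simple way to use a generated heuristic map, loosely in the spirit of the abstract above, is to bias the sampling distribution of a sampling-based planner toward high-heuristic cells; in the sketch below the map is random noise standing in for the RGM output.

```python
import numpy as np

# Bias planner sampling toward cells with high heuristic values.
rng = np.random.default_rng(0)
heuristic = rng.random((64, 64))             # stand-in for the generated heuristic map
probs = heuristic / heuristic.sum()          # normalize to a sampling distribution

flat_idx = rng.choice(heuristic.size, size=100, p=probs.ravel())
samples = np.column_stack(np.unravel_index(flat_idx, heuristic.shape))
print(samples[:5])                           # (row, col) cells to feed a sampling-based planner
```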
Deep Generative Models for Galaxy Image Simulations ; Image simulations are essential tools for preparing and validating the analysis of current and future widefield optical surveys. However, the galaxy models used as the basis for these simulations are typically limited to simple parametric light profiles, or use a fairly limited amount of available spacebased data. In this work, we propose a methodology based on Deep Generative Models to create complex models of galaxy morphologies that may meet the image simulation needs of upcoming surveys. We address the technical challenges associated with learning this morphology model from noisy and PSFconvolved images by building a hybrid Deep Learningphysical Bayesian hierarchical model for observed images, explicitly accounting for the Point Spread Function and noise properties. The generative model is further made conditional on physical galaxy parameters, to allow for sampling new light profiles from specific galaxy populations. We demonstrate our ability to train and sample from such a model on galaxy postage stamps from the HSTACS COSMOS survey, and validate the quality of the model using a range of second and higherorder morphology statistics. Using this set of statistics, we demonstrate significantly more realistic morphologies using these deep generative models compared to conventional parametric models. To help make these generative models practical tools for the community, we introduce GalSimHub, a communitydriven repository of generative models, and a framework for incorporating generative models within the GalSim image simulation software.
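A toy version of the forward model such a hybrid approach has to account for: a latent light profile convolved with the PSF plus pixel noise. The Gaussian profile, PSF width and noise level below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

# Observed image = latent galaxy light profile, convolved with the PSF, plus pixel noise.
size = 64
yy, xx = np.mgrid[:size, :size] - size // 2
galaxy = np.exp(-(xx ** 2 + yy ** 2) / (2 * 4.0 ** 2))       # toy circular light profile
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 1.5 ** 2))          # toy Gaussian PSF
psf /= psf.sum()

observed = fftconvolve(galaxy, psf, mode="same")
observed += 0.01 * np.random.randn(size, size)               # pixel noise
print(observed.shape)
```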
FRW Universe Models in Conformally Flat Spacetime Coordinates. I General Formalism ; The 3space of a universe model is defined at a certain simultaneity. Hence space depends on which time is used. We find a general formula generating all known and also some new transformations to conformally flat spacetime coordinates. A general formula for the recession velocity is deduced.