On Finitely Generated Models of Theories with at Most Countably Many Nonisomorphic Finitely Generated Models ; We study finitely generated models of countable theories having at most countably many nonisomorphic finitely generated models. We introduce a notion of rank of finitely generated models and we prove, when T has at most countably many nonisomorphic finitely generated models, that every finitely generated model has an ordinal rank. This rank is used to give a property of finitely generated models analogous to the Hopf property of groups, and also to give a necessary and sufficient condition for a finitely generated model to be a prime model of its complete theory. We investigate some properties of limit groups of equationally noetherian groups with respect to their ranks.
Generalized modeling of ecological population dynamics ; Over the past years several authors have used the approach of generalized modeling to study the dynamics of food chains and food webs. Generalized models come close to the efficiency of random matrix models, while being as directly interpretable as conventional differential-equation-based models. Here we present a pedagogical introduction to the approach of generalized modeling. This introduction places more emphasis on the underlying concepts of generalized modeling than previous publications. Moreover, we propose a shortcut that can significantly accelerate the formulation of generalized models and introduce an iterative procedure that can be used to refine existing generalized models by integrating new biological insights.
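The core move in generalized modeling is worth making concrete: rather than committing to functional forms, the Jacobian at a normalized steady state is written directly in terms of scale and elasticity parameters, and stability is read off its eigenvalues. Below is a minimal sketch of that idea for a two-species food chain; the parameter names and Jacobian entries are illustrative assumptions, not taken from the paper.

```python
# Sketch of generalized modeling (hypothetical parameterization): the
# Jacobian of a normalized prey/predator system is written in terms of a
# turnover-rate ratio `alpha` and elasticities (log-log derivatives) of
# growth, predation, and mortality, with no functional forms fixed.
import numpy as np

def jacobian(alpha, g_x, l_x, l_y, m_y):
    """alpha: predator/prey turnover ratio; g_x: elasticity of prey growth
    in prey density; l_x, l_y: elasticities of predation in prey/predator
    density; m_y: elasticity of predator mortality in predator density."""
    # Entries follow from differentiating the normalized rate equations,
    # assuming predation is the prey's only loss term.
    return np.array([
        [g_x - l_x,          -l_y],
        [alpha * l_x, alpha * (l_y - m_y)],
    ])

# A steady state is locally stable iff all eigenvalues have negative real part.
J = jacobian(alpha=0.1, g_x=0.5, l_x=1.0, l_y=1.0, m_y=2.0)
eig = np.linalg.eigvals(J)
print(eig, "stable:", bool(np.all(eig.real < 0)))
```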
Generating Subsurface Earth Models using Discrete Representation Learning and Deep Autoregressive Network ; Subsurface earth models, referred to as geomodels, are crucial for characterizing complex subsurface systems. Multiple-point statistics are commonly used to generate geomodels. In this paper, a deep-learning-based generative method is developed as an alternative to the traditional geomodel generation procedure. The generative method comprises two deep-learning models, namely the hierarchical vector-quantized variational autoencoder (VQ-VAE-2) and the PixelSNAIL autoregressive model. Based on the principle of neural discrete representation learning, the VQ-VAE-2 learns to massively compress the geomodels to extract the low-dimensional, discrete latent representation corresponding to each geomodel. Following that, PixelSNAIL uses the deep autoregressive network to learn the prior distribution of the latent codes. For the purpose of geomodel generation, PixelSNAIL samples from the newly learned prior distribution of latent codes, and then the decoder of the VQ-VAE-2 converts the newly sampled latent code to a newly constructed geomodel. PixelSNAIL can be used for unconditional or conditional geomodel generation. In unconditional generation, the generative workflow generates an ensemble of geomodels without any constraint. On the other hand, in conditional geomodel generation, the generative workflow generates an ensemble of geomodels similar to a user-defined source image, which ultimately facilitates the control and manipulation of the generated geomodels. To better construct the fluvial channels in the geomodels, the perceptual loss is implemented in the VQ-VAE-2 model instead of the traditional mean squared error loss. At a specific compression ratio, the quality of multi-attribute geomodel generation is better than that of single-attribute geomodel generation.
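To make the two-stage workflow concrete, here is a toy sketch in the same spirit: nearest-neighbor codebook quantization stands in for the VQ-VAE-2 encoder/decoder, and a count-based first-order autoregressive prior stands in for PixelSNAIL. All names, shapes, and the simplistic prior are illustrative assumptions, not the paper's implementation.

```python
# Toy two-stage pipeline: (1) quantize latents to discrete codes via a
# codebook, (2) fit an autoregressive prior over code sequences, then
# (3) sample new codes and decode them by codebook lookup.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))            # 8 codes, 4-dim embeddings

def encode(z_continuous):
    """Stage 1: map each latent vector to its nearest codebook index."""
    d = ((z_continuous[:, None, :] - codebook[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

def fit_prior(code_seqs, k=8):
    """Stage 2: first-order autoregressive prior p(c_t | c_{t-1}) by counting."""
    counts = np.ones((k, k))                  # Laplace smoothing
    for seq in code_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sample(prior, length, start=0):
    """Stage 3: sample a new code sequence, then decode via the codebook."""
    seq = [start]
    for _ in range(length - 1):
        seq.append(rng.choice(len(prior), p=prior[seq[-1]]))
    return codebook[np.array(seq)]            # "decoder" = codebook lookup

train_codes = [encode(rng.normal(size=(16, 4))) for _ in range(100)]
new_latents = sample(fit_prior(train_codes), length=16)
print(new_latents.shape)
```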
On the Generalization of Diffusion Model ; Diffusion probabilistic generative models are widely used to generate high-quality data. Though they can synthesize data that does not exist in the training set, the rationale behind such generalization is still unexplored. In this paper, we formally define the generalization of the generative model, measured by the mutual information between the generated data and the training set. The definition originates from the intuition that a model which generates data with less correlation to the training set exhibits better generalization ability. Meanwhile, we show that for the empirically optimal diffusion model, the data generated by a deterministic sampler are all highly related to the training set, and thus exhibit poor generalization. This result contradicts the observation that trained diffusion models, which approximate the empirical optimum, can extrapolate and generate unseen data. To understand this contradiction, we empirically verify the difference between the sufficiently trained diffusion model and the empirical optimum. We find that, although obtained through sufficient training, a slight difference still exists between them, and this difference is critical to making the diffusion model generalizable. Moreover, we propose another training objective whose empirically optimal solution has no potential generalization problem. We empirically show that the proposed training objective returns a model similar to the original one, which further verifies the generalization ability of the trained diffusion model.
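For reference, one standard way to write the quantity the abstract describes (a hedged reconstruction; the paper's exact estimator may differ): with training set $\mathcal{D}$ and generated sample $X_g$,

$$ I(X_g; \mathcal{D}) \;=\; H(X_g) - H(X_g \mid \mathcal{D}), $$

so lower mutual information between the generated data and the training set corresponds to better generalization in the sense defined above.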
DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models ; Text-to-image generation models that generate images based on prompt descriptions have attracted an increasing amount of attention during the past few months. Despite their encouraging performance, these models raise concerns about the misuse of their generated fake images. To tackle this problem, we pioneer a systematic study on the detection and attribution of fake images generated by text-to-image generation models. Concretely, we first build a machine learning classifier to detect the fake images generated by various text-to-image generation models. We then attribute these fake images to their source models, such that model owners can be held responsible for their models' misuse. We further investigate how prompts that generate fake images affect detection and attribution. We conduct extensive experiments on four popular text-to-image generation models, including DALL·E 2, Stable Diffusion, GLIDE, and Latent Diffusion, and two benchmark prompt-image datasets. Empirical results show that (1) fake images generated by various models can be distinguished from real ones, as there exists a common artifact shared by fake images from different models; (2) fake images can be effectively attributed to their source models, as different models leave unique fingerprints in their generated images; (3) prompts with the "person" topic or a length between 25 and 75 enable models to generate fake images with higher authenticity. All findings contribute to the community's insight into the threats caused by text-to-image generation models. We appeal to the community's consideration of counterpart solutions, like ours, against the rapidly evolving fake image generation.
Discovering Graph Generation Algorithms ; We provide a novel approach to construct generative models for graphs. Instead of using the traditional probabilistic models or deep generative models, we propose to instead find an algorithm that generates the data. We achieve this using evolutionary search and a powerful fitness function, implemented by a randomly initialized graph neural network. This brings certain advantages over current deep generative models, for instance, a higher potential for out-of-training-distribution generalization and direct interpretability, as the final graph generative process is expressed as a Python function. We show that this approach can be competitive with deep generative models and under some circumstances can even find the true graph generative process, and as such perfectly generalize.
GenPhys: From Physical Processes to Generative Models ; Since diffusion models (DM) and the more recent Poisson flow generative models (PFGM) are inspired by physical processes, it is reasonable to ask: can physical processes offer additional new generative models? We show that the answer is yes. We introduce a general family, Generative Models from Physical Processes (GenPhys), where we translate partial differential equations (PDEs) describing physical processes to generative models. We show that generative models can be constructed from s-generative PDEs (s for smooth). GenPhys subsumes the two existing generative models (DM and PFGM) and even gives rise to new families of generative models, e.g., Yukawa Generative Models inspired by weak interactions. On the other hand, some physical processes by default do not belong to the GenPhys family, e.g., the wave equation and the Schrödinger equation, but they could be made into the GenPhys family with some modifications. Our goal with GenPhys is to explore and expand the design space of generative models.
A generative model for molecule generation based on chemical reaction trees ; Deep generative models have been shown to be powerful at generating novel molecules with desired chemical properties via their representations such as strings, trees, or graphs. However, these models are limited in recommending synthetic routes for the generated molecules in practice. We propose a generative model to generate molecules via multi-step chemical reaction trees. Specifically, our model first proposes a chemical reaction tree with predicted reaction templates and commercially available molecules (starting molecules), and then performs forward synthetic steps to obtain product molecules. Experiments show that our model can generate chemical reactions whose product molecules have desired chemical properties. Also, the complete synthetic routes for these product molecules are provided.
Diffusion Models for Non-autoregressive Text Generation: A Survey ; Non-autoregressive (NAR) text generation has attracted much attention in the field of natural language processing; it greatly reduces inference latency but has to sacrifice generation accuracy. Recently, diffusion models, a class of latent variable generative models, have been introduced into NAR text generation, showing improved text generation quality. In this survey, we review the recent progress in diffusion models for NAR text generation. As background, we first present the general definition of diffusion models and text diffusion models, and then discuss their merits for NAR generation. As the core content, we further introduce the two mainstream diffusion models in existing work on text diffusion and review the key designs of the diffusion process. Moreover, we discuss the utilization of pre-trained language models (PLMs) for text diffusion models and introduce optimization techniques for text data. Finally, we discuss several promising directions and conclude this paper. Our survey aims to provide researchers with a systematic reference for related research on text diffusion models for NAR generation. We present our collection of text diffusion models at https://github.com/RUCAIBox/Awesome-Text-Diffusion-Models.
Content-Based Search for Deep Generative Models ; The growing proliferation of pre-trained generative models has made it infeasible for a user to be fully cognizant of every model in existence. To address this need, we introduce the task of content-based model search: given a query and a large set of generative models, find the models that best match the query. Because each generative model produces a distribution of images, we formulate the search problem as an optimization to maximize the probability of generating a query match given a model. We develop approximations to make this problem tractable when the query is an image, a sketch, a text description, another generative model, or a combination of the above. We benchmark our method in both accuracy and speed over a set of generative models. We demonstrate that our model search retrieves suitable models for image editing and reconstruction, few-shot transfer learning, and latent space interpolation. Finally, we deploy our search algorithm to our online generative model-sharing platform at https://modelverse.cs.cmu.edu.
Iterative Approximate Byzantine Consensus under a Generalized Fault Model ; In this work, we consider a generalized fault model that can be used to represent a wide range of failure scenarios, including correlated failures and non-uniform node reliabilities. This fault model is general in the sense that fault models studied in prior related work, such as the f-total and f-local models, are special cases of the generalized fault model. Under the generalized fault model, we explore iterative approximate Byzantine consensus (IABC) algorithms in arbitrary directed networks. We prove a necessary and sufficient condition for the existence of IABC algorithms. The use of the generalized fault model helps to gain a better understanding of IABC algorithms.
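As a concrete special case, here is one round of a classical IABC update under the f-total model (a minimal sketch; the generalized fault model in the paper replaces the fixed bound f with arbitrary fault sets):

```python
# One IABC round under the f-total fault model: each node drops the f
# largest and f smallest received values before averaging, so up to f
# Byzantine neighbors cannot pull the state outside the range spanned by
# the correct nodes' values.
def iabc_step(own_value, neighbor_values, f):
    trimmed = sorted(neighbor_values)[f:len(neighbor_values) - f]
    vals = trimmed + [own_value]
    return sum(vals) / len(vals)

# A correct node with 5 neighbors, at most f=1 of them faulty; the two
# extreme (possibly malicious) values 100.0 and -50.0 are discarded.
print(iabc_step(0.5, [0.4, 0.6, 0.55, 100.0, -50.0], f=1))
```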
Quantitative Computation Tree Logic Model Checking Based on Generalized Possibility Measures ; We study generalized possibilistic computation tree logic model checking in this paper, which is an extension of the possibilistic computation tree logic model checking introduced by Y. Li, Y. Li, and Z. Ma (2014). The system is modeled by a generalized possibilistic Kripke structure (GPKS, for short), and the property to verify is specified by a generalized possibilistic computation tree logic (GPoCTL, for short) formula. Based on generalized possibility measures and generalized necessity measures, the method of generalized possibilistic computation tree logic model checking is discussed, and the corresponding algorithm and its complexity are shown in detail. Furthermore, a comparison between PoCTL (introduced in 2013) and GPoCTL is given. Finally, a thermostat example is given to illustrate the GPoCTL model-checking method.
Fingerprints of Generative Models in the Frequency Domain ; Existing works have verified that CNN-based generative models leave unique fingerprints on generated images, but there is little analysis of how these fingerprints are formed in generative models. Interpreting network components in the frequency domain, we derive the sources of the frequency-distribution and grid-like pattern discrepancies exhibited on the spectrum. These insights are leveraged to develop low-cost synthetic models, which generate images emulating the frequency patterns observed in real generative models. The resulting fingerprint extractor, pre-trained on synthetic data, shows superior transferability in verifying, identifying, and analyzing the relationships of real CNN-based generative models such as GANs, VAEs, Flows, and diffusion models.
Towards quantitative methods to assess network generative models ; Assessing generative models is not an easy task. Generative models should synthesize graphs which are not replicates of real networks but show topological features similar to real graphs. We introduce an approach for assessing graph generative models using graph classifiers. The inability of an established graph classifier to distinguish real and synthesized graphs can be considered a performance measure for graph generators.
Deep Evolutionary Learning for Molecular Design ; In this paper, we propose a deep evolutionary learning (DEL) process that integrates a fragment-based deep generative model and multi-objective evolutionary computation for molecular design. Our approach enables (1) evolutionary operations in the latent space of the generative model, rather than the structural space, to generate novel promising molecular structures for the next evolutionary generation, and (2) generative model fine-tuning using newly generated high-quality samples. Thus, DEL implements a data-model co-evolution concept which improves both the sample population and generative model learning. Experiments on two public datasets indicate that the sample population obtained by DEL exhibits improved property distributions and dominates samples generated by multi-objective Bayesian optimization algorithms.
CanvasGAN: A simple baseline for text-to-image generation by incrementally patching a canvas ; We propose a new recurrent generative model for generating images from text captions while attending to specific parts of the text captions. Our model creates images by incrementally adding patches to a canvas while attending to words from the text caption at each timestep. Finally, the canvas is passed through an upscaling network to generate images. We also introduce a new method for generating visual-semantic sentence embeddings based on self-attention over text. We compare our model's generated images with those generated by Reed et al.'s model and show that our model is a stronger baseline for text-to-image generation tasks.
Multimodal Controller for Generative Models ; Class-conditional generative models are crucial tools for data generation from user-specified class labels. Existing approaches to class-conditional generative models require non-trivial modifications of backbone generative architectures to model conditional information fed into the model. This paper introduces a plug-and-play module named 'multimodal controller' to generate multimodal data without introducing additional learning parameters. In the absence of the controllers, our model reduces to non-conditional generative models. We test the efficacy of multimodal controllers on CIFAR-10, COIL-100, and Omniglot benchmark datasets. We demonstrate that multimodal-controlled generative models, including VAE, PixelCNN, Glow, and GAN, can generate class-conditional images of significantly better quality when compared with conditional generative models. Moreover, we show that multimodal-controlled models can also create novel modalities of images.
Generative Diffusion Models on Graphs: Methods and Applications ; Diffusion models, as a novel generative paradigm, have achieved remarkable success in various image generation tasks such as image inpainting, image-to-text translation, and video generation. Graph generation is a crucial computational task on graphs with numerous real-world applications; it aims to learn the distribution of given graphs and then generate new graphs. Given the great success of diffusion models in image generation, increasing efforts have been made to leverage these techniques to advance graph generation in recent years. In this paper, we first provide a comprehensive overview of generative diffusion models on graphs. In particular, we review representative algorithms for three variants of graph diffusion models, i.e., Score Matching with Langevin Dynamics (SMLD), Denoising Diffusion Probabilistic Models (DDPM), and Score-based Generative Models (SGM). Then, we summarize the major applications of generative diffusion models on graphs, with a specific focus on molecule and protein modeling. Finally, we discuss promising directions in generative diffusion models on graph-structured data. For this survey, we also created a GitHub project website collecting the supporting resources for generative diffusion models on graphs, at https://github.com/ChengyiLIUcs/Generative-Diffusion-Models-on-Graphs
A Note on an R^2 Measure for Fixed Effects in the Generalized Linear Mixed Model ; Using the LRT statistic, a model R^2 is proposed for the generalized linear mixed model for assessing the association between the correlated outcomes and fixed effects. The R^2 compares the full model to a null model with all fixed effects deleted.
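For orientation, one widely used LRT-based R^2 of this general shape (a plausible reading of the abstract, not necessarily the note's exact statistic) is

$$ R^2_{LR} = 1 - \exp\!\left(-\frac{\Lambda}{n}\right), \qquad \Lambda = 2\left(\ell_{\text{full}} - \ell_{\text{null}}\right), $$

where $\ell_{\text{full}}$ and $\ell_{\text{null}}$ are the maximized log-likelihoods of the full model and of the null model with all fixed effects deleted, and $n$ is the number of observations.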
Generative Neurosymbolic Machines ; Reconciling symbolic and distributed representations is a crucial challenge that can potentially resolve the limitations of current deep learning. Remarkable advances in this direction have been achieved recently via generative object-centric representation models. While such models learn a recognition model that infers object-centric symbolic representations, like bounding boxes, from raw images in an unsupervised way, no such model can provide another important ability of a generative model, i.e., generating (sampling) according to the structure of the learned world density. In this paper, we propose Generative Neurosymbolic Machines, a generative model that combines the benefits of distributed and symbolic representations to support both structured representations of symbolic components and density-based generation. These two crucial properties are achieved by a two-layer latent hierarchy, with a global distributed latent for flexible density modeling and a structured symbolic latent map. To increase the model flexibility in this hierarchical structure, we also propose the StructDRAW prior. In experiments, we show that the proposed model significantly outperforms previous structured representation models as well as state-of-the-art non-structured generative models in terms of both structure accuracy and image generation quality. Our code, datasets, and trained models are available at https://github.com/JindongJiang/GNM
Twist Decoding: Diverse Generators Guide Each Other ; Many language generation models are now available for a wide range of generation tasks, including machine translation and summarization. Combining such diverse models may lead to further progress, but ensembling generation models is challenging during inference: conventional ensembling methods (e.g., shallow fusion) require that the models share vocabulary/tokenization schemes. We introduce Twist decoding, a simple and general text generation algorithm that benefits from diverse models at inference time. Our method does not assume that the vocabulary, tokenization, or even generation order is shared. Our extensive evaluations on machine translation and scientific paper summarization demonstrate that Twist decoding substantially outperforms each model decoded in isolation over various scenarios, including cases where domain-specific and general-purpose models are both available. Twist decoding also consistently outperforms the popular reranking heuristic where output candidates from one model are rescored by another. We hope that our work will encourage researchers and practitioners to examine generation models collectively, not just independently, and to seek out models with complementary strengths to the currently available models. Our code is available at https://github.com/jungokasai/twist_decoding.
Boosted Generative Models ; We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models.
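A toy sketch of the sequential-correction idea on 1-D data follows: each round fits a new density to examples reweighted by how poorly the current ensemble models them, and the ensemble density is the (unnormalized) product of the rounds. The reweighting rule and equal weights here are illustrative simplifications, not the paper's algorithm.

```python
# Multiplicative generative boosting, toy version: later components are
# fit with higher weight on points the current ensemble assigns low
# density, so each round "corrects earlier mistakes".
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3, 1, 700), rng.normal(3, 1, 300)])

components, alphas = [], []
w = np.ones_like(data) / len(data)
for t in range(3):
    mu = np.average(data, weights=w)              # weighted Gaussian fit
    sd = np.sqrt(np.average((data - mu) ** 2, weights=w))
    components.append((mu, sd)); alphas.append(1.0)
    # ensemble log-density = sum of component log-densities (a product)
    log_p = sum(a * norm.logpdf(data, m, s)
                for a, (m, s) in zip(alphas, components))
    w = np.exp(-log_p); w /= w.sum()              # upweight poorly-fit points

print(components)  # later components shift toward the under-modeled mode
```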
Image Restoration: A General Wavelet Frame Based Model and Its Asymptotic Analysis ; Image restoration is one of the most important areas in imaging science. Mathematical tools have been widely used in image restoration, where the wavelet frame based approach is one of the successful examples. In this paper, we introduce a generic wavelet frame based image restoration model, called the general model, which includes most of the existing wavelet frame based models as special cases. Moreover, the general model also includes examples that are new to the literature. Motivated by our earlier studies [13], we provide an asymptotic analysis of the general model as image resolution goes to infinity, which establishes a connection between the general model in the discrete setting and a new variational model in the continuum setting. The variational model also includes some of the existing variational models as special cases, such as the total generalized variation model proposed by [4]. In the end, we introduce an algorithm solving the general model and present one numerical simulation as an example.
Latent Variable Dialogue Models and their Diversity ; We present a dialogue generation model that directly captures the variability in possible responses to a given input, which reduces the 'boring output' issue of deterministic dialogue models. Experiments show that our model generates more diverse outputs than baseline models, and also generates more consistently acceptable output than sampling from a deterministic encoder-decoder model.
Note on the equivalence of hierarchical variational models and auxiliary deep generative models ; This note compares two recently published machine learning methods for constructing flexible, but tractable, families of variational hidden-variable posteriors. The first method, called hierarchical variational models, enriches the inference model with an extra variable, while the other, called auxiliary deep generative models, enriches the generative model instead. We conclude that the two methods are mathematically equivalent.
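The gist of the equivalence can be written in two lines (a hedged paraphrase with symbols chosen here, not the note's notation). HVM enriches the approximate posterior with an auxiliary variable $\lambda$, while ADGM enriches the generative model with an auxiliary variable $a$:

$$ \text{HVM:}\quad q(z \mid x) = \int q(z \mid \lambda, x)\, q(\lambda \mid x)\, d\lambda, \qquad \text{ADGM:}\quad p(x, z) = \int p(a \mid x, z)\, p(x \mid z)\, p(z)\, da. $$

Identifying $\lambda$ with $a$, and the HVM's auxiliary reverse model $r(\lambda \mid z, x)$ with the ADGM's $p(a \mid x, z)$, makes the two evidence lower bounds coincide term by term.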
ChatGPT is not all you need. A State of the Art Review of large Generative AI models ; During the last two years a plethora of large generative models such as ChatGPT or Stable Diffusion have been published. Concretely, these models are able to perform tasks such as serving as a general question-answering system or automatically creating artistic images, and they are revolutionizing several sectors. Consequently, the implications these generative models have for industry and society are enormous, as several job positions may be transformed. For example, Generative AI is capable of effectively and creatively transforming text to images, like the DALL·E 2 model; text to 3D images, like the Dreamfusion model; images to text, like the Flamingo model; text to video, like the Phenaki model; text to audio, like the AudioLM model; text to other text, like ChatGPT; text to code, like the Codex model; text to scientific text, like the Galactica model; and can even create algorithms, like AlphaTensor. This work attempts to describe in a concise way the main models and sectors that are affected by generative AI and to provide a taxonomy of the main generative models published recently.
Training Discriminative Models to Evaluate Generative Ones ; Generative models are known to be difficult to assess. Recent works, especially on generative adversarial networks (GANs), produce good visual samples of varied categories of images. However, the validation of their quality is still difficult to define, and there is no existing agreement on the best evaluation process. This paper aims at making a step toward an objective evaluation process for generative models. It presents a new method to assess a trained generative model by evaluating the test accuracy of a classifier trained with generated data. The test set is composed of real images; the classifier accuracy is therefore used as a proxy to evaluate whether the generative model fits the true data distribution. By comparing results across different generated datasets, we are able to classify and compare generative models. The motivation of this approach is also to evaluate whether generative models can help discriminative neural networks learn, i.e., to measure whether training on generated data can make a model successful when tested in real settings. Our experiments compare different generators from the Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) frameworks on the MNIST and fashion MNIST datasets. Our results show that none of the generative models is able to completely replace true data for training a discriminative model, but they also show that the initial GAN and WGAN are the best choices for generation on the MNIST (Modified National Institute of Standards and Technology) and fashion MNIST databases.
Generalization of the Randall-Sundrum Model Using the Gravitational Model F(T,Θ) ; In this letter, we explore a generalized model based on two scenarios: the Randall-Sundrum model and the gravity model F(T,Θ). We first study the standard Randall-Sundrum gravitational model and then add a function containing two parameters, the torsion and the trace of the energy-momentum tensor, to the main action of the model. Next, we derive the equations of the generalized model and obtain a new critical value for the energy density of the brane. The results show that inflation and the dark-energy-dominated stage can be realized in this model.
OMS-DPM: Optimizing the Model Schedule for Diffusion Probabilistic Models ; Diffusion probabilistic models (DPMs) are a new class of generative models that have achieved state-of-the-art generation quality in various domains. Despite the promise, one major drawback of DPMs is the slow generation speed due to the large number of neural network evaluations required in the generation process. In this paper, we reveal an overlooked dimension, the model schedule, for optimizing the trade-off between generation quality and speed. More specifically, we observe that small models, though having worse generation quality when used alone, could outperform large models in certain generation steps. Therefore, unlike the traditional way of using a single model, using different models in different generation steps in a carefully designed model schedule could potentially improve generation quality and speed simultaneously. We design OMS-DPM, a predictor-based search algorithm, to optimize the model schedule given an arbitrary generation time budget and a set of pre-trained models. We demonstrate that OMS-DPM can find model schedules that improve generation quality and speed over prior state-of-the-art methods across the CIFAR-10, CelebA, ImageNet, and LSUN datasets. When applied to the public checkpoints of the Stable Diffusion model, we are able to accelerate sampling by 2x while maintaining the generation quality.
Learning to Generate Lumped Hydrological Models ; In a lumped hydrological model structure, the hydrological function of a catchment is characterized by only a few parameters. Given a set of parameter values, a numerical function useful for hydrological prediction is generated. Thus, this study assumes that the hydrological function of a catchment can be sufficiently well characterized by a small number of latent variables. By specifying the variable values, a numerical function resembling the hydrological function of a real-world catchment can be generated using a generative model. In this study, a deep learning method is used to learn both the generative model and the latent variable values of different catchments directly from their climate forcing and runoff data, without using catchment attributes. The generative models can be used similarly to a lumped model structure, i.e., by estimating the optimal parameter or latent variable values using a generic model calibration algorithm, an optimal numerical model can be derived. In this study, generative models using eight latent variables were learned from data from over 3,000 catchments worldwide, and the learned generative models were applied to model over 700 different catchments using a generic calibration algorithm. The quality of the resulting optimal models was generally comparable to or better than that obtained using 36 different types of lumped model structures or using non-generative deep learning methods. In summary, this study presents a data-driven approach for representing the hydrological function of a catchment in a low-dimensional space and a method for reconstructing specific hydrological functions from the representations.
How Do Neural Sequence Models Generalize? Local and Global Context Cues for Out-of-Distribution Prediction ; After a neural sequence model encounters an unexpected token, can its behavior be predicted? We show that RNN and transformer language models exhibit structured, consistent generalization in out-of-distribution contexts. We begin by introducing two idealized models of generalization in next-word prediction: a local context model, in which generalization is consistent with the last word observed, and a global context model, in which generalization is consistent with the global structure of the input. In experiments in English, Finnish, Mandarin, and random regular languages, we demonstrate that neural language models interpolate between these two forms of generalization: their predictions are well-approximated by a log-linear combination of local and global predictive distributions. We then show that, in some languages, noise mediates the two forms of generalization: noise applied to input tokens encourages global generalization, while noise in history representations encourages local generalization. Finally, we offer a preliminary theoretical explanation of these results by proving that the observed interpolation behavior is expected in log-linear models with a particular feature correlation structure. These results help explain the effectiveness of two popular regularization schemes and show that aspects of sequence model generalization can be understood and controlled.
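The log-linear combination the abstract refers to has a simple form. The sketch below (toy distributions over a three-word vocabulary, illustrative mixing weight `lam`) shows the geometric interpolation between a local-context and a global-context predictor:

```python
# Geometric (log-linear) interpolation of two next-word distributions:
# p(w) is proportional to p_local(w)^lam * p_global(w)^(1-lam), renormalized.
import numpy as np

def loglinear_mix(p_local, p_global, lam):
    log_p = lam * np.log(p_local) + (1 - lam) * np.log(p_global)
    p = np.exp(log_p)
    return p / p.sum()

p_local = np.array([0.70, 0.20, 0.10])   # consistent with the last word seen
p_global = np.array([0.10, 0.30, 0.60])  # consistent with the global structure
print(loglinear_mix(p_local, p_global, lam=0.5))
```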
Score-Based Generative Models for Molecule Generation ; Recent advances in generative models have made exploring design spaces easier for de novo molecule generation. However, popular generative models like GANs and normalizing flows face challenges such as training instabilities due to adversarial training and architectural constraints, respectively. Score-based generative models sidestep these challenges by modelling the gradient of the log probability density using a score function approximation, as opposed to modelling the density function directly, and sampling from it using annealed Langevin dynamics. We believe that score-based generative models could open up new opportunities in molecule generation due to their architectural flexibility, such as replacing the score function with an SE(3)-equivariant model. In this work, we lay the foundations by testing the efficacy of score-based models for molecule generation. We train a Transformer-based score function on Self-Referencing Embedded Strings (SELFIES) representations of 1.5 million samples from the ZINC dataset and use the Moses benchmarking framework to evaluate the generated samples on a suite of metrics.
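The sampling procedure this abstract leans on, annealed Langevin dynamics, is compact enough to sketch. Here the learned Transformer score network is replaced by the analytically known score of a 1-D Gaussian mixture; the noise schedule and step sizes are illustrative choices, not the paper's.

```python
# Annealed Langevin dynamics: x <- x + (eps/2) * score(x, sigma) + sqrt(eps) * z,
# with the noise scale sigma annealed downward across rounds.
import numpy as np

rng = np.random.default_rng(0)

def score(x, sigma):
    # grad_x log p_sigma(x) for a mixture of N(-4, 1) and N(4, 1), where the
    # smoothing noise adds variance sigma^2 to each component.
    v = 1.0 + sigma**2
    w = np.stack([np.exp(-(x - m) ** 2 / (2 * v)) for m in (-4.0, 4.0)])
    w /= w.sum(0)
    return (w[0] * (-4.0 - x) + w[1] * (4.0 - x)) / v

x = rng.normal(size=1000) * 8.0            # start from broad noise
for sigma in [4.0, 2.0, 1.0, 0.5, 0.1]:    # anneal the noise scale downward
    eps = 0.05 * sigma**2                  # step size shrinks with sigma
    for _ in range(100):
        x += 0.5 * eps * score(x, sigma) + np.sqrt(eps) * rng.normal(size=x.shape)

print(x.mean(), (x > 0).mean())            # samples split between the two modes
```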
FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign ; Malware detection models based on deep learning have been widely used, but recent research shows that deep learning models are vulnerable to adversarial attacks. Adversarial attacks deceive the deep learning model by generating adversarial samples. When adversarial attacks are performed on a malware detection model, the attacker generates adversarial malware with the same malicious functionality as the original malware and makes the detection model classify it as benign software. Studying adversarial malware generation can help model designers improve the robustness of malware detection models. At present, work on adversarial malware generation for byte-to-image malware detection models mainly suffers from problems such as a large amount of injected perturbation and low generation efficiency. Therefore, this paper proposes FGAM (Fast Generate Adversarial Malware), a method for quickly generating adversarial malware, which iterates on perturbed bytes according to the gradient sign to enhance the adversarial capability of the perturbed bytes until the adversarial malware is successfully generated. It is experimentally verified that the success rate of the adversarial malware deception model generated by FGAM is increased by about 84% compared with existing methods.
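The underlying gradient-sign iteration is essentially FGSM applied to a perturbable byte region. The sketch below uses a hypothetical linear surrogate detector so the gradient is available in closed form, and only touches padding bytes so the original functionality is preserved; the model, shapes, and thresholds are illustrative, not the paper's setup (which attacks byte-to-image CNN detectors).

```python
# FGSM-style byte perturbation against a toy surrogate detector: move the
# perturbable bytes against the gradient sign until the sample scores benign.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)                         # surrogate linear detector

def malicious_score(x):                         # higher score = "malware"
    return float(w @ x)

def fgam_like(x, pad_mask, eps=0.05, steps=20):
    x = x.copy()
    for _ in range(steps):
        grad = w                                # d(score)/dx for a linear model
        # perturb only padding bytes, in the direction that lowers the score
        x[pad_mask] -= eps * np.sign(grad)[pad_mask]
        x = np.clip(x, 0.0, 1.0)                # keep bytes in the valid range
        if malicious_score(x) < 0:              # classified benign: stop early
            break
    return x

x = rng.uniform(size=64)                        # normalized byte values
pad = np.zeros(64, bool); pad[48:] = True       # only the appended region
adv = fgam_like(x, pad)
print(malicious_score(x), "->", malicious_score(adv))
```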
DiffAR: Denoising Diffusion Autoregressive Model for Raw Speech Waveform Generation ; Diffusion models have recently been shown to be relevant for high-quality speech generation. Most work has been focused on generating spectrograms, and as such, they further require a subsequent model to convert the spectrogram to a waveform (i.e., a vocoder). This work proposes a diffusion probabilistic end-to-end model for generating a raw speech waveform. The proposed model is autoregressive, generating overlapping frames sequentially, where each frame is conditioned on a portion of the previously generated one. Hence, our model can effectively synthesize an unlimited speech duration while preserving high-fidelity synthesis and temporal coherence. We implemented the proposed model for unconditional and conditional speech generation, where the latter can be driven by an input sequence of phonemes, amplitudes, and pitch values. Working on the waveform directly has some empirical advantages. Specifically, it allows the creation of local acoustic behaviors, like vocal fry, which makes the overall waveform sound more natural. Furthermore, the proposed diffusion model is stochastic and not deterministic; therefore, each inference generates a slightly different waveform variation, enabling an abundance of valid realizations. Experiments show that the proposed model generates speech with superior quality compared with other state-of-the-art neural speech generation systems.
Unifying GANs and Score-Based Diffusion as Generative Particle Models ; Particle-based deep generative models, such as gradient flows and score-based diffusion models, have recently gained traction thanks to their striking performance. Their principle of displacing particle distributions using differential equations is conventionally seen as opposed to the previously widespread generative adversarial networks (GANs), which involve training a pushforward generator network. In this paper, we challenge this interpretation and propose a novel framework that unifies particle and adversarial generative models by framing generator training as a generalization of particle models. This suggests that a generator is an optional addition to any such generative model. Consequently, integrating a generator into a score-based diffusion model and training a GAN without a generator naturally emerge from our framework. We empirically test the viability of these original models as proofs of concepts of potential applications of our framework.
Towards Personalized Prompt-Model Retrieval for Generative Recommendation ; Recommender systems are built to retrieve relevant items to satisfy users' information needs. The candidate corpus usually consists of a finite set of items that are ready to be served, such as videos, products, or articles. With recent advances in Generative AI such as GPT and diffusion models, a new form of recommendation task is yet to be explored where items are to be created by generative models with personalized prompts. Taking image generation as an example, with a single prompt from the user and access to a generative model, it is possible to generate hundreds of new images in a few minutes. How shall we attain personalization in the presence of infinite items? In this preliminary study, we propose a two-stage framework, namely Prompt-Model Retrieval and Generated Item Ranking, to approach this new task formulation. We release GEMRec-18K, a prompt-model interaction dataset with 18K images generated by 200 publicly available generative models paired with a diverse set of 90 textual prompts. Our findings demonstrate the promise of generative model recommendation as a novel personalization problem and the limitations of existing evaluation metrics. We highlight future directions for the RecSys community to advance towards generative recommender systems. Our code and dataset are available at https://github.com/MAPS-research/GEMRec.
Securing Deep Generative Models with Universal Adversarial Signature ; Recent advances in deep generative models have led to the development of methods capable of synthesizing high-quality, realistic images. These models pose threats to society due to their potential misuse. Prior research attempted to mitigate these threats by detecting generated images, but the varying traces left by different generative models make it challenging to create a universal detector capable of generalizing to new, unseen generative models. In this paper, we propose to inject a universal adversarial signature into an arbitrary pre-trained generative model, in order to make its generated contents more detectable and traceable. First, the imperceptible optimal signature for each image can be found by a signature injector through adversarial training. Subsequently, the signature can be incorporated into an arbitrary generator by fine-tuning it with the images processed by the signature injector. In this way, the detector corresponding to the signature can be reused for any fine-tuned generator for tracking the generator identity. The proposed method is validated on the FFHQ and ImageNet datasets with various state-of-the-art generative models, consistently showing a promising detection rate. Code will be made publicly available at https://github.com/zengxianyu/genwm.
Evaluating Generative Models for Graph-to-Text Generation ; Large language models (LLMs) have been widely employed for graph-to-text generation tasks. However, the process of fine-tuning LLMs requires significant training resources and annotation work. In this paper, we explore the capability of generative models to generate descriptive text from graph data in a zero-shot setting. Specifically, we evaluate GPT-3 and ChatGPT on two graph-to-text datasets and compare their performance with that of fine-tuned LLMs such as T5 and BART. Our results demonstrate that generative models are capable of generating fluent and coherent text, achieving BLEU scores of 10.57 and 11.08 on the AGENDA and WebNLG datasets, respectively. However, our error analysis reveals that generative models still struggle with understanding the semantic relations between entities, and they also tend to generate text with hallucinations or irrelevant information. As part of the error analysis, we utilize BERT to detect machine-generated text and achieve high macro-F1 scores. We have made the text generated by generative models publicly available.
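For readers unfamiliar with the metric, BLEU scores like those reported above are commonly computed at corpus level with NLTK; the reference/hypothesis pairs below are toy placeholders.

```python
# Corpus-level BLEU as commonly reported for graph-to-text outputs.
from nltk.translate.bleu_score import corpus_bleu

references = [[["the", "cat", "sat", "on", "the", "mat"]]]  # reference(s) per sample
hypotheses = [["the", "cat", "sat", "on", "a", "mat"]]      # model outputs
print(corpus_bleu(references, hypotheses) * 100)            # scaled to 0-100
```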
Learning to Make Predictions In Partially Observable Environments Without a Generative Model ; When faced with the problem of learning a model of a high-dimensional environment, a common approach is to limit the model to make only a restricted set of predictions, thereby simplifying the learning problem. These partial models may be directly useful for making decisions or may be combined together to form a more complete, structured model. However, in partially observable, non-Markov environments, standard model-learning methods learn generative models, i.e., models that provide a probability distribution over all possible futures (such as POMDPs). It is not straightforward to restrict such models to make only certain predictions, and doing so does not always simplify the learning problem. In this paper we present prediction profile models: non-generative partial models for partially observable systems that make only a given set of predictions, and are therefore far simpler than generative models in some cases. We formalize the problem of learning a prediction profile model as a transformation of the original model-learning problem, and show empirically that one can learn prediction profile models that make a small set of important predictions even in systems that are too complex for standard generative models.
Deep Generative Models for Vehicle Speed Trajectories ; Generating realistic vehicle speed trajectories is a crucial component in evaluating vehicle fuel economy and in predictive control of self-driving cars. Traditional generative models rely on Markov chain methods and can produce accurate synthetic trajectories but are subject to the curse of dimensionality. They do not allow the inclusion of conditional input variables in the generation process. In this paper, we show how extensions to deep generative models allow accurate and scalable generation. The proposed architectures involve recurrent and feed-forward layers and are trained using adversarial techniques. Our models are shown to perform well on generating vehicle trajectories using a model trained on GPS data from the Chicago metropolitan area.
Paths of Friedmann-Robertson-Walker brane models ; The dynamics of brane-world models of dark energy is reviewed. We demonstrate that simple dark energy brane models can be represented as 2-dimensional dynamical systems of a Newtonian type. Hence a fictitious particle moving in a potential well characterizes the model. We investigate the dynamics of the brane models using methods of dynamical systems. The simple brane-world models can be successfully unified within a single scheme: an ensemble of brane dark energy models. We characterize generic models of this ensemble, as well as exceptional ones, using the notion of structural stability (instability). Then, due to the Peixoto theorem, we can characterize the class of generic brane models. We show that the global dynamics of the generic brane models of dark energy is topologically equivalent to that of the concordance ΛCDM model. We also demonstrate that the bouncing models, or models in which the acceleration of the universe is only a transient phenomenon, are non-generic or exceptional cases in the ensemble. We argue that an adequate brane model of dark energy should be a generic case in the ensemble of FRW dynamical systems on the plane.
Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation ; Despite the remarkable performance of large-scale generative models in open-domain conversation, they are known to be less practical for building real-time conversation systems due to high latency. On the other hand, retrieval models can return responses with much lower latency but show inferior performance to the large-scale generative models, since the conversation quality is bounded by the pre-defined response set. To take advantage of both approaches, we propose a new training method called G2R (Generative-to-Retrieval distillation) that preserves the efficiency of a retrieval model while leveraging the conversational ability of a large-scale generative model by infusing the knowledge of the generative model into the retrieval model. G2R consists of two distinct techniques of distillation: the data-level G2R augments the dialogue dataset with additional responses generated by the large-scale generative model, and the model-level G2R transfers the response quality score assessed by the generative model to the score of the retrieval model via a knowledge distillation loss. Through extensive experiments including human evaluation, we demonstrate that our retrieval-based conversation system trained with G2R shows substantially improved performance compared to the baseline retrieval model while showing significantly lower inference latency than the large-scale generative models.
More on Generalized Heisenberg Ferromagnet Models ; We generalize the integrable Heisenberg ferromagnet model according to each Hermitian symmetric space and address various new aspects of the generalized model. Using the first-order formalism of generalized spins, which are defined on the coadjoint orbits of arbitrary groups, we construct a Lagrangian of the generalized model from which we obtain the Hamiltonian structure explicitly in the case of the CP^{N-1} orbit. The gauge equivalence between the generalized Heisenberg ferromagnet and the nonlinear Schrödinger models is given. Using the equivalence, we find infinitely many conserved integrals of both models.
On the Cofibrant Generation of Model Categories ; The paper studies the problem of the cofibrant generation of a model category. We prove that, assuming Vopěnka's principle, every cofibrantly generated model category is Quillen equivalent to a combinatorial model category. We discuss cases where this result implies that the class of weak equivalences in a cofibrantly generated model category is accessibly embedded. We also prove a necessary condition for a model category to be cofibrantly generated by a set of generating cofibrations between cofibrant objects.
Generating Images from Captions with Attention ; Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset.
Formal Context Generation using Dirichlet Distributions ; We suggest an improved way to randomly generate formal contexts based on Dirichlet distributions. For this purpose we investigate the predominant way to generate formal contexts, a coin-tossing model, recapitulate some of its shortcomings, and examine its stochastic model. Building on this, we propose our Dirichlet model and develop an algorithm employing this idea. By comparing our generation model to a coin-tossing model, we show that our approach is a significant improvement with respect to the variety of contexts generated. Finally, we outline a possible application in null model generation for formal contexts.
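A compact sketch of the contrast the abstract draws (an illustrative reading, not the paper's exact algorithm): a coin-tossing model fills every cell of the incidence matrix with one fixed probability, whereas a Dirichlet draw lets the attribute incidence probabilities vary across the context.

```python
# Dirichlet-based formal context generation, toy version: per-attribute
# Bernoulli probabilities come from a Dirichlet draw (scaled to [0, 1]),
# instead of a single fixed coin-toss probability for every cell.
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_context(n_objects, n_attributes, alpha):
    probs = rng.dirichlet(alpha * np.ones(n_attributes)) * n_attributes / 2
    probs = np.clip(probs, 0.0, 1.0)            # one probability per attribute
    return rng.random((n_objects, n_attributes)) < probs  # boolean incidence

ctx = dirichlet_context(10, 6, alpha=0.5)
print(ctx.astype(int))
print("density:", ctx.mean())   # varies with alpha, unlike a fixed coin toss
```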
Diffusion models for Handwriting Generation ; In this paper, we propose a diffusion probabilistic model for handwriting generation. Diffusion models are a class of generative models where samples start from Gaussian noise and are gradually denoised to produce output. Our method of handwriting generation does not require using any text-recognition-based, writer-style-based, or adversarial loss functions, nor does it require training of auxiliary networks. Our model is able to incorporate writer stylistic features directly from image data, eliminating the need for user interaction during sampling. Experiments reveal that our model is able to generate realistic, high-quality images of handwritten text in a style similar to a given writer's. Our implementation can be found at https://github.com/tcl9876/Diffusion-Handwriting-Generation
Text Generation Based on Generative Adversarial Nets with Latent Variable ; In this paper, we propose a model using a generative adversarial net (GAN) to generate realistic text. Instead of using a standard GAN, we combine a variational autoencoder (VAE) with a generative adversarial net. The use of high-level latent random variables is helpful for learning the data distribution and solving the problem that generative adversarial nets always emit similar data. We propose the VGAN model, where the generative model is composed of a recurrent neural network and a VAE. The discriminative model is a convolutional neural network. We train the model via policy gradient. We apply the proposed model to the task of text generation and compare it to other recent neural network based models, such as the recurrent neural network language model and SeqGAN. We evaluate the performance of the model by calculating negative log-likelihood and the BLEU score. We conduct experiments on three benchmark datasets, and the results show that our model outperforms other previous models.
Consistency Models ; Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high-quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64, and LSUN 256x256.
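The multistep sampling loop that consistency models enable is simple to sketch: one jump from noise to data, then optional re-noise/denoise rounds that trade compute for quality. In the sketch below, `f` is a toy stand-in for a trained consistency function f(x_t, t) -> x, and the noise schedule and re-noising rule are simplified relative to the paper's.

```python
# Multistep consistency sampling: start with a single noise-to-data jump,
# then optionally re-noise to a lower level and denoise again in one step.
import numpy as np

rng = np.random.default_rng(0)

def f(x, t):
    # Placeholder consistency function: a real model maps a noisy sample at
    # noise level t directly to an estimate of the clean sample.
    return x / (1.0 + t**2)

def consistency_sample(noise_levels, dim=4):
    t_max = noise_levels[0]
    x = f(rng.normal(size=dim) * t_max, t_max)   # one-step generation
    for t in noise_levels[1:]:                   # optional refinement rounds
        x_t = x + rng.normal(size=dim) * t       # re-noise to level t
        x = f(x_t, t)                            # denoise in one step
    return x

print(consistency_sample([80.0, 20.0, 5.0, 1.0]))
```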
Dirichlet Diffusion Score Model for Biological Sequence Generation ; Designing biological sequences is an important challenge that requires satisfying complex constraints and thus is a natural problem to address with deep generative modeling. Diffusion generative models have achieved considerable success in many applications. The score-based generative stochastic differential equation (SDE) model is a continuous-time diffusion model framework that enjoys many benefits, but the originally proposed SDEs are not naturally designed for modeling discrete data. To develop generative SDE models for discrete data such as biological sequences, here we introduce a diffusion process defined in the probability simplex space whose stationary distribution is the Dirichlet distribution. This makes diffusion in continuous space natural for modeling discrete data. We refer to this approach as the Dirichlet diffusion score model. We demonstrate that this technique can generate samples that satisfy hard constraints using a Sudoku generation task. This generative model can also solve Sudoku, including hard puzzles, without additional training. Finally, we applied this approach to develop the first human promoter DNA sequence design model and showed that designed sequences share similar properties with natural promoter sequences.
A Conditional Generative Chatbot using Transformer Model ; A chatbot serves as a communication tool between a human user and a machine to produce an appropriate answer based on the human input. In more recent approaches, a combination of natural language processing and sequential models is used to build a generative chatbot. The main challenge of these models is their sequential nature, which leads to less accurate results. To tackle this challenge, in this paper, a novel architecture is proposed using conditional Wasserstein Generative Adversarial Networks and a transformer model for answer generation in chatbots. While the generator of the proposed model consists of a full transformer model to generate an answer, the discriminator includes only the encoder part of a transformer model followed by a classifier. To the best of our knowledge, this is the first time that a generative chatbot is proposed using an embedded transformer in both the generator and discriminator models. Relying on the parallel computing of the transformer model, the results of the proposed model on the Cornell Movie-Dialogs corpus and the ChitChat datasets confirm the superiority of the proposed model compared to state-of-the-art alternatives using different evaluation metrics.
Paired 3D Model Generation with Conditional Generative Adversarial Networks ; Generative Adversarial Networks (GANs) are shown to be successful at generating new and realistic samples, including 3D object models. Conditional GAN, a variant of GANs, allows generating samples in given conditions. However, objects generated for each condition are different, and it does not allow generation of the same object in different conditions. In this paper, we first adapt conditional GAN, which is originally designed for 2D image generation, to the problem of generating 3D models in different rotations. We then propose a new approach to guide the network to generate the same 3D sample in different and controllable rotation angles (sample pairs). Unlike previous studies, the proposed method does not require modification of the standard conditional GAN architecture, and it can be integrated into the training step of any conditional GAN. Experimental results and visual comparison of 3D models show that the proposed method is successful at generating model pairs in different conditions.
MoFlow: An Invertible Flow Model for Generating Molecular Graphs ; Generating molecular graphs with desired chemical properties driven by deep graph generative models provides a very promising way to accelerate the drug discovery process. Such graph generative models usually consist of two steps: learning latent representations and generating molecular graphs. However, generating novel and chemically valid molecular graphs from latent representations is very challenging because of the chemical constraints and combinatorial complexity of molecular graphs. In this paper, we propose MoFlow, a flow-based graph generative model to learn invertible mappings between molecular graphs and their latent representations. To generate molecular graphs, our MoFlow first generates bonds (edges) through a Glow-based model, then generates atoms (nodes) given bonds by a novel graph conditional flow, and finally assembles them into a chemically valid molecular graph with a post-hoc validity correction. Our MoFlow has merits including exact and tractable likelihood training, efficient one-pass embedding and generation, chemical validity guarantees, 100% reconstruction of training data, and good generalization ability. We validate our model on four tasks: molecular graph generation and reconstruction, visualization of the continuous latent space, property optimization, and constrained property optimization. Our MoFlow achieves state-of-the-art performance, which implies its potential efficiency and effectiveness for exploring large chemical space for drug discovery.
Will Large-scale Generative Models Corrupt Future Datasets? ; Recently proposed large-scale text-to-image generative models such as DALL·E 2, Midjourney, and Stable Diffusion can generate high-quality and realistic images from users' prompts. Not limited to the research community, ordinary Internet users enjoy these generative models, and consequently a tremendous amount of generated images have been shared on the Internet. Meanwhile, today's success of deep learning in the computer vision field owes a lot to images collected from the Internet. These trends lead us to a research question: will such generated images impact the quality of future datasets and the performance of computer vision models positively or negatively? This paper empirically answers this question by simulating contamination. Namely, we generate ImageNet-scale and COCO-scale datasets using a state-of-the-art generative model and evaluate models trained with contaminated datasets on various tasks, including image classification and image generation. Throughout the experiments, we conclude that generated images negatively affect downstream performance, while the significance depends on the task and the amount of generated images. The generated datasets and the code for the experiments are publicly released for future research at https://github.com/moskomule/dataset-contamination.
Conditional MoCoGAN for Zero-Shot Video Generation ; We propose a conditional generative adversarial network (GAN) model for zero-shot video generation. In this study, we explore the zero-shot conditional generation setting; in other words, we generate unseen videos from training samples with missing classes. The task is an extension of conditional data generation. The key idea is to learn disentangled representations in the latent space of a GAN. To realize this objective, we base our model on the motion- and content-decomposed GAN and conditional GAN for image generation. We build the model to find better-disentangled representations and to generate good-quality videos. We demonstrate the effectiveness of our proposed model through experiments on the Weizmann action database and the MUG facial expression database.
Safer Together: Machine Learning Models Trained on Shared Accident Datasets Predict Construction Injuries Better than Company-Specific Models ; In this study, we capitalized on a collective dataset repository of 57k accidents from 9 companies belonging to 3 domains and tested whether models trained on multiple datasets (generic models) predicted safety outcomes better than company-specific models. We experimented with full generic models trained on all data, per-domain generic models (construction, electric T&D, oil & gas), and with ensembles of generic and specific models. Results are very positive, with generic models outperforming the company-specific models in most cases while also generating finer-grained, hence more useful, forecasts. Successful generic models remove the need to train company-specific models, saving a lot of time and resources, and give small companies, whose accident datasets are too limited to train their own models, access to safety outcome predictions. It may still, however, be advantageous to train specific models to get an extra boost in performance through ensembling with the generic models. Overall, by learning lessons from a pool of datasets whose accumulated experience far exceeds that of any single company, and by making these lessons easily accessible in the form of simple forecasts, generic models tackle the holy grail of safety: cross-organizational learning and dissemination in the construction industry.
On model structure for coreflective subcategories of a model category ; Let C be a coreflective subcategory of a cofibrantly generated model category D. In this paper we show that, under suitable conditions, C admits a cofibrantly generated model structure which is left Quillen adjunct to the model structure on D. As an application, we prove that well-known convenient categories of topological spaces, such as k-spaces, compactly generated spaces, and Δ-generated spaces [DN] (called numerically generated in [KKH]), admit a finitely generated model structure which is Quillen equivalent to the standard model structure on the category Top of topological spaces.
Are generative deep models for novelty detection truly better? ; Many deep models have been recently proposed for anomaly detection. This paper presents a comparison of selected generative deep models and classical anomaly detection methods on an extensive number of non-image benchmark datasets. We provide a statistical comparison of the selected models across many configurations, architectures, and hyperparameters. We arrive at the conclusion that the performance of the generative models is determined by the process of selecting their hyperparameters. Specifically, the performance of the deep generative models deteriorates with a decreasing amount of anomalous samples used in hyperparameter selection. In practical scenarios of anomaly detection, none of the deep generative models systematically outperforms the k-NN baseline.
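For a concrete reference point, the k-NN detector mentioned above fits in a few lines; this is a generic sketch (not the paper's exact experimental protocol), scoring each test point by its distance to the k-th nearest training point.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_anomaly_scores(train_X: np.ndarray, test_X: np.ndarray, k: int = 5) -> np.ndarray:
    """Score test points by distance to their k-th nearest training neighbor.

    Higher scores indicate more anomalous points; the training data is assumed
    to consist (mostly) of normal samples.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(train_X)
    distances, _ = nn.kneighbors(test_X)
    return distances[:, -1]  # distance to the k-th neighbor
```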
Generalization and Memorization: The Bias Potential Model ; Models for learning probability distributions, such as generative models and density estimators, behave quite differently from models for learning functions. One example is found in the memorization phenomenon, namely the ultimate convergence to the empirical distribution, that occurs in generative adversarial networks (GANs). For this reason, the issue of generalization is more subtle than it is for supervised learning. For the bias potential model, we show that dimension-independent generalization accuracy is achievable if early stopping is adopted, even though, in the long term, the model either memorizes the samples or diverges.
Visual Conceptual Blending with Large-scale Language and Vision Models ; We ask the question: to what extent can recent large-scale language and image generation models blend visual concepts? Given an arbitrary object, we identify a relevant object and generate a single-sentence description of the blend of the two using a language model. We then generate a visual depiction of the blend using a text-based image generation model. Quantitative and qualitative evaluations demonstrate the superiority of language models over classical methods for conceptual blending, and of recent large-scale image generation models over prior models for the visual depiction.
Generative Adversarial Imitation Learning for Empathy-based AI ; Generative adversarial imitation learning (GAIL) is a model-free algorithm that has been shown to provide strong results in imitating complex behaviors in high-dimensional environments. In this paper, we utilize the GAIL model for text generation to develop empathy-based, context-aware conversational AI. Our model uses an expert trajectory of empathetic prompt-response dialogues which can accurately exhibit the correct empathetic emotion when generating a response. The generator of the GAIL model uses the GPT-2 sequential pre-trained language model, which has 117 million parameters and was trained on 40 GB of internet data. We propose a novel application of an approach used in transfer learning to fine-tune the GPT-2 model in order to generate concise, user-specific empathetic responses validated against the discriminator. Our novel GAIL model utilizes a sentiment-analysis, history-based reinforcement learning approach to respond empathetically to human interactions in a personalized manner. We find that our model's response scores on various human-generated prompts collected from the Facebook Empathetic Dialogues dataset outperform baseline counterparts. Moreover, our model improves upon various recently developed history-based conversational AI models, as our model's performance over a sustained conversation of 3 or more interactions outperforms that of similar conversational AI models.
Towards an Automatic Optimisation Model Generator Assisted with Generative Pre-trained Transformer ; This article presents a framework for generating optimisation models using a pre-trained generative transformer. The framework involves specifying the features that the optimisation model should have and using a language model to generate an initial version of the model. The model is then tested and validated, and if it contains build errors, an automatic editing process is triggered. An experiment was performed using MiniZinc as the target language and two GPT-3.5 language models for generation and debugging. The results show that the use of language models for the generation of optimisation models is feasible, with some models satisfying the requested specifications while others require further refinement. The study provides promising evidence for the use of language models in the modelling of optimisation problems and suggests avenues for future research.
Skeleton-to-Response: Dialogue Generation Guided by Retrieval Memory ; For dialogue response generation, traditional generative models generate responses solely from input queries. Such models rely on insufficient information for generating a specific response, since a given query could be answered in multiple ways. Consequently, those models tend to output generic and dull responses, impeding the generation of informative utterances. Recently, researchers have attempted to fill the information gap by exploiting information retrieval techniques. When generating a response for a current query, similar dialogues retrieved from the entire training data are considered as an additional knowledge source. While this may harvest massive information, the generative models can be overwhelmed, leading to undesirable performance. In this paper, we propose a new framework which exploits retrieval results via a skeleton-then-response paradigm. First, a skeleton is generated by revising the retrieved responses. Then, a novel generative model uses both the generated skeleton and the original query for response generation. Experimental results show that our approach significantly improves the diversity and informativeness of the generated responses.
Alteration-free and Model-agnostic Origin Attribution of Generated Images ; Recently, there has been growing attention to image generation models. However, concerns have emerged regarding potential misuse and intellectual property (IP) infringement associated with these models. It is therefore necessary to analyze the origin of images by inferring whether a specific image was generated by a particular model, i.e., origin attribution. Existing methods are limited in their applicability to specific types of generative models and require additional steps during training or generation. This restricts their use with pre-trained models that lack these specific operations and may compromise the quality of image generation. To overcome this problem, we first develop an alteration-free and model-agnostic origin attribution method via input reverse-engineering on image generation models, i.e., inverting the input of a particular model for a specific image. Given a particular model, we first analyze the differences in the hardness of reverse-engineering tasks for images generated by the given model and for other images. Based on our analysis, we propose a method that utilizes the reconstruction loss of reverse-engineering to infer the origin. Our proposed method effectively distinguishes between images generated by a specific generative model and other images, including those generated by different models and real images.
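A toy sketch of the reverse-engineering idea above, assuming a latent-variable generator G: z -> image: optimize a latent code to reconstruct the query image and use the final reconstruction loss as the attribution score (a lower loss suggests the image came from this model). Names and hyperparameters are hypothetical.

```python
import torch

def reconstruction_score(generator, image, latent_dim=128, steps=500, lr=0.05):
    """Invert `generator` for one image; return the final reconstruction loss.

    Images produced by this generator should be easier to invert (lower loss)
    than real images or images from other models.
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((generator(z) - image) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():  # recompute the loss at the final latent code
        return torch.mean((generator(z) - image) ** 2).item()
```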
Equality conditions for internal entropies of certain classical and quantum models ; Mathematical models use information from past observations to generate predictions about the future. If two models make identical predictions, the one that needs less information from the past to do so is preferred. It is already known that certain classical models (certain Hidden Markov Models called ε-machines, which are often optimal classical models) are not in general the preferred ones. We extend this result and show that even optimal classical models (models with minimal internal entropy) are in general not the best possible models (called ideal models). Instead of optimal classical models, we can construct quantum models which are significantly better but not yet the best possible ones, i.e., they have a strictly smaller internal entropy. In this paper we show conditions under which the internal entropies of classical models and specific quantum models coincide. Furthermore, it turns out that this situation appears very rarely. An example shows that our results hold only for the specific quantum model construction and in general not for alternative constructions. Furthermore, another example shows that classical models with minimal internal entropy need not be related to quantum models with minimal internal entropy.
Development of a Mathematical Model for Harbor Maneuvers to Realize Modeling Automation ; A simulation environment of harbor maneuvers is critical for developing automatic berthing. Dynamic models are widely used to estimate harbor maneuvers. However, human decision-making and data analysis are necessary to derive, select, and identify the model, because each actuator configuration needs an inherent mathematical expression. We propose a new dynamic model for arbitrary configurations to overcome that issue. The new model is a hybrid model that combines the simplicity of derivation of the Taylor expansion with the high degree of freedom of the MMG low-speed maneuvering model. We also developed a method to select mathematical expressions for the proposed model using system identification. Because the proposed model can easily derive mathematical expressions, we can generate multiple models simultaneously and choose the best one. This method can reduce the workload of model identification and selection. Furthermore, the proposed method will enable the automatic generation of dynamic models, because it reduces the human decision-making and data analysis needed for model generation by depending less on knowledge of ship hydrodynamics and captive model tests. The proposed method was validated with free-running model tests and showed equivalent or better estimation performance than the conventional model generation method.
A Modern Perspective on Query Likelihood with Deep Generative Retrieval Models ; Existing neural ranking models follow the text matching paradigm, where document-to-query relevance is estimated through predicting a matching score. Drawing from the rich literature on classical generative retrieval models, we introduce and formalize the paradigm of deep generative retrieval models, defined via the cumulative probabilities of generating query terms. This paradigm offers a grounded probabilistic view of relevance estimation while still enabling the use of modern neural architectures. In contrast to the matching paradigm, the probabilistic nature of generative rankers readily offers a fine-grained measure of uncertainty. We adopt several current neural generative models in our framework and introduce a novel generative ranker, T-PGN, which combines the encoding capacity of Transformers with the Pointer Generator Network model. We conduct an extensive set of evaluation experiments on passage retrieval, leveraging the MS MARCO Passage Re-ranking and TREC Deep Learning 2019 Passage Re-ranking collections. Our results show significantly higher performance of the T-PGN model when compared with other generative models. Lastly, we demonstrate that exploiting the uncertainty information of deep generative rankers opens new perspectives on query-collection understanding and significantly improves the cutoff prediction task.
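To make the generative ranking paradigm concrete, here is a minimal sketch that scores a document by the log-likelihood of generating the query from it, using an off-the-shelf seq2seq model (t5-small) as a stand-in for the paper's rankers; the model choice and scoring details are illustrative assumptions, not the paper's setup.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def query_log_likelihood(document: str, query: str) -> float:
    """Relevance score = summed log-probability of query tokens given the document."""
    enc = tokenizer(document, return_tensors="pt", truncation=True)
    labels = tokenizer(query, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)   # out.loss is mean token cross-entropy
    return -out.loss.item() * labels.size(1)  # convert mean NLL to summed log-prob

# Rank candidate passages for a query by this score, descending:
docs = ["The cat sat on the mat.", "Deep generative retrieval models rank passages."]
ranked = sorted(docs, key=lambda d: query_log_likelihood(d, "generative retrieval"),
                reverse=True)
```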
Reverse Engineering Configurations of Neural Text Generation Models ; This paper seeks to develop a deeper understanding of the fundamental properties of neural text generation models. The study of artifacts that emerge in machine-generated text as a result of modeling choices is a nascent research area. Previously, the extent and degree to which these artifacts surface in generated text had not been well studied. In the spirit of better understanding generative text models and their artifacts, we propose the new task of distinguishing which of several variants of a given model generated a piece of text, and we conduct an extensive suite of diagnostic tests to observe whether modeling choices (e.g., sampling methods, top-k probabilities, model architectures, etc.) leave detectable artifacts in the text they generate. Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone. This suggests that neural text generators may be more sensitive to various modeling choices than previously thought.
3D human motion generation from text via gesture action classification and an autoregressive model ; In this paper, a deep learning-based model for 3D human motion generation from text is proposed via gesture action classification and an autoregressive model. The model focuses on generating special gestures that express human thinking, such as waving and nodding. To achieve this goal, the proposed method predicts expressions from sentences using a text classification model based on a pre-trained language model, and generates gestures using a gated recurrent unit (GRU)-based autoregressive model. In particular, we propose a loss on the embedding space for restoring raw motions and generating intermediate motions well. Moreover, a novel data augmentation method and a stop token are proposed to generate variable-length motions. To evaluate the text classification model and the 3D human motion generation model, a gesture action classification dataset and an action-based gesture dataset are collected. In several experiments, the proposed method successfully generates perceptually natural and realistic 3D human motion from text. Moreover, we verified the effectiveness of the proposed method using a publicly available action recognition dataset to evaluate cross-dataset generalization performance.
Gradient Estimation for Unseen Domain Risk Minimization with Pre-Trained Models ; Domain generalization aims to build generalized models that perform well on unseen domains when only source domains are available for model optimization. Recent studies have shown that large-scale pre-trained models can enhance domain generalization by leveraging their generalization power. However, these pre-trained models still lack target task-specific knowledge due to discrepancies between the pre-training objectives and the target task. Although the task-specific knowledge could be learned from source domains by fine-tuning, this hurts the generalization power of pre-trained models due to gradient bias toward the source domains. To alleviate this problem, we propose a new domain generalization method that estimates unobservable gradients that reduce potential risks in unseen domains, using a large-scale pre-trained model. These estimated unobservable gradients allow the pre-trained model to learn task-specific knowledge further while preserving its generalization ability by relieving the gradient bias. Our experimental results show that our method outperforms baseline methods on DomainBed, a standard benchmark for domain generalization. We also provide extensive analyses to demonstrate that the pre-trained model can learn task-specific knowledge without sacrificing its generalization power.
LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction ; Instruction tuning enables language models to generalize more effectively and better follow user intent. However, obtaining instruction data can be costly and challenging. Prior works employ methods such as expensive human annotation, crowdsourced datasets with alignment issues, or generating noisy examples via LLMs. We introduce the LongForm dataset, which is created by leveraging English corpus examples with augmented instructions. We select a diverse set of human-written documents from existing corpora such as C4 and Wikipedia and generate instructions for the given documents via LLMs. This approach provides a cheaper and cleaner instruction-tuning dataset, one well suited for long text generation. We fine-tune T5, OPT, and LLaMA models on our dataset and show that even smaller LongForm models have good generalization capabilities for text generation. Our models outperform 10x larger language models without instruction tuning on various tasks such as story/recipe generation and long-form question answering. Moreover, LongForm models outperform prior instruction-tuned models such as FLAN-T5 and Alpaca by a large margin. Finally, our models can effectively follow and answer multilingual instructions; we demonstrate this for news generation. We publicly release our data and models at https://github.com/akoksal/LongForm.
PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model ; Autoregressive models for text sometimes generate repetitive and low-quality output because errors accumulate during the steps of generation. This issue is often attributed to exposure bias: the difference between how a model is trained and how it is used during inference. Denoising diffusion models provide an alternative approach in which a model can revisit and revise its output. However, they can be computationally expensive, and prior efforts on text have led to models that produce less fluent output compared to autoregressive models, especially for longer text and paragraphs. In this paper, we propose PLANNER, a model that combines latent semantic diffusion with autoregressive generation to generate fluent text while exercising global control over paragraphs. The model achieves this by combining an autoregressive decoding module with a planning module that uses latent diffusion to generate semantic paragraph embeddings in a coarse-to-fine manner. The proposed method is evaluated on various conditional generation tasks, and results on semantic generation, text completion, and summarization show its effectiveness in generating high-quality long-form text in an efficient manner.
Multi-span Style Extraction for Generative Reading Comprehension ; Generative machine reading comprehension (MRC) requires a model to generate well-formed answers. For this type of MRC, the answer generation method is crucial to model performance. However, generative models, which are supposed to be the right models for the task, generally perform poorly. At the same time, single-span extraction models have been proven effective for extractive MRC, where the answer is constrained to a single span in the passage. Nevertheless, they generally suffer from generating incomplete answers or introducing redundant words when applied to generative MRC. Thus, we extend the single-span extraction method to multi-span, proposing a new framework which enables generative MRC to be smoothly solved as multi-span extraction. Thorough experiments demonstrate that this novel approach can alleviate the dilemma between generative models and single-span models and produce answers with better-formed syntax and semantics.
Regularising Inverse Problems with Generative Machine Learning Models ; Deep neural network approaches to inverse imaging problems have produced impressive results in the last few years. In this paper, we consider the use of generative models in a variational regularisation approach to inverse problems. The considered regularisers penalise images that are far from the range of a generative model that has learned to produce images similar to a training dataset. We name this family generative regularisers. The success of generative regularisers depends on the quality of the generative model, and so we propose a set of desired criteria to assess generative models and guide future research. In our numerical experiments, we evaluate three common generative models (autoencoders, variational autoencoders, and generative adversarial networks) against our desired criteria. We also test three different generative regularisers on the inverse problems of deblurring, deconvolution, and tomography. We show that restricting solutions of the inverse problem to lie exactly in the range of a generative model can give good results, but that allowing small deviations from the range of the generator produces more consistent results.
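A minimal PyTorch sketch of the hard-constraint variant described above, assuming a trained generator G and a differentiable forward operator A (both placeholders): solve min_z ||A(G(z)) - y||^2 + lam*||z||^2 so the reconstruction lies in the range of G. The soft variant that allows small deviations from the range would add a penalised slack image u with x = G(z) + u. All names and settings are illustrative.

```python
import torch

def reconstruct(generator, forward_op, y, latent_dim=64, lam=1e-2,
                steps=1000, lr=1e-2):
    """Variational reconstruction restricted to the range of a generator."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)                           # candidate image in range(G)
        data_fit = torch.sum((forward_op(x) - y) ** 2)
        loss = data_fit + lam * torch.sum(z ** 2)  # penalise unlikely latents
        loss.backward()
        opt.step()
    return generator(z).detach()
```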
Generalized Univariate Distributions and a New Asymmetric Laplace Model ; This work provides a survey of the general class of distributions generated from the mixture of beta random variables. We provide an extensive review of the literature concerning the generation of new distributions via the inverse CDF transformation. In particular, we account for the beta-generated and Kumaraswamy-generated families of distributions. We provide a brief summary of each of these families of distributions. We also propose a new asymmetric mixture distribution, which is an alternative to beta-generated distributions. We provide basic properties of this new class of distributions generated from the Laplace model. We also address the issue of parameter estimation for this new skew generalized Laplace model.
Code Generator Composition for Model-Driven Engineering of Robotics Component & Connector Systems ; Engineering software for robotics applications requires multi-domain and application-specific solutions. Model-driven engineering and modeling language integration provide means for developing specialized, yet reusable, models of robotics software architectures. Code generators transform these platform-independent models into executable code specific to robotic platforms. Generative software engineering for multi-domain applications requires not only the integration of modeling languages but also the integration of validation mechanisms and code generators. In this paper we sketch a conceptual model for code generator composition and show an instantiation of this model in the MontiArcAutomaton framework. MontiArcAutomaton allows modeling software architectures as component and connector models with different component behavior modeling languages. Effective means for code generator integration are a necessity for the post-hoc integration of application-specific languages in model-based robotics software engineering.
On the goodness-of-fit of generalized linear geostatistical models ; We propose a generalization of Zhang's coefficient of determination to generalized linear geostatistical models and illustrate its application to river-blindness mapping. The generalized coefficient of determination has a more intuitive interpretation than other measures of predictive performance and allows one to assess the individual contributions of each explanatory variable and of the random effects to spatial prediction. The developed methodology is also more widely applicable to any generalized linear mixed model.
Descriptive inner model theory ; A paper for a general audience about descriptive inner model theory.
Cooperative Training of Descriptor and Generator Networks ; This paper studies the cooperative training of two generative models for image modeling and synthesis. Both models are parametrized by convolutional neural networks (ConvNets). The first model is a deep energy-based model, whose energy function is defined by a bottom-up ConvNet that maps the observed image to the energy; we call it the descriptor network. The second model is a generator network, which is a nonlinear version of factor analysis; it is defined by a top-down ConvNet that maps the latent factors to the observed image. The maximum likelihood learning algorithms of both models involve MCMC sampling such as Langevin dynamics. We observe that the two learning algorithms can be seamlessly interwoven into a cooperative learning algorithm that can train both models simultaneously. Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthesized examples to initialize a finite-step MCMC that samples from and trains the energy-based descriptor model. After that, the generator model learns from how the MCMC changes its synthesized examples. That is, the descriptor model teaches the generator model via MCMC, so that the generator model accumulates the MCMC transitions and reproduces them by direct ancestral sampling. We call this scheme MCMC teaching. We show that the cooperative algorithm can learn highly realistic generative models.
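A toy PyTorch sketch of one iteration of the cooperative scheme described above, assuming generator: z -> x and descriptor: x -> energy are small networks; the step sizes and counts are illustrative, not the paper's settings.

```python
import torch

def langevin_refine(x, descriptor, steps=20, step_size=0.01):
    """Finite-step Langevin dynamics under the descriptor's energy."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = descriptor(x).sum()
        grad, = torch.autograd.grad(energy, x)
        x = (x - 0.5 * step_size ** 2 * grad
             + step_size * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

def cooperative_step(generator, descriptor, real_x, g_opt, d_opt, latent_dim=64):
    z = torch.randn(real_x.size(0), latent_dim)
    x_init = generator(z).detach()                  # generator initializes MCMC
    x_refined = langevin_refine(x_init, descriptor) # descriptor refines samples
    # Descriptor update: lower energy on data, raise it on refined synthesis
    d_loss = descriptor(real_x).mean() - descriptor(x_refined).mean()
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator update ("MCMC teaching"): reproduce the MCMC-refined samples
    g_loss = torch.mean((generator(z) - x_refined) ** 2)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```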
A Model to Search for Synthesizable Molecules ; Deep generative models are able to suggest new organic molecules by generating strings, trees, and graphs representing their structure. While such models allow one to generate molecules with desirable properties, they give no guarantees that the molecules can actually be synthesized in practice. We propose a new molecule generation model, mirroring a more realistic real-world process, where (a) reactants are selected and (b) combined to form more complex molecules. More specifically, our generative model proposes a bag of initial reactants selected from a pool of commercially available molecules, and uses a reaction model to predict how they react together to generate new molecules. We first show that the model can generate diverse, valid, and unique molecules thanks to the useful inductive biases of modeling reactions. Furthermore, our model allows chemists to interrogate not only the properties of the generated molecules but also the feasibility of the synthesis routes. We conclude by using our model to solve retrosynthesis problems, predicting a set of reactants that can produce a target product.
AraGPT2: Pre-Trained Transformer for Arabic Language Generation ; Recently, pre-trained transformer-based architectures have proven to be very efficient at language modeling and understanding, given that they are trained on a large enough corpus. Applications in language generation for Arabic are still lagging in comparison to other NLP advances, primarily due to the lack of advanced Arabic language generation models. In this paper, we develop the first advanced Arabic language generation model, AraGPT2, trained from scratch on a large Arabic corpus of internet text and news articles. Our largest model, AraGPT2-mega, has 1.46 billion parameters, which makes it the largest Arabic language model available. The mega model was evaluated and showed success on different tasks, including synthetic news generation and zero-shot question answering. For text generation, our best model achieves a perplexity of 29.8 on held-out Wikipedia articles. A study conducted with human evaluators showed the significant success of AraGPT2-mega in generating news articles that are difficult to distinguish from articles written by humans. We thus develop and release an automatic discriminator model with 98% accuracy in detecting model-generated text. The models are also publicly available, hoping to encourage new research directions and applications for Arabic NLP.
Pruning's Effect on Generalization Through the Lens of Training and Regularization ; Practitioners frequently observe that pruning improves model generalization. A long-standing hypothesis based on the bias-variance tradeoff attributes this generalization improvement to model size reduction. However, recent studies on overparameterization characterize a new model size regime in which larger models achieve better generalization. Pruning models in this overparameterized regime leads to a contradiction: while theory predicts that reducing model size harms generalization, pruning to a range of sparsities nonetheless improves it. Motivated by this contradiction, we re-examine pruning's effect on generalization empirically. We show that size reduction cannot fully account for the generalization-improving effect of standard pruning algorithms. Instead, we find that pruning leads to better training at specific sparsities, improving the training loss over the dense model. We find that pruning also leads to additional regularization at other sparsities, reducing the accuracy degradation due to noisy examples relative to the dense model. Pruning extends model training time and reduces model size; these two factors improve training and add regularization, respectively. We empirically demonstrate that both factors are essential to fully explaining pruning's impact on generalization.
MetaCoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation ; Training generative models that can generate high-quality text with sufficient diversity is an important open problem for the Natural Language Generation (NLG) community. Recently, generative adversarial models have been applied extensively to text generation tasks, where the adversarially trained generators alleviate the exposure bias experienced by conventional maximum likelihood approaches and result in promising generation quality. However, due to the notorious defect of mode collapse in adversarial training, the adversarially trained generators face a quality-diversity trade-off, i.e., the generator models tend to sacrifice generation diversity severely for increased generation quality. In this paper, we propose a novel approach which aims to improve the performance of adversarial text generation by efficiently decelerating mode collapse during adversarial training. To this end, we introduce a cooperative training paradigm, where a language model is cooperatively trained with the generator, and we utilize the language model to efficiently shape the data distribution of the generator against mode collapse. Moreover, to engage the cooperative update for the generator in a principled way, we formulate a meta-learning mechanism, where the cooperative update to the generator serves as a high-level meta-task, with the intuition of ensuring that the parameters of the generator after the adversarial update stay resistant to mode collapse. In the experiments, we demonstrate that our proposed approach can efficiently slow down the pace of mode collapse for adversarial text generators. Overall, our proposed method outperforms the baseline approaches by significant margins in terms of both generation quality and diversity in the tested domains.
Automatic Conditional Generation of Personalized Social Media Short Texts ; Automatic text generation has received much attention owing to the rapid development of deep neural networks. In general, text generation systems based on statistical language models do not consider anthropomorphic characteristics, which results in machine-like generated texts. To fill the gap, we propose a conditional language generation model with Big Five Personality (BFP) feature vectors as input context, which writes human-like short texts. The short text generator consists of a layer of long short-term memory (LSTM) network, where a BFP feature vector is concatenated as one part of the input to each cell. To enable supervised training of the generation model, a text classification model based on a convolutional neural network (CNN) has been used to prepare BFP-tagged Chinese microblog corpora. Validated by a BFP linguistic computational model, our generated Chinese short texts exhibit discriminative personality styles, and they are also syntactically correct and semantically smooth, with appropriate emoticons. By combining natural language generation with psychological linguistics, our proposed BFP-dependent text generation model can be widely used for individualization in machine translation, image captioning, dialogue generation, and so on.
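A minimal PyTorch sketch of the conditioning scheme described above: the five-dimensional BFP vector is concatenated to the token embedding at every timestep before the LSTM. All sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BFPShortTextGenerator(nn.Module):
    """LSTM language model conditioned on a Big Five Personality vector."""
    def __init__(self, vocab_size=20000, embed_dim=128, bfp_dim=5, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim + bfp_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids, bfp_vector):
        # token_ids: (batch, seq_len); bfp_vector: (batch, 5)
        emb = self.embed(token_ids)
        cond = bfp_vector.unsqueeze(1).expand(-1, emb.size(1), -1)
        h, _ = self.lstm(torch.cat([emb, cond], dim=-1))  # BFP fed to each cell
        return self.out(h)  # next-token logits
```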
Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization ; In a recent paper, we presented a generative adversarial network (GAN)-based model for unconditional generation of the mel-spectrograms of singing voices. As the generator of the model is designed to take a variable-length sequence of noise vectors as input, it can generate mel-spectrograms of variable length. However, our previous listening test showed that the quality of the generated audio leaves room for improvement. The present paper extends and expands that previous work in the following aspects. First, we employ a hierarchical architecture in the generator to induce some structure in the temporal dimension. Second, we introduce a cycle regularization mechanism in the generator to avoid mode collapse. Third, we evaluate the performance of the new model not only for generating singing voices, but also for generating speech. Evaluation results show that the new model outperforms the prior one both objectively and subjectively. We also employ the model to unconditionally generate sequences of piano and violin music and find the results promising. Audio examples, as well as the code for implementing our model, will be publicly available online upon paper publication.
PHom-GeM: Persistent Homology for Generative Models ; Generative neural network models, including Generative Adversarial Networks (GANs) and autoencoders (AEs), are among the most popular neural network models for generating adversarial data. The GAN model is composed of a generator that produces synthetic data and a discriminator that discriminates between the generator's output and the true data. AEs consist of an encoder, which maps the model distribution to a latent manifold, and a decoder, which maps the latent manifold to a reconstructed distribution. However, generative models are known to produce chaotically scattered reconstructed distributions during training and, consequently, incomplete generated adversarial distributions. Current distance measures fail to address this problem because they are not able to acknowledge the shape of the data manifold, i.e., its topological features, or the scale at which the manifold should be analyzed. We propose Persistent Homology for Generative Models (PHom-GeM), a new methodology to assess and measure the distribution of a generative model. PHom-GeM minimizes an objective function between the true and the reconstructed distributions and uses persistent homology, the study of the topological features of a space at different spatial resolutions, to compare the nature of the true and the generated distributions. Our experiments underline the potential of persistent homology for Wasserstein GANs in comparison to Wasserstein AEs and variational AEs. The experiments are conducted on a real-world dataset that is particularly challenging for traditional distance measures and generative neural network models. PHom-GeM is the first methodology to propose a topological distance measure, the bottleneck distance, for generative models, used to compare adversarial samples in the context of credit card transactions.
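A small sketch of the core comparison step, assuming the third-party ripser and persim Python packages: compute persistence diagrams for samples from the true and generated distributions and compare them with the bottleneck distance. (PHom-GeM itself couples this with an optimal-transport objective; this only illustrates the topological distance.)

```python
import numpy as np
from ripser import ripser        # Vietoris-Rips persistent homology
from persim import bottleneck    # bottleneck distance between diagrams

def topological_distance(true_samples: np.ndarray, gen_samples: np.ndarray) -> float:
    """Bottleneck distance between H1 persistence diagrams of two point clouds."""
    dgm_true = ripser(true_samples)["dgms"][1]  # 1-dimensional features (loops)
    dgm_gen = ripser(gen_samples)["dgms"][1]
    return bottleneck(dgm_true, dgm_gen)

# Toy usage: two noisy circles should be topologically close
theta = np.random.rand(200, 1) * 2 * np.pi
cloud_a = np.hstack([np.cos(theta), np.sin(theta)]) + 0.05 * np.random.randn(200, 2)
cloud_b = np.hstack([np.cos(theta), np.sin(theta)]) + 0.05 * np.random.randn(200, 2)
print(topological_distance(cloud_a, cloud_b))
```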
FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow ; Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures, accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique for modeling complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time with respect to the sequence length.
Adapting a Language Model for Controlled Affective Text Generation ; Humans use language not just to convey information but also to express their inner feelings and mental states. In this work, we adapt state-of-the-art language generation models to generate affective (emotional) text. We posit a model capable of generating affect-driven and topic-focused sentences without losing grammatical correctness as the affect intensity increases. We propose to incorporate emotion as a prior for probabilistic state-of-the-art text generation models such as GPT-2. The model gives the user the flexibility to control the category and intensity of the emotion as well as the topic of the generated text. Previous attempts at modelling fine-grained emotions lose grammatical correctness at extreme intensities, but our model is resilient to this and delivers robust results at all intensities. We conduct automated evaluations and human studies to test the performance of our model and provide a detailed comparison of the results with other models. In all evaluations, our model outperforms existing affective text generation models.
Planning with Logical Graph-based Language Model for Instruction Generation ; Despite the superior performance of large language models at generating natural language texts, it is hard to generate texts with correct logic according to a given task, due to the difficulty neural models have in capturing implied rules from free-form texts. In this paper, we propose a novel graph-based language model, Logical-GLM, to infuse logic into language models for more valid text generation and interpretability. Specifically, we first capture information from natural language instructions and construct logical Bayes graphs that generally describe domains. Next, we generate logical skeletons to guide language model training, infusing domain knowledge into language models. Finally, we alternately optimize the searching policy of graphs and language models until convergence. The experimental results show that Logical-GLM is both effective and efficient compared with traditional language models, despite using smaller-scale training data and fewer parameters. Our approach can generate instructional texts with more correct logic owing to the internalized domain knowledge. Moreover, the usage of logical graphs reflects the inner mechanism of the language models, which improves the interpretability of black-box models.
Modeling Graphs Using a Mixture of Kronecker Models ; Generative models for graphs are increasingly becoming a popular tool for researchers to generate realistic approximations of graphs. While in the past the focus was on generating graphs which follow general laws, such as the power law for degree distributions, current models have the ability to learn from observed graphs and generate synthetic approximations. The primary emphasis of existing models has been to closely match different properties of a single observed graph. Such models, though stochastic, tend to generate samples which do not have significant variance in terms of the various graph properties. We argue that in many cases real graphs are sampled (drawn) from a graph population (e.g., networks sampled at various time points, social networks for individual schools, healthcare networks for different geographic regions, etc.). Such populations typically exhibit significant variance. However, existing models are not designed to model this variance, which could lead to issues such as overfitting. We propose a graph generative model that focuses on matching the properties of real graphs and the natural variance expected for the corresponding population. The proposed model adopts a mixture-model strategy to expand the expressiveness of Kronecker product based graph models (KPGM), while building upon the two strengths of KPGM, viz., the ability to model several key properties of graphs and to scale to massive graph sizes using its elegant fractal-growth-based formulation. The proposed model, called the x-Kronecker Product Graph Model, or xKPGM, allows scalable learning from observed graphs and generates samples that match the mean and variance of several salient graph properties. We experimentally demonstrate the capability of the proposed model to capture the inherent variability in real-world graphs on a variety of publicly available graph datasets.
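For intuition, here is a naive numpy sketch of sampling from a Kronecker product graph model and its mixture variant. It materializes the full probability matrix, so it only works for small graphs, unlike the scalable fractal formulation the abstract refers to; parameter names are illustrative.

```python
import numpy as np

def sample_kpgm(theta: np.ndarray, k: int, rng=None) -> np.ndarray:
    """Sample an adjacency matrix from the k-th Kronecker power of seed `theta`.

    `theta` is a small matrix of edge probabilities (e.g., 2x2); the resulting
    graph has theta.shape[0] ** k nodes. Naive O(N^2) version for illustration.
    """
    if rng is None:
        rng = np.random.default_rng()
    P = theta.copy()
    for _ in range(k - 1):
        P = np.kron(P, theta)          # fractal growth of edge probabilities
    return (rng.random(P.shape) < P).astype(int)

def sample_mixture_kpgm(thetas, weights, k, rng=None) -> np.ndarray:
    """Mixture variant: each sampled graph draws its seed from a mixture,
    injecting the population-level variance discussed above."""
    if rng is None:
        rng = np.random.default_rng()
    seed = thetas[rng.choice(len(thetas), p=weights)]
    return sample_kpgm(seed, k, rng)
```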
Models of representations and Langlands functoriality ; In this article we explore the interplay between two generalizations of the Whittaker model, namely the Klyachko models and the degenerate Whittaker models, and two functorial constructions, namely base change and automorphic induction, for the class of unitarizable and ladder representations of the general linear groups.
Dynamical analysis of a generalized hepatitis B epidemic model and its dynamically consistent discrete model ; The aim of this work is to study qualitative dynamical properties of a generalized hepatitis B epidemic model and its dynamically consistent discrete model.
Data-free Black-box Attack based on Diffusion Model ; Since the training data of the target model in a data-free black-box attack is not available, most recent schemes utilize GANs to generate data for training the substitute model. However, these GAN-based schemes suffer from low training efficiency, as the generator needs to be retrained for each target model during the substitute training process, as well as from low generation quality. To overcome these limitations, we consider utilizing a diffusion model to generate data, and propose a data-free black-box attack scheme based on the diffusion model to improve the efficiency and accuracy of substitute training. Although the data generated by the diffusion model are of high quality, they present diverse domain distributions and contain many samples that do not meet the discriminative criteria of the target model. To further facilitate the diffusion model in generating data suitable for the target model, we propose a Latent Code Augmentation (LCA) method to guide the diffusion model's generation. With the guidance of LCA, the data generated by the diffusion model not only meet the discriminative criteria of the target model but also exhibit high diversity. By utilizing this data, it is possible to train substitute models that closely resemble the target model more efficiently. Extensive experiments demonstrate that our LCA achieves higher attack success rates and requires fewer query budgets compared to GAN-based schemes for different target models.
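To ground the substitute-training idea, here is a generic distillation step, independent of how the images were generated and not specific to the LCA scheme. It assumes the attacker can obtain the target model's output probabilities for generated images; in a stricter label-only setting, a cross-entropy loss on the predicted labels would be used instead.

```python
import torch
import torch.nn.functional as F

def substitute_step(substitute, target_model, gen_images, optimizer):
    """One distillation step: match the substitute's predictions to the
    (black-box) target model's outputs on generator-produced images."""
    with torch.no_grad():
        target_probs = F.softmax(target_model(gen_images), dim=1)  # queries
    optimizer.zero_grad()
    log_probs = F.log_softmax(substitute(gen_images), dim=1)
    loss = F.kl_div(log_probs, target_probs, reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()
```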
A Stochastic Grammar for Natural Shapes ; We consider object detection using a generic model for natural shapes. A common approach for object recognition involves matching object models directly to images. Another approach involves building intermediate representations via a generic grouping process. We argue that these two processes (model-based recognition and grouping) may use similar computational mechanisms. By defining a generic model for shapes, we can use model-based techniques to implement a mid-level vision grouping process.
Generalized immediate exchange models and their symmetries ; We reconsider the immediate exchange model and define a more general class of models where mass is split, exchanged, and merged. We relate the splitting process to the symmetric inclusion process via thermalization, and from that obtain symmetries and self-duality of the generalized IEM. We show that analogous properties hold for models where the splitting is related to the symmetric exclusion process or to independent random walkers.
Generative Adversarial Networks for Model Order Reduction in Seismic Full-Waveform Inversion ; I train a Generative Adversarial Network to produce realistic seismic wave speed models. I integrate the generator network into seismic Full-Waveform Inversion to reduce the number of model parameters and restrict the inverted models to only those that are plausible. Applying the method to a 2D section of the SEAM model, I demonstrate that it can produce more plausible results than conventional Full-Waveform Inversion.
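A minimal sketch of the reparameterization described above, assuming a trained generator and a differentiable wave-equation forward operator (both placeholders here): instead of inverting for every cell of the wave speed model, FWI optimizes the generator's low-dimensional latent vector.

```python
import torch

def latent_space_fwi(generator, forward_op, observed_data,
                     latent_dim=100, steps=2000, lr=1e-2):
    """Full-Waveform Inversion over the generator's latent space.

    `forward_op` simulates seismic data from a wave speed model and must be
    differentiable; `generator` maps a latent vector to a plausible model.
    """
    z = torch.zeros(1, latent_dim, requires_grad=True)  # latent model parameters
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        velocity_model = generator(z)           # stays on the learned manifold
        residual = forward_op(velocity_model) - observed_data
        loss = 0.5 * torch.sum(residual ** 2)   # least-squares data misfit
        loss.backward()
        opt.step()
    return generator(z).detach()
```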
Adversarial Training Improves Joint Energy-Based Generative Modelling ; We propose a novel framework for generative modelling using hybrid energy-based models. In our method, we combine the interpretable input gradients of a robust classifier with Langevin dynamics for sampling. Using adversarial training, we improve not only training stability but also the robustness and generative modelling performance of joint energy-based models.
Replicating Active Appearance Model by Generator Network ; A recent Cell paper (Chang and Tsao, 2017) reports an interesting discovery. For face stimuli generated by a pre-trained active appearance model (AAM), the responses of neurons in the areas of the primate brain that are responsible for face recognition exhibit a strong linear relationship with the shape and appearance variables of the AAM that generates the face stimuli. In this paper, we show that this behavior can be replicated by a deep generative model called the generator network, which assumes that the observed signals are generated by latent random variables via a top-down convolutional neural network. Specifically, we learn the generator network from face images generated by a pre-trained AAM model using a variational autoencoder, and we show that the inferred latent variables of the learned generator network have a strong linear relationship with the shape and appearance variables of the AAM model that generates the face images. Unlike the AAM model, which has an explicit shape model where the shape variables generate the control points or landmarks, the generator network has no such shape model and shape variables. Yet the generator network can learn the shape knowledge, in the sense that some of the latent variables of the learned generator network capture the shape variations in the face images generated by the AAM.
DIRE for Diffusion-Generated Image Detection ; Diffusion models have shown remarkable success in visual synthesis, but have also raised concerns about potential abuse for malicious purposes. In this paper, we seek to build a detector for telling apart real images from diffusion-generated images. We find that existing detectors struggle to detect images generated by diffusion models, even if we include generated images from a specific diffusion model in their training data. To address this issue, we propose a novel image representation called DIffusion Reconstruction Error (DIRE), which measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model. We observe that diffusion-generated images can be approximately reconstructed by a diffusion model while real images cannot. This provides a hint that DIRE can serve as a bridge to distinguish generated and real images. DIRE provides an effective way to detect images generated by most diffusion models; it generalizes to detecting generated images from unseen diffusion models and is robust to various perturbations. Furthermore, we establish a comprehensive diffusion-generated benchmark, including images generated by eight diffusion models, to evaluate the performance of diffusion-generated image detectors. Extensive experiments on our collected benchmark demonstrate that DIRE exhibits superiority over previous generated-image detectors. The code and dataset are available at https://github.com/ZhendongWang6/DIRE.
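The DIRE computation itself is a short pipeline; below is a sketch with the DDIM inversion and reconstruction steps left as placeholder callables (any pre-trained diffusion pipeline exposing deterministic DDIM inversion and sampling would do). The residual image is then fed to an ordinary binary classifier.

```python
import torch

def compute_dire(image, ddim_invert, ddim_reconstruct):
    """DIRE(x) = |x - R(I(x))|.

    `ddim_invert` maps an image to its DDIM latent noise and
    `ddim_reconstruct` maps that noise back to an image with the same
    pre-trained diffusion model; both are placeholders for a concrete pipeline.
    Diffusion-generated inputs reconstruct closely (small DIRE); real ones do not.
    """
    with torch.no_grad():
        noise = ddim_invert(image)           # deterministic DDIM inversion
        reconstruction = ddim_reconstruct(noise)
    return (image - reconstruction).abs()    # residual fed to a binary classifier
```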
Synthetic Dataset Generation with Itemset-Based Generative Models ; This paper proposes three different data generators, tailored to transactional datasets, based on existing itemset-based generative models. All of these generators are intuitive and easy to implement and show satisfactory performance. The quality of each generator is assessed by means of three different methods that capture how well the original dataset structure is preserved.
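As a simple illustration of the itemset-based idea (one naive instantiation, not necessarily any of the paper's three generators): independently include each frequent itemset in a transaction with probability equal to its support.

```python
import numpy as np

def generate_transactions(itemsets, supports, n_transactions, seed=0):
    """Naive itemset-based transactional data generator.

    `itemsets` is a list of item collections and `supports` their inclusion
    probabilities; each itemset enters a transaction independently.
    """
    rng = np.random.default_rng(seed)
    transactions = []
    for _ in range(n_transactions):
        basket = set()
        for items, p in zip(itemsets, supports):
            if rng.random() < p:
                basket.update(items)
        transactions.append(sorted(basket))
    return transactions

# Toy usage with hypothetical itemsets and supports
print(generate_transactions([("bread", "butter"), ("beer",)], [0.4, 0.2], 5))
```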