Graph-Aware Transformer: Is Attention All Graphs Need? ; Graphs are the natural data structure to represent relational and structural information in many domains. To cover the broad range of graph-data applications, including graph classification as well as graph generation, it is desirable to have a general and flexible model consisting of an encoder and a decoder that can handle graph data. Although the representative encoder-decoder model, the Transformer, shows superior performance in various tasks, especially in natural language processing, it is not immediately available for graphs due to their non-sequential characteristics. To tackle this incompatibility, we propose the GRaph-Aware Transformer (GRAT), the first Transformer-based model which can encode and decode whole graphs in an end-to-end fashion. GRAT features a self-attention mechanism adaptive to the edge information and an auto-regressive decoding mechanism based on a two-path approach, consisting of a sub-graph encoding path and a node-and-edge generation path for each decoding step. We empirically evaluated GRAT on multiple setups, including encoder-based tasks such as molecule property prediction on QM9 datasets and encoder-decoder-based tasks such as molecule graph generation in the organic molecule synthesis domain. GRAT has shown very promising results, including state-of-the-art performance on 4 regression tasks in the QM9 benchmark.
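The abstract above does not spell out GRAT's attention equations. Purely as an illustrative sketch of what edge-adaptive self-attention can look like in general, and not the paper's actual formulation, the snippet below adds an edge-derived bias to standard scaled dot-product attention; the function name, weight matrices, and shapes are all hypothetical.

```python
import numpy as np

def edge_aware_attention(h, e, Wq, Wk, Wv, We):
    """Single-head sketch: h is (n, d) node features, e is (n, n, d_e) edge features.
    Attention logits are the usual scaled dot products, biased by a learned
    scalar projection of the edge features for each node pair."""
    q, k, v = h @ Wq, h @ Wk, h @ Wv                    # project node features
    logits = q @ k.T / np.sqrt(k.shape[-1])             # scaled dot-product scores
    logits = logits + (e @ We).squeeze(-1)              # edge-derived bias, shape (n, n)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # row-wise softmax
    return w @ v                                        # aggregated node representations
```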
Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College ; Recently, graph-based algorithms have drawn much attention because of their impressive success in semi-supervised setups. For better model performance, previous studies learn to transform the topology of the input graph. However, these works only focus on optimizing the original nodes and edges, leaving the direction of augmenting existing data unexplored. In this paper, by simulating the generation process of graph signals, we propose a novel heuristic pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph. By substantially enlarging the original training set with high-quality generated labeled data, our framework can effectively benefit downstream models. To justify the generality and practicality of ELCO, we couple it with the popular Graph Convolution Network and Graph Attention Network to perform extensive evaluations on three standard datasets. In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state-of-the-art. We release our code and data at https://github.com/RingBDStack/ELCO to guarantee reproducibility.
Structural Patterns and Generative Models of Real-world Hypergraphs ; Graphs have been utilized as a powerful tool to model pairwise relationships between people or objects. Such structure is a special type of a broader concept referred to as a hypergraph, in which each hyperedge may consist of an arbitrary number of nodes, rather than just two. A large number of real-world datasets are of this form: for example, lists of recipients of emails sent from an organization, users participating in a discussion thread, or subject labels tagged in an online question. However, due to complex representations and lack of adequate tools, little attention has been paid to exploring the underlying patterns in these interactions. In this work, we empirically study a number of real-world hypergraph datasets across various domains. In order to enable thorough investigations, we introduce the multi-level decomposition method, which represents each hypergraph by a set of pairwise graphs. Each pairwise graph, which we refer to as a k-level decomposed graph, captures the interactions between pairs of subsets of k nodes. We empirically find that at each decomposition level, the investigated hypergraphs obey five structural properties. These properties serve as criteria for evaluating how realistic a hypergraph is, and establish a foundation for the hypergraph generation problem. We also propose a hypergraph generator that is remarkably simple but capable of fulfilling these evaluation metrics, which are hardly achieved by other baseline generator models.
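As a minimal sketch of one plausible reading of the k-level decomposition described above, the snippet below links two k-subsets of nodes whenever they co-occur inside the same hyperedge; the function and its output format are illustrative assumptions, not the paper's reference implementation.

```python
from itertools import combinations

def k_level_decomposition(hyperedges, k):
    """Build the edge set of a k-level decomposed (pairwise) graph: nodes are
    k-subsets of the original nodes, and two k-subsets are connected whenever
    both are contained in the same hyperedge."""
    edges = set()
    for he in hyperedges:
        for a, b in combinations(combinations(sorted(he), k), 2):
            edges.add((a, b))
    return edges

# Example: a single hyperedge {1, 2, 3} at level k=2 links its three 2-subsets pairwise.
print(k_level_decomposition([{1, 2, 3}], 2))
```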
Domain Generalization using Causal Matching ; In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label. We show that this objective is not sufficient: there exist counterexamples where a model fails to generalize to unseen domains even after satisfying class-conditional domain invariance. We formalize this observation through a structural causal model and show the importance of modeling within-class variations for generalization. Specifically, classes contain objects that characterize specific causal features, and domains can be interpreted as interventions on these objects that change non-causal features. We highlight an alternative condition: inputs across domains should have the same representation if they are derived from the same object. Based on this objective, we propose matching-based algorithms when base objects are observed (e.g., through data augmentation) and approximate the objective when objects are not observed (MatchDG). Our simple matching-based algorithms are competitive to prior work on out-of-domain accuracy for rotated MNIST, Fashion-MNIST, PACS, and Chest X-ray datasets. Our method MatchDG also recovers ground-truth object matches on MNIST and Fashion-MNIST; top-10 matches from MatchDG have over 50% overlap with ground-truth matches.
Automated Radiological Report Generation For Chest X-Rays With Weakly-Supervised End-to-End Deep Learning ; The chest X-Ray (CXR) is one of the most common clinical exams used to diagnose thoracic diseases and abnormalities. The volume of CXR scans generated daily in hospitals is huge. Therefore, an automated diagnosis system able to save the effort of doctors is of great value. At present, the applications of artificial intelligence in CXR diagnosis usually use pattern recognition to classify the scans. However, such methods rely on labeled databases, which are costly and usually have large error rates. In this work, we built a database containing more than 12,000 CXR scans and radiological reports, and developed a model based on a deep convolutional neural network and a recurrent network with an attention mechanism. The model learns features from the CXR scans and the associated raw radiological reports directly; no additional labeling of the scans is needed. The model provides automated recognition of given scans and generation of reports. The quality of the generated reports was evaluated with both the CIDEr scores and by radiologists as well. The CIDEr scores are found to be around 5.8 on average for the testing dataset. Further blind evaluation suggested a comparable performance against human radiologists.
Learning to Generate Noise for Multi-Attack Robustness ; Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations. However, the majority of existing defense methods are tailored to defend against a single category of adversarial perturbation (e.g. the $\ell_\infty$-attack). In safety-critical applications, this makes these methods extraneous as the attacker can adopt diverse adversaries to deceive the system. Moreover, training on multiple perturbations simultaneously significantly increases the computational overhead during training. To address these challenges, we propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks. Its key component is the Meta Noise Generator (MNG) that outputs optimal noise to stochastically perturb a given sample, such that it helps lower the error on diverse adversarial perturbations. By utilizing samples generated by MNG, we train a model by enforcing label consistency across multiple perturbations. We validate the robustness of models trained by our scheme on various datasets and against a wide variety of perturbations, demonstrating that it significantly outperforms the baselines across multiple perturbations with a marginal computational cost.
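The abstract gives the training idea (one adversarial view per threat model, extra noise from the meta generator, and a shared label across all views) without its exact loss. The hypothetical sketch below illustrates that idea only; `model`, `noise_gen`, and the `attacks` callables are assumed interfaces, not the paper's code.

```python
import numpy as np

def cross_entropy(logits, y):
    """Numerically stable softmax cross-entropy for integer class labels."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

def multi_attack_consistency_loss(model, noise_gen, attacks, x, y):
    """Average the classification loss over one adversarial view per threat model
    (e.g. l_inf, l_2, l_1), each further perturbed by sample-conditional generator
    noise, so the model is pushed to predict the same correct label everywhere."""
    views = [attack(x, y) + noise_gen(x) for attack in attacks]
    return np.mean([cross_entropy(model(v), y) for v in views])
```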
High Resolution Zero-Shot Domain Adaptation of Synthetically Rendered Face Images ; Generating photorealistic images of human faces at scale remains a prohibitively difficult task using computer graphics approaches. This is because these require the simulation of light to be photorealistic, which in turn requires physically accurate modelling of geometry, materials, and light sources, for both the head and the surrounding scene. Non-photorealistic renders, however, are increasingly easy to produce. In contrast to computer graphics approaches, generative models learned from more readily available 2D image data have been shown to produce samples of human faces that are hard to distinguish from real data. The process of learning usually corresponds to a loss of control over the shape and appearance of the generated images. For instance, even simple disentangling tasks such as modifying the hair independently of the face, which is trivial to accomplish in a computer graphics approach, remains an open research question. In this work, we propose an algorithm that matches a non-photorealistic, synthetically generated image to a latent vector of a pretrained StyleGAN2 model which, in turn, maps the vector to a photorealistic image of a person of the same pose, expression, hair, and lighting. In contrast to most previous work, we require no synthetic training data. To the best of our knowledge, this is the first algorithm of its kind to work at a resolution of 1K and represents a significant leap forward in visual realism.
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis ; While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
An Advanced Approach for Choosing Security Patterns and Checking their Implementation ; This paper tackles the problems of generating concrete test cases for testing whether an application is vulnerable to attacks, and of checking whether security solutions are correctly implemented. The approach proposed in the paper aims at guiding developers towards the implementation of secure applications, from the threat modelling stage up to the testing one. This approach relies on a knowledge base integrating varied security data, e.g., attacks, attack steps, and security patterns, which are generic and reusable solutions to design secure applications. The first stage of the approach consists in assisting developers in the design of Attack Defense Trees (ADTrees) expressing the attacker's possibilities to compromise an application and the defenses that may be implemented. These defenses are given in the form of security pattern combinations. In the second stage, these trees are used to guide developers in the test case generation. After the test case execution, test verdicts show whether an application is vulnerable to the threats modelled by an ADTree. The last stage of the approach checks whether behavioural properties of security patterns hold in the application traces collected during test case execution. These properties are formalised as LTL properties, which are generated from the knowledge base. Developers therefore neither have to write LTL properties nor need to be experts in formal models. We experimented with the approach on 10 Web applications to evaluate its testing effectiveness and its performance.
StyPath: Style-Transfer Data Augmentation For Robust Histology Image Classification ; The classification of Antibody Mediated Rejection (AMR) in kidney transplant remains challenging even for experienced nephropathologists; this is partly because histological tissue stain analysis is often characterized by low inter-observer agreement and poor reproducibility. One of the implicated causes for inter-observer disagreement is the variability of tissue stain quality between and within pathology labs, coupled with the gradual fading of archival sections. Variations in stain colors and intensities can make tissue evaluation difficult for pathologists, ultimately affecting their ability to describe relevant morphological features. Being able to accurately predict the AMR status based on kidney histology images is crucial for improving patient treatment and care. We propose a novel pipeline to build robust deep neural networks for AMR classification based on StyPath, a histological data augmentation technique that leverages a lightweight style-transfer algorithm as a means to reduce sample-specific bias. Each image was generated in 1.84 ± 0.03 seconds using a single GTX TITAN V GPU and PyTorch, making it faster than other popular histological data augmentation techniques. We evaluated our model using a Monte Carlo (MC) estimate of Bayesian performance and generated an epistemic measure of uncertainty to compare both the baseline and StyPath-augmented models. We also generated Grad-CAM representations of the results, which were assessed by an experienced nephropathologist; we used this qualitative analysis to elucidate the assumptions being made by each model. Our results imply that our style-transfer augmentation technique improves histological classification performance (reducing error from 14.8% to 11.5%) and generalization ability.
Deep Transformer based Data Augmentation with Subword Units for Morphologically Rich Online ASR ; Recently, Deep Transformer models have proven to be particularly powerful in language modeling tasks for ASR. Their high complexity, however, makes them very difficult to apply in the first, single pass of an online system. Recent studies showed that a considerable part of the knowledge of neural network Language Models (LMs) can be transferred to traditional n-grams by using neural text generation based data augmentation. In our paper, we pre-train a GPT-2 Transformer LM on a general text corpus and fine-tune it on our Hungarian conversational call center ASR task. We show that although data augmentation with Transformer-generated text works well for isolating languages, it causes a vocabulary explosion in a morphologically rich language. Therefore, we propose a new method called subword-based neural text augmentation, where we retokenize the generated text into statistically derived subwords. We compare Morfessor and BPE statistical subword tokenizers and show that both methods can significantly improve the WER while greatly reducing vocabulary size and memory requirements. Finally, we also demonstrate that subword-based neural text augmentation outperforms the word-based approach not only in terms of overall WER but also in recognition of OOV words.
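To make the retokenization step concrete, here is a toy greedy longest-match segmenter that splits generated words into units from a statistically derived subword inventory (such as one produced by Morfessor or BPE); it is only an illustration of the idea, not the toolchain used in the paper, and the example vocabulary is made up.

```python
def retokenize_to_subwords(word, vocab, marker="+"):
    """Greedily segment a word into the longest matching subword units from `vocab`,
    falling back to single characters so segmentation always succeeds. Non-final
    pieces are marked so the n-gram LM output can be re-joined after decoding."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):              # try the longest candidate first
            if word[i:j] in vocab or j == i + 1:       # a single character always matches
                pieces.append(word[i:j])
                i = j
                break
    return [p + marker if n < len(pieces) - 1 else p for n, p in enumerate(pieces)]

# Illustrative Hungarian-like example with an invented subword inventory:
print(retokenize_to_subwords("telefonszamomat", {"telefon", "szam", "omat"}))
# -> ['telefon+', 'szam+', 'omat']
```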
AE-Net: Autonomous Evolution Image Fusion Method Inspired by the Human Cognitive Mechanism ; In order to solve the robustness and generality problems of the image fusion task, and inspired by the human brain's cognitive mechanism, we propose a robust and general image fusion method with autonomous evolution ability, denoted AE-Net. Through the collaborative optimization of multiple image fusion methods to simulate the cognitive process of the human brain, the unsupervised image fusion task can be transformed into a semi-supervised or supervised image fusion task, thus promoting the evolutionary ability of the network model weights. Firstly, the relationship between the human brain's cognitive mechanism and the image fusion task is analyzed and a physical model is established to simulate the human brain's cognitive mechanism. Secondly, we analyze existing image fusion methods and image fusion loss functions, select the image fusion methods with complementary features to construct the algorithm module, and establish a multi-loss joint evaluation function to obtain the optimal solution of the algorithm module. The optimal solution of each image is used to guide the weight training of the network model. Our image fusion method can effectively unify the cross-modal image fusion task and the same-modal image fusion task, and effectively overcome the difference of data distribution between different datasets. Finally, extensive numerical results verify the effectiveness and superiority of our method on a variety of image fusion datasets, including a multi-focus dataset, an infrared and visible dataset, a medical image dataset and a multi-exposure dataset. Comprehensive experiments demonstrate the superiority of our image fusion method in robustness and generality. In addition, experimental results also demonstrate the effectiveness of the human brain's cognitive mechanism in improving the robustness and generality of image fusion.
Hide-and-Seek Privacy Challenge ; The clinical time-series setting poses a unique combination of challenges to data modeling and sharing. Due to the high dimensionality of clinical time series, adequate de-identification to preserve privacy while retaining data utility is difficult to achieve using common de-identification techniques. An innovative approach to this problem is synthetic data generation. From a technical perspective, a good generative model for time-series data should preserve temporal dynamics, in the sense that new sequences respect the original relationships between high-dimensional variables across time. From the privacy perspective, the model should prevent patient re-identification by limiting vulnerability to membership inference attacks. The NeurIPS 2020 Hide-and-Seek Privacy Challenge is a novel two-tracked competition to simultaneously accelerate progress in tackling both problems. In our head-to-head format, participants in the synthetic data generation track (i.e. hiders) and the patient re-identification track (i.e. seekers) are directly pitted against each other by way of a new, high-quality intensive care time-series dataset: the AmsterdamUMCdb dataset. Ultimately, we seek to advance generative techniques for dense and high-dimensional temporal data streams that are (1) clinically meaningful in terms of fidelity and predictivity, as well as (2) capable of minimizing membership privacy risks in terms of the concrete notion of patient re-identification.
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models ; Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high-dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE's latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as $256\times256$ pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection. The source code is available at https://github.com/NVlabs/VAEBM
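The abstract states that MCMC is run in the VAE's latent space rather than in pixel space. As a rough, hypothetical sketch of that idea (not the paper's sampler), the snippet below runs a short Langevin chain on a latent code, given a callable that returns the gradient of the negative log of the joint unnormalized density (VAE term plus energy term).

```python
import numpy as np

def latent_langevin_refine(z0, grad_neg_log_p, steps=20, step_size=1e-2, seed=0):
    """Short-run Langevin MCMC in latent space: each update moves z downhill on the
    joint negative log-density and injects Gaussian noise; the refined z would then
    be pushed through the VAE decoder to obtain the image sample."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - 0.5 * step_size * grad_neg_log_p(z) \
              + np.sqrt(step_size) * rng.standard_normal(z.shape)
    return z
```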
A Generative Machine Learning Approach to Policy Optimization in Pursuit-Evasion Games ; We consider a pursuit-evasion game (1:1) played between two agents, 'Blue' (the pursuer) and 'Red' (the evader), over T time steps. Red aims to attack Blue's territory. Blue's objective is to intercept Red by time T and thereby limit the success of Red's attack. Blue must plan its pursuit trajectory by choosing parameters that determine its course of movement (speed and angle in our setup) such that it intercepts Red by time T. We show that Blue's path-planning problem in pursuing Red can be posed as a sequential decision making problem under uncertainty. Blue's unawareness of Red's action policy renders the analytic dynamic programming approach intractable for finding the optimal action policy for Blue. In this work, we are interested in exploring data-driven approaches to the policy optimization problem that Blue faces. We apply generative machine learning (ML) approaches to learn optimal action policies for Blue. This highlights the ability of generative ML models to learn the relevant implicit representations for the dynamics of simulated pursuit-evasion games. We demonstrate the effectiveness of our modeling approach via extensive statistical assessments. This work can be viewed as a preliminary step towards further adoption of generative modeling approaches for addressing policy optimization problems that arise in the context of multi-agent learning and planning [1].
A Novel Actor Dual-Critic Model for Remote Sensing Image Captioning ; We deal with the problem of generating textual captions from optical remote sensing (RS) images using the notion of deep reinforcement learning. Due to the high inter-class similarity in reference sentences describing remote sensing data, jointly encoding the sentences and images encourages prediction of captions that are semantically more precise than the ground truth in many cases. To this end, we introduce an Actor Dual-Critic training strategy where a second critic model is deployed in the form of an encoder-decoder RNN to encode the latent information corresponding to the original and generated captions. While all actor-critic methods use an actor to predict sentences for an image and a critic to provide rewards, our proposed encoder-decoder RNN guarantees high-level comprehension of images by sentence-to-image translation. We observe that the proposed model generates sentences on the test data highly similar to the ground truth and is successful in generating even better captions in many critical cases. Extensive experiments on the benchmark Remote Sensing Image Captioning Dataset (RSICD) and the UCM-captions dataset confirm the superiority of the proposed approach in comparison to the previous state-of-the-art, where we obtain sharp increments in both the ROUGE-L and CIDEr measures.
Knowledge-enriched, Type-constrained and Grammar-guided Question Generation over Knowledge Bases ; Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e. a set of connected triples. Two main challenges still face the current crop of encoder-decoder-based methods, especially on small subgraphs: (1) low diversity and poor fluency due to the limited information contained in the subgraphs, and (2) semantic drift due to the decoder's oblivion of the semantics of the answer entity. We propose an innovative knowledge-enriched, type-constrained and grammar-guided KBQG model, named KTG, to address the above challenges. In our model, the encoder is equipped with auxiliary information from the KB, and the decoder is constrained with word types during QG. Specifically, entity domain and description, as well as relation hierarchy information, are considered to construct question contexts, while a conditional copy mechanism is incorporated to modulate question semantics according to current word types. Besides, a novel reward function featuring grammatical similarity is designed to improve both generative richness and syntactic correctness via reinforcement learning. Extensive experiments show that our proposed model outperforms existing methods by a significant margin on two widely-used benchmark datasets, SimpleQuestion and PathQuestion.
Generic Field-Driven Phenomena in Kitaev Spin Liquids: Canted Magnetism and Proximate Spin Liquid Physics ; Topological spin liquids in two spatial dimensions are stable phases in the presence of a small magnetic field, but may give way to field-induced phenomena at intermediate field strengths. Sandwiched between the low-field spin liquid physics and the high-field spin-polarized phase, the exploration of magnetic phenomena in this intermediate regime however often remains elusive to controlled analytical approaches. Here we numerically study such intermediate-field magnetic phenomena for two representative Kitaev models (on the square-octagon and decorated honeycomb lattice) that exhibit either Abelian or non-Abelian topological order in the low-field limit. Using a combination of exact diagonalization and density matrix renormalization group techniques, as well as linear spin-wave theory, we establish the generic features of Kitaev spin liquids in an external magnetic field. While ferromagnetic models typically exhibit a direct transition to the polarized state at a relatively low field strength, antiferromagnetic couplings not only substantially stabilize the topological spin liquid phase, but generically lead to the emergence of a distinct field-induced intermediate regime, separated by a crossover from the high-field polarized regime. Our results suggest that, for most lattice geometries, this regime generically exhibits significant spin canting, antiferromagnetic spin-spin correlations, and an extended proximate spin liquid regime at finite temperatures. Notably, we identify a symmetry obstruction in the original honeycomb Kitaev model that prevents, at least for certain field directions, the formation of such canted magnetism without breaking symmetries, consistent with the recent numerical observation of an extended gapless spin liquid in this case.
The Topology of General Cosmological Models ; Is the universe finite or infinite, and what shape does it have? These fundamental questions, of which relatively little is known, are typically studied within the context of the standard model of cosmology where the universe is assumed to be homogeneous and isotropic. Here we address the above questions in highly general cosmological models, with the only assumption being that the average flow of matter is irrotational. Using techniques from differential geometry, specifically extensions of the Bonnet-Myers theorem, we derive a condition which implies a finite universe and yields a bound for its diameter. Furthermore, under a weaker condition involving the interplay between curvature and diameter, together with the assumption that the universe is finite (i.e., has closed spatial slices), we provide a concise list of possible topologies. Namely, the spatial sections then would be either the ring topologies $S^1 \times S^2$, $S^1 \tilde\times S^2$, $S^1 \times \mathbb{RP}^2$, $\mathbb{RP}^3 \# \mathbb{RP}^3$, or covered by the sphere $S^3$ or torus $T^3$. In particular, under this condition the basic construction of connected sums would be ruled out save for one, along with the plethora of topologies associated with negative curvature. These results are obtained from consequences of the geometrization of 3-manifolds, by applying a generalization of the almost splitting theorem together with a curvature formula of Ehlers and Ellis.
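For background only (this is the classical statement whose extensions the abstract invokes, not the paper's new condition): the Bonnet-Myers theorem already ties a curvature bound to finiteness and a diameter estimate for a complete n-dimensional Riemannian manifold.

```latex
\[
  \operatorname{Ric} \;\ge\; (n-1)\,K\,g \quad (K>0)
  \;\Longrightarrow\;
  M \ \text{is compact and} \ \operatorname{diam}(M) \;\le\; \frac{\pi}{\sqrt{K}} .
\]
```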
Semantic-Guided Inpainting Network for Complex Urban Scenes Manipulation ; Manipulating images of complex scenes to reconstruct, insert and/or remove specific object instances is a challenging task. Complex scenes contain multiple semantics and objects, which are frequently cluttered or ambiguous, thus hampering the performance of inpainting models. Conventional techniques often rely on structural information such as object contours in multi-stage approaches that generate unreliable results and boundaries. In this work, we propose a novel deep learning model to alter a complex urban scene by removing a user-specified portion of the image and coherently inserting a new object (e.g. a car or a pedestrian) in that scene. Inspired by recent works on image inpainting, our proposed method leverages semantic segmentation to model the content and structure of the image, and learns the best shape and location of the object to insert. To generate reliable results, we design a new decoder block that combines the semantic segmentation and generation tasks to better guide the generation of new objects and scenes, which have to be semantically consistent with the image. Our experiments, conducted on two large-scale datasets of urban scenes (Cityscapes and Indian Driving), show that our proposed approach successfully addresses the problem of semantically-guided inpainting of complex urban scenes.
Multiple Sclerosis Severity Classification From Clinical Text ; Multiple Sclerosis (MS) is a chronic, inflammatory and degenerative neurological disease, which is monitored by a specialist using the Expanded Disability Status Scale (EDSS) and recorded in unstructured text in the form of a neurology consult note. An EDSS measurement contains an overall EDSS score and several functional subscores. Typically, expert knowledge is required to interpret consult notes and generate these scores. Previous approaches used limited context length Word2Vec embeddings and keyword searches to predict scores given a consult note, but often failed when scores were not explicitly stated. In this work, we present MSBERT, the first publicly available transformer model trained on real clinical data other than MIMIC. Next, we present MSBC, a classifier that applies MSBERT to generate embeddings and predict EDSS and functional subscores. Lastly, we explore combining MSBC with other models through the use of Snorkel to generate scores for unlabelled consult notes. MSBC achieves state-of-the-art performance on all metrics and prediction tasks and outperforms the models generated from the Snorkel ensemble. We improve Macro-F1 by 0.12 to 0.88 for predicting EDSS and on average by 0.29 to 0.63 for predicting functional subscores over previous Word2Vec CNN and rule-based approaches.
Bifurcation of the neuronal population dynamics of the modified theta model: transition to macroscopic gamma oscillation ; Interactions of inhibitory neurons produce gamma oscillations (30-80 Hz) in the local field potential, which is known to be involved in functions such as cognition and attention. In this study, the modified theta model is considered to investigate the theoretical relationship between the microscopic structure of inhibitory neurons and their gamma oscillations under a wide class of distribution functions of tonic currents on individual neurons. The stability and bifurcation of gamma oscillations for the Vlasov equation of the model is investigated by the generalized spectral theory. It is shown that as a connection probability of neurons increases, a pair of generalized eigenvalues crosses the imaginary axis twice, which implies that a stable gamma oscillation exists only when the connection probability has a value within a suitable range. On the other hand, when the distribution of tonic currents on individual neurons is the Lorentzian distribution, the Vlasov equation is reduced to a finite-dimensional dynamical system. The bifurcation analyses of the reduced equation exhibit equivalent results with the generalized spectral theory. It is also demonstrated that the numerical computations of the neuronal population follow the analyses of the generalized spectral theory as well as the bifurcation analysis of the reduced equation.
Non-Hermitian Yukawa interactions of fermions with axions: potential microscopic origin and dynamical mass generation ; In this mini review, we discuss some recent developments regarding properties of quantum field-theory models containing anti-Hermitian Yukawa interactions between pseudoscalar fields (axions) and Dirac or Majorana fermions. Specifically, after motivating physically such interactions, in the context of string-inspired low-energy effective field theories involving right-handed neutrinos and axion fields, we proceed to discuss their formal consistency within the so-called Parity-Time-reversal (PT) symmetry framework, as well as dynamical mass generation, induced by the Yukawa interactions, for both fermions and axions. The Yukawa couplings are assumed weak, given that they are conjectured to have been generated by non-perturbative effects in the underlying microscopic string theory. The models under discussion contain, in addition to the Yukawa terms, also anti-Hermitian anomalous derivative couplings of the pseudoscalar fields to axial fermion currents, as well as interactions of the fermions with non-Hermitian axial backgrounds. We discuss the role of such additional couplings on the Yukawa-induced dynamically-generated masses. For the case where the fermions are right-handed neutrinos, we compare such masses with the radiative ones induced by both the anti-Hermitian anomalous terms and the anti-Hermitian Yukawa interactions in phenomenologically relevant models.
Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation ; Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede the deployment on mobile devices. Prevalent methods for CNN compression cannot be directly applied to GANs due to the peculiarities of GAN tasks and the unstable adversarial training. To solve these, in this paper, we introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation. The former searches for a lightweight generator architecture in a training-adaptive manner. To overcome channel inconsistency when pruning the residual connections, an adaptive cross-block group sparsity is further incorporated. The latter simultaneously distills informative attention maps from both the generator and discriminator of a pre-trained model to the searched generator, effectively stabilizing the adversarial training of our lightweight model. Experiments show that DMAD can reduce the Multiply-Accumulate Operations (MACs) of CycleGAN by 13x and those of Pix2Pix by 4x while retaining a comparable performance against the full model. Our code is available at https://github.com/SJLeo/DMAD.
Bi-ISCA: Bidirectional Inter-Sentence Contextual Attention Mechanism for Detecting Sarcasm in User-Generated Noisy Short Text ; Many online comments on social media platforms are hateful, humorous, or sarcastic. The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments, which leads to misinterpretations by the existing sentiment analysis models. A lot of research has already been done to detect sarcasm in the text using user-based, topical, and conversational information, but not much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism (Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context. The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words/phrases responsible for invoking sarcasm. Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context.
Leveraging Regular Fundus Images for Training UWF Fundus Diagnosis Models via Adversarial Learning and Pseudo-Labeling ; Recently, ultra-widefield (UWF) 200-degree fundus imaging by Optos cameras has gradually been introduced because of its broader insights for detecting more information on the fundus than regular 30-degree to 60-degree fundus cameras. Compared with UWF fundus images, regular fundus images contain a large amount of high-quality and well-annotated data. Due to the domain gap, models trained by regular fundus images to recognize UWF fundus images perform poorly. Hence, given that annotating medical data is labor intensive and time consuming, in this paper, we explore how to leverage regular fundus images to improve the limited UWF fundus data and annotations for more efficient training. We propose the use of a modified cycle generative adversarial network (CycleGAN) model to bridge the gap between regular and UWF fundus and generate additional UWF fundus images for training. A consistency regularization term is proposed in the loss of the GAN to improve and regulate the quality of the generated data. Our method does not require that images from the two domains be paired or even that the semantic labels be the same, which provides great convenience for data collection. Furthermore, we show that our method is robust to noise and errors introduced by the generated unlabeled data with the pseudo-labeling technique. We evaluated the effectiveness of our methods on several common fundus diseases and tasks, such as diabetic retinopathy (DR) classification, lesion detection and tessellated fundus segmentation. The experimental results demonstrate that our proposed method simultaneously achieves superior generalizability of the learned representations and performance improvements in multiple tasks.
Domain Generalization for Session-Independent Brain-Computer Interface ; The inter/intra-subject variability of electroencephalography (EEG) makes the practical use of the brain-computer interface (BCI) difficult. In general, the BCI system requires a calibration procedure to acquire subject/session-specific data to tune the model every time the system is used. This problem is recognized as a major obstacle to BCI, and to overcome it, an approach based on domain generalization (DG) has recently emerged. The main purpose of this paper is to reconsider how the zero-calibration problem of BCI for a realistic situation can be overcome from the perspective of DG tasks. In terms of the realistic situation, we have focused on creating an EEG classification framework that can be applied directly in unseen sessions, using only multi-subject/session data acquired previously. Therefore, in this paper, we tested four deep learning models and four DG algorithms through leave-one-session-out validation. Our experiment showed that deeper and larger models were effective in cross-session generalization performance. Furthermore, we found that none of the explicit DG algorithms outperformed empirical risk minimization. Finally, by comparing the results of fine-tuning using subject-specific data, we found that subject-specific data may deteriorate unseen session classification performance due to inter-session variability.
On the theory and applications of mechanism design and coalitional games in electricity markets ; Although the specific structures of electricity markets are diverse around the world, they were all conceived on the premise of predictable, controllable generation with non-negligible marginal costs. Recent changes, specifically the increasing renewable integration, have challenged such assumptions. In light of this shift, this thesis intends to devise new frameworks and advance our understanding of the future markets. The first part focuses on mechanism design when the model fully reflects the physics of the grid and the participants. We consider a market that involves continuous goods, general non-convex constraints, and second-stage costs. We then design the payments and conditions under which coalitions cannot influence the outcome. Under the incentive-compatible VCG mechanism, we prove that coalition-proof outcomes are achieved if bids are convex and constraints are polymatroids. By relaxing incentive-compatibility, we investigate core-selecting mechanisms that are coalition-proof without conditions. We show that they generalize the economic rationale of the LMP mechanism, and can approximate truthfulness without the price-taking assumption. Finally, they are budget-balanced. The second part coordinates regional markets to exploit the geographic diversification of renewables. In Europe, reserves remain an exclusive responsibility of regional operators. This limited coordination and the sequential structure hinder the utilization of generation and transmission. To promote reserve exchange, a preemptive model can optimally withdraw inter-area transmission capacity from day-ahead energy for reserves. This bilevel program, however, does not suggest costs that guarantee coordination. We formulate a new preemptive model that allows us to obtain stable benefits immune to deviations. Our proposal, least-core benefits, achieves minimal stability violation with a tractable computation.
Learning Optimization-inspired Image Propagation with Control Mechanisms and Architecture Augmentations for Low-level Vision ; In recent years, building deep learning models from optimization perspectives has become a promising direction for solving low-level vision problems. The main idea of most existing approaches is to straightforwardly combine numerical iterations with manually designed network architectures to generate image propagations for specific kinds of optimization models. However, these heuristic learning models often lack mechanisms to control the propagation and rely heavily on architecture engineering. To mitigate the above issues, this paper proposes a unified optimization-inspired deep image propagation framework to aggregate Generative, Discriminative and Corrective (GDC for short) principles for a variety of low-level vision tasks. Specifically, we first formulate low-level vision tasks using a generic optimization objective and construct our fundamental propagative modules from three different viewpoints, i.e., the solution could be obtained/learned (1) in a generative manner, (2) based on a discriminative metric, and (3) with domain knowledge correction. By designing control mechanisms to guide image propagations, we then obtain convergence guarantees of GDC for both fully and partially defined optimization formulations. Furthermore, we introduce two architecture augmentation strategies (i.e., normalization and automatic search) to respectively enhance the propagation stability and task/data-adaption ability. Extensive experiments on different low-level vision applications demonstrate the effectiveness and flexibility of GDC.
Visual Perception Generalization for Vision-and-Language Navigation via Meta-Learning ; Vision-and-language navigation (VLN) is a challenging task that requires an agent to navigate in real-world environments by understanding natural language instructions and visual information received in real-time. Prior works have implemented VLN tasks on continuous environments or physical robots, all of which use a fixed camera configuration due to the limitations of datasets, such as 1.5 meters height, 90 degrees horizontal field of view (HFOV), etc. However, real-life robots with different purposes have multiple camera configurations, and the huge gap in visual information makes it difficult to directly transfer the learned navigation model between various robots. In this paper, we propose a visual perception generalization strategy based on meta-learning, which enables the agent to fast adapt to a new camera configuration with a few shots. In the training phase, we first locate the generalization problem to the visual perception module, and then compare two meta-learning algorithms for better generalization in seen and unseen environments. One of them uses the Model-Agnostic Meta-Learning (MAML) algorithm that requires a few shots for adaptation, and the other refers to a metric-based meta-learning method with a feature-wise affine transformation layer. The experiment results show that our strategy successfully adapts the learned navigation model to a new camera configuration, and the two algorithms show their advantages in seen and unseen environments respectively.
Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model ; Anti-malware engines are the first line of defense against malicious software. While widely used, feature engineering-based anti-malware engines are vulnerable to unseen (zero-day) attacks. Recently, deep learning-based static anti-malware detectors have achieved success in identifying unseen attacks without requiring feature engineering and dynamic analysis. However, these detectors are susceptible to malware variants with slight perturbations, known as adversarial examples. Generating effective adversarial examples is useful to reveal the vulnerabilities of such systems. Current methods for launching such attacks require accessing either the specifications of the targeted anti-malware model, the confidence score of the anti-malware response, or dynamic malware analysis, which are either unrealistic or expensive. We propose MalRNN, a novel deep learning-based approach to automatically generate evasive malware variants without any of these restrictions. Our approach features an adversarial example generation process, which learns a language model via a generative sequence-to-sequence recurrent neural network to augment malware binaries. MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods. Findings from applying our MalRNN on a real dataset with eight malware categories are discussed.
No-harm calibration for generalized Oaxaca-Blinder estimators ; In randomized experiments, adjusting for observed features when estimating treatment effects has been proposed as a way to improve asymptotic efficiency. However, only linear regression has been proven to form an estimate of the average treatment effect that is asymptotically no less efficient than the treated-minus-control difference in means regardless of the true data generating process. Randomized treatment assignment provides this do-no-harm property, with neither truth of a linear model nor a generative model for the outcomes being required. We present a general calibration method which confers the same no-harm property onto estimators leveraging a broad class of nonlinear models. This recovers the usual regression-adjusted estimator when ordinary least squares is used, and further provides non-inferior treatment effect estimators using methods such as logistic and Poisson regression. The resulting estimators are non-inferior to both the difference-in-means estimator and to treatment effect estimators that have not undergone calibration. We show that our estimator is asymptotically equivalent to an inverse probability weighted estimator using a logit link with predicted potential outcomes as covariates. In a simulation study, we demonstrate that common nonlinear estimators without our calibration procedure may perform markedly worse than both the calibrated estimator and the unadjusted difference in means.
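As a point of reference for the estimators being calibrated, here is a minimal sketch of a generalized Oaxaca-Blinder ATE estimate with a logistic outcome model, alongside the unadjusted difference in means it should never fall behind after calibration. The calibration step itself is not shown, and the use of scikit-learn is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def oaxaca_blinder_ate(x, y, t):
    """Fit separate outcome models on treated (t==1) and control (t==0) units,
    impute both potential outcomes for every unit, and average the difference."""
    m1 = LogisticRegression().fit(x[t == 1], y[t == 1])
    m0 = LogisticRegression().fit(x[t == 0], y[t == 0])
    mu1 = m1.predict_proba(x)[:, 1]      # imputed treated-arm outcome probabilities
    mu0 = m0.predict_proba(x)[:, 1]      # imputed control-arm outcome probabilities
    return (mu1 - mu0).mean()

def diff_in_means(y, t):
    """The unadjusted benchmark: treated-minus-control difference in means."""
    return y[t == 1].mean() - y[t == 0].mean()
```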
Global Convergence of Model Function Based Bregman Proximal Minimization Algorithms ; Lipschitz continuity of the gradient mapping of a continuously differentiable function plays a crucial role in designing various optimization algorithms. However, many functions arising in practical applications, such as low-rank matrix factorization or deep neural network problems, do not have a Lipschitz continuous gradient. This led to the development of a generalized notion known as the L-smad property, which is based on generalized proximity measures called Bregman distances. However, the L-smad property cannot handle nonsmooth functions; for example, simple nonsmooth functions like $|x^4-1|$ and also many practical composite problems are out of scope. We fix this issue by proposing the MAP property, which generalizes the L-smad property and is also valid for a large class of nonconvex nonsmooth composite problems. Based on the proposed MAP property, we propose a globally convergent algorithm called Model BPG, which unifies several existing algorithms. The convergence analysis is based on a new Lyapunov function. We also numerically illustrate the superior performance of Model BPG on standard phase retrieval problems, robust phase retrieval problems, and Poisson linear inverse problems, when compared to a state-of-the-art optimization method that is valid for generic nonconvex nonsmooth optimization problems.
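For readers unfamiliar with the "generalized proximity measures" mentioned above, the Bregman distance generated by a differentiable convex function $h$ is the standard object; taking $h(x)=\tfrac12\|x\|^2$ recovers the usual squared Euclidean distance underlying classical proximal methods.

```latex
\[
  D_h(x, y) \;=\; h(x) \;-\; h(y) \;-\; \langle \nabla h(y),\, x - y \rangle .
\]
```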
DISCOS: Bridging the Gap between Discourse Knowledge and Commonsense Knowledge ; Commonsense knowledge is crucial for artificial intelligence systems to understand natural language. Previous commonsense knowledge acquisition approaches typically rely on human annotations (for example, ATOMIC) or text generation models (for example, COMET). Human annotation could provide high-quality commonsense knowledge, yet its high cost often results in relatively small scale and low coverage. On the other hand, generation models have the potential to automatically generate more knowledge. Nonetheless, machine learning models often fit the training data well and thus struggle to generate high-quality novel knowledge. To address the limitations of previous approaches, in this paper, we propose an alternative commonsense knowledge acquisition framework DISCOS (from DIScourse to COmmonSense), which automatically populates expensive complex commonsense knowledge to more affordable linguistic knowledge resources. Experiments demonstrate that we can successfully convert discourse knowledge about eventualities from ASER, a large-scale discourse knowledge graph, into if-then commonsense knowledge defined in ATOMIC without any additional annotation effort. Further study suggests that DISCOS significantly outperforms previous supervised approaches in terms of novelty and diversity with comparable quality. In total, we can acquire 3.4M ATOMIC-like inferential commonsense knowledge by populating ATOMIC on the core part of ASER. Codes and data are available at https://github.com/HKUST-KnowComp/DISCOS-commonsense.
High Order Asymptotic Expansions of a Good-Bad-Ugly Wave Equation ; A heuristic method to find asymptotic solutions to a system of nonlinear wave equations near null infinity is proposed. The nonlinearities in this model, dubbed good-bad-ugly, are known to mimic the ones present in the Einstein field equations (EFE) and we expect to be able to exploit this method to derive an asymptotic expansion for the metric in General Relativity (GR) close to null infinity that goes beyond first order, as performed by Lindblad and Rodnianski for the leading asymptotics. For the good-bad-ugly model, we derive formal expansions in which terms proportional to the logarithm of the radial coordinate appear at every order in the bad field, from the second order onward in the ugly field, but never in the good field. The model is generalized to wave operators built from an asymptotically flat metric and it is shown that it admits polyhomogeneous asymptotic solutions. Finally we define stratified null forms, a generalization of standard null forms, which capture the behavior of different types of field, and demonstrate that the addition of such terms to the original system bears no qualitative influence on the type of asymptotic solutions found.
Adversarial Text-to-Image Synthesis: A Review ; With the advent of generative adversarial networks, synthesizing images from textual descriptions has recently become an active research area. It is a flexible and intuitive way for conditional image generation, with significant progress in the last years regarding visual realism, diversity, and semantic alignment. However, the field still faces several challenges that require further research efforts, such as enabling the generation of high-resolution images with multiple objects, and developing suitable and reliable evaluation metrics that correlate with human judgement. In this review, we contextualize the state of the art of adversarial text-to-image synthesis models, their development since their inception five years ago, and propose a taxonomy based on the level of supervision. We critically examine current strategies to evaluate text-to-image synthesis models, highlight shortcomings, and identify new areas of research, ranging from the development of better datasets and evaluation metrics to possible improvements in architectural design and model training. This review complements previous surveys on generative adversarial networks with a focus on text-to-image synthesis, which we believe will help researchers to further advance the field.
Hidden dualities in 1D quasiperiodic lattice models ; We find that quasiperiodicity-induced transitions between extended and localized phases in generic 1D systems are associated with hidden dualities that generalize the well-known duality of the Aubry-André model. These spectral and eigenstate dualities are locally defined near the transition and can, in many cases, be explicitly constructed by considering relatively small commensurate approximants. The construction relies on auxiliary 2D Fermi surfaces obtained as functions of the phase-twisting boundary conditions and of the phase-shifting real-space structure. We show that, around the critical point of the limiting quasiperiodic system, the auxiliary Fermi surface of a high-enough-order approximant converges to a universal form. This allows us to devise a highly accurate method to obtain mobility edges and duality transformations for generic 1D quasiperiodic systems through their commensurate approximants. To illustrate the power of this approach, we consider several previously studied systems, including generalized Aubry-André models and coupled Moiré chains. Our findings bring a new perspective to examine quasiperiodicity-induced extended-to-localized transitions in 1D, provide a working criterion for the appearance of mobility edges, and an explicit way to understand the properties of eigenstates close to and at the transition.
Analysis of Convolutional Decoder for Image Caption Generation ; Recently, Convolutional Neural Networks have been proposed for Sequence Modelling tasks such as Image Caption Generation. However, unlike Recurrent Neural Networks, the performance of Convolutional Neural Networks as Decoders for Image Caption Generation has not been extensively studied. In this work, we analyse various aspects of Convolutional Neural Network based Decoders, such as network complexity and depth, the use of Data Augmentation, the Attention mechanism, and the length of sentences used during training, on the performance of the model. We perform experiments using the Flickr8k and Flickr30k image captioning datasets and observe that, unlike the Recurrent Neural Network based Decoder, the Convolutional Decoder for Image Captioning does not generally benefit from an increase in network depth, in the form of stacked Convolutional Layers, or from the use of Data Augmentation techniques. In addition, the use of an Attention mechanism also provides limited performance gains with the Convolutional Decoder. Furthermore, we observe that Convolutional Decoders show performance comparable with Recurrent Decoders only when trained using sentences of smaller length, which contain up to 15 words, but have limitations when trained using longer sentences, which suggests that Convolutional Decoders may not be able to model long-term dependencies efficiently. In addition, the Convolutional Decoder usually performs poorly on the CIDEr evaluation metric as compared to the Recurrent Decoder.
Robust spin squeezing from the tower of states of U(1)-symmetric spin Hamiltonians ; Spin squeezing, a central resource for quantum metrology, can be generated via the nonlinear, entangling evolution of an initially factorized spin state. Here we show that robust (i.e. persistent) squeezing dynamics is generated by a very large class of $S=1/2$ spin Hamiltonians with axial symmetry, in relationship with the existence of a peculiar structure of the low-lying Hamiltonian eigenstates, the so-called Anderson's tower of states. Such states are fundamentally related to the appearance of spontaneous symmetry breaking in quantum systems; and, for models with sufficiently high connectivity, they are parametrically close to the eigenstates of a planar rotor (Dicke states), in that they feature an anomalously large value of the total angular momentum. Our central insight is that, starting from a coherent spin state, a generic U(1)-symmetric Hamiltonian featuring the Anderson's tower of states generates the same squeezing evolution at short times as the one governed by the paradigmatic one-axis-twisting (or planar-rotor) model of squeezing dynamics. The full squeezing evolution of the planar-rotor model is seemingly reproduced for interactions decaying with distance $r$ as $r^{-\alpha}$ when $\alpha < 5d/3$ in $d$ dimensions. Our results connect quantum simulation with quantum metrology by unveiling the squeezing power of a large variety of Hamiltonian dynamics that are currently implemented by different quantum simulation platforms.
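For reference, the paradigmatic one-axis-twisting model mentioned above evolves an initially coherent, x-polarized spin state under a collective quadratic nonlinearity:

```latex
\[
  H_{\mathrm{OAT}} \;=\; \chi\, \hat S_z^{\,2},
  \qquad
  |\psi(t)\rangle \;=\; e^{-\,i\,\chi t\,\hat S_z^{2}}\,
  |{+}_x\rangle^{\otimes N}.
\]
```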
MagnetoPlasmons in Grounded GrapheneBased Structures with Anisotropic Cover and Substrate ; This paper aims to study the magnetoplasmons in an anisotropic graphene nanowaveguide with bigyrotropic cover and substrate. The substrate is backed by a perfect electromagnetic conductor PEMC layer, a general and ideal boundary, which can be transformed easily into the perfect electric conductor PEC or the perfect magnetic conductor PMC boundaries. The upper and bottom layers of the graphene sheet are made of different magnetic materials. The external magnetic field is applied perpendicularly to the structure surface, which can be provided by a permanent magnet placed underneath the ground plane. Hence, the graphene sheet has anisotropic conductivity tensor. A novel analytical model has been proposed for the general nanowaveguide to find its propagation properties. As special cases of the proposed general structure, two important new waveguides have been introduced and studied to show, first the richness of the proposed general nanowaveguide regarding the related specific plasmonic wave phenomena and effects, and second the validity and the high accuracy of the proposed model. The analytical and the simulation results are in an excellent agreement. It is shown that the modal properties of the proposed structure can be tuned effectively via the external magnetic field and the chemical potential of the graphene. Harnessing the nonreciprocity effect of anisotropic materials and the graphene sheet, the presented analytical model can be exploited to design tunable innovative devices in THz frequencies.
A VAEBayesian Deep Learning Scheme for Solar Generation Forecasting based on Dimensionality Reduction ; The advancement of distributed generation technologies in modern power systems has led to a widespread integration of renewable power generation at customer side. However, the intermittent nature of renewable energy poses new challenges to the network operational planning with underlying uncertainties. This paper proposes a novel Bayesian probabilistic technique for forecasting renewable solar generation by addressing data and model uncertainties by integrating bidirectional long shortterm memory BiLSTM neural networks while compressing the weight parameters using variational autoencoder VAE. Existing Bayesian deep learning methods suffer from high computational complexities as they require to draw a large number of samples from weight parameters expressed in the form of probability distributions. The proposed method can deal with uncertainty present in model and data in a more computationally efficient manner by reducing the dimensionality of model parameters. The proposed method is evaluated using quantile loss, reconstruction error, and deterministic forecasting evaluation metrics such as rootmean square error. It is inferred from the numerical results that VAEBayesian BiLSTM outperforms other probabilistic and deterministic deep learning methods for solar power forecasting in terms of accuracy and computational efficiency for different sizes of the dataset.
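The abstract above evaluates probabilistic solar forecasts with a quantile loss. As a minimal, self-contained illustration of that metric (array names and quantile levels are invented for the example, not taken from the paper), the pinball loss can be computed as follows:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss at quantile level q in (0, 1).

    y_true: observed solar generation values
    y_pred: predicted q-th quantiles, same shape as y_true
    """
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Example: score predictions at several quantile levels and average them.
y_true = np.array([0.0, 1.2, 3.4, 2.1])
quantile_preds = {0.1: np.array([0.0, 0.8, 2.5, 1.5]),
                  0.5: np.array([0.1, 1.1, 3.2, 2.0]),
                  0.9: np.array([0.3, 1.6, 4.0, 2.8])}
avg_loss = np.mean([pinball_loss(y_true, p, q) for q, p in quantile_preds.items()])
print(f"average pinball loss: {avg_loss:.4f}")
```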
Locate then Segment A Strong Pipeline for Referring Image Segmentation ; Referring image segmentation aims to segment the objects referred by a natural language expression. Previous methods usually focus on designing an implicit and recurrent feature interaction mechanism to fuse the visuallinguistic features to directly generate the final segmentation mask without explicitly modeling the localization information of the referent instances. To tackle these problems, we view this task from another perspective by decoupling it into a LocateThenSegment LTS scheme. Given a language expression, people generally first perform attention to the corresponding target image regions, then generate a fine segmentation mask about the object based on its context. The LTS first extracts and fuses both visual and textual features to get a crossmodal representation, then applies a crossmodel interaction on the visualtextual features to locate the referred object with position prior, and finally generates the segmentation result with a lightweight segmentation network. Our LTS is simple but surprisingly effective. On three popular benchmark datasets, the LTS outperforms all the previous stateoftheart methods by a large margin e.g., 3.2 on RefCOCO and 3.4 on RefCOCOg. In addition, our model is more interpretable with explicitly locating the object, which is also proved by visualization experiments. We believe this framework is promising to serve as a strong baseline for referring image segmentation.
Generalized solutions to a chemotaxis-Navier-Stokes system with arbitrary superlinear degradation ; In this work, we study a chemotaxis-Navier-Stokes model in a two-dimensional setting, given by n_t + u·∇n = Δn − ∇·(n∇c) + f(n), c_t + u·∇c = Δc − c + n, u_t + κ(u·∇)u = Δu − ∇P + n∇φ, and ∇·u = 0, for x ∈ Ω and t > 0. Motivated by a recent work due to Winkler, we aim at investigating generalized solvability for the model without imposing a critical superlinear exponent restriction on the logistic source function f. Specifically, it is proven in the present work that there exists a triple of integrable functions (n, c, u) solving the system globally in a generalized sense provided that f ∈ C^1([0,∞)) satisfies f(0) ≥ 0 and f(n) ≤ rn − μn^γ (n ≥ 0) with any γ > 1. Our result indicates that persistent Dirac-type singularities can be ruled out in our model under the aforementioned mild assumption on f. After giving the existence result for the system, we also show that the generalized solution exhibits eventual smoothness as long as μ/r is sufficiently large.
TEOBResumS assessment of consistent next-to-quasicircular corrections and postadiabatic approximation in multipolar binary black hole waveforms ; The use of effective-one-body (EOB) waveforms for black hole binary analysis in gravitational-wave astronomy requires faithful models and fast generation times. A key aspect to achieve faithfulness is the inclusion of numerical-relativity (NR) informed next-to-quasicircular corrections (NQC), dependent on the radial momentum, to the waveform and radiation reaction. A robust method to speed up the waveform generation is the postadiabatic iteration to approximate the solution of the EOB Hamiltonian equations. In this work, we assess the performance of a fast NQC prescription in combination with the postadiabatic method for generating multipolar gravitational waves. The outlined approach allows a consistent treatment of NQC in both the waveform and the radiation reaction, does not require iterative procedures to achieve high faithfulness, and can be efficiently employed for parameter estimation. Comparing to 611 NR simulations, for total mass 10 M_⊙ ≤ M ≤ 200 M_⊙ and using the Advanced LIGO noise, the model has EOB/NR unfaithfulness well below 0.01, with 78.5% of the cases below 0.001. We apply the model to the parameter estimation of GW150914, exploring the impact of the new NQC and of the higher modes up to ℓ = m = 8.
Radiative transfer with opacity distribution functions: Application to narrow band filters ; Modelling of stellar radiative intensities in various spectral passbands plays an important role in stellar physics. At the same time, directly calculating the high-resolution spectrum and then integrating it over the given spectral passband is computationally demanding due to the vast number of atomic and molecular lines. This is particularly so when employing three-dimensional (3D) models of stellar atmospheres. To accelerate the calculations, one can employ approximate methods, e.g., the use of Opacity Distribution Functions (ODFs). Generally, ODFs provide a good approximation of traditional spectral synthesis (i.e., computation of intensities through filters with a strictly rectangular transmission function). However, their performance strongly deteriorates when the filter transmission noticeably changes within its passband, which is the case for almost all filters routinely used in stellar physics. In this context, the aims of this paper are (a) to generalize the ODFs method for calculating intensities through filters with arbitrary transmission functions, and (b) to study the performance of the standard and generalized ODFs methods for calculating intensities emergent from 3D models of stellar atmospheres. For this purpose we use the newly developed MPS-ATLAS radiative transfer code to compute intensities emergent from 3D cubes simulated with the radiative magnetohydrodynamics code MURaM. The calculations are performed in the 1.5D regime, i.e., along many parallel rays passing through the simulated cube. We demonstrate that the generalized ODFs method allows accurate and fast syntheses of spectral intensities and their centre-to-limb variations.
MineGAN: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains ; GANs largely increase the potential impact of generative models. Therefore, we propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs. This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain. Mining effectively steers GAN sampling towards suitable regions of the latent space, which facilitates the posterior finetuning and avoids pathologies of other methods, such as mode collapse and lack of flexibility. Furthermore, to prevent overfitting on small target domains, we introduce sparse subnetwork selection, which restricts the set of trainable neurons to those that are relevant for the target dataset. We perform comprehensive experiments on several challenging datasets using various GAN architectures (BigGAN, Progressive GAN, and StyleGAN) and show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods. In addition, MineGAN can successfully transfer knowledge from multiple pretrained GANs.
Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization ; Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features and can ignore complex, equally-predictive ones. This simplicity bias can explain their lack of robustness out of distribution (OOD). The more complex the task to learn, the more likely it is that statistical artifacts (i.e. selection biases, spurious correlations) are simpler than the mechanisms to learn. We demonstrate that the simplicity bias can be mitigated and OOD generalization improved. We train a set of similar models to fit the data in different ways using a penalty on the alignment of their input gradients. We show theoretically and empirically that this induces the learning of more complex predictive patterns. OOD generalization fundamentally requires information beyond i.i.d. examples, such as multiple training environments, counterfactual examples, or other side information. Our approach shows that we can defer this requirement to an independent model selection stage. We obtain SOTA results in visual recognition on biased data and generalization across visual domains. The method, the first to evade the simplicity bias, highlights the need for a better understanding and control of inductive biases in deep learning.
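The core mechanism in the abstract above is a penalty on the alignment of the models' input gradients. A minimal sketch of that idea is given below, assuming a set of classifiers and a cross-entropy task loss; the exact alignment measure and weighting used by the authors may differ.

```python
import torch
import torch.nn.functional as F

def gradient_alignment_penalty(models, x, y):
    """Pairwise input-gradient alignment penalty across a set of models (sketch)."""
    x = x.clone().requires_grad_(True)
    grads = []
    for model in models:
        loss = F.cross_entropy(model(x), y)
        g, = torch.autograd.grad(loss, x, create_graph=True)   # gradient w.r.t. the input
        grads.append(g.flatten(1))
    penalty, pairs = 0.0, 0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
            pairs += 1
    return penalty / max(pairs, 1)

# Typical use during training (illustrative):
# total_loss = sum(F.cross_entropy(m(x), y) for m in models) + lam * gradient_alignment_penalty(models, x, y)
```

Penalizing pairwise similarity pushes the models to rely on different input directions, which is the sense in which they "fit the data in different ways".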
Semi-Supervised Domain Generalization with Stochastic StyleMatch ; Ideally, visual learning algorithms should be generalizable, for dealing with any unseen domain shift when deployed in a new target environment, and data-efficient, for reducing development costs by using as few labels as possible. To this end, we study semi-supervised domain generalization (SSDG), which aims to learn a domain-generalizable model using multi-source, partially-labeled training data. We design two benchmarks that cover state-of-the-art methods developed in two related fields, i.e., domain generalization (DG) and semi-supervised learning (SSL). We find that the DG methods, which by design are unable to handle unlabeled data, perform poorly with limited labels in SSDG; the SSL methods, especially FixMatch, obtain much better results but are still far away from the basic vanilla model trained using full labels. We propose StyleMatch, a simple approach that extends FixMatch with a couple of new ingredients tailored for SSDG: (1) stochastic modeling for reducing overfitting in scarce labels, and (2) multi-view consistency learning for enhancing domain generalization. Despite the concise designs, StyleMatch achieves significant improvements in SSDG. We hope our approach and the comprehensive benchmarks can pave the way for future research on generalizable and data-efficient learning systems. The source code is released at https://github.com/KaiyangZhou/ssdg-benchmark.
Braided matter interactions in quantum gravity via 1-handle attachment ; In a topological description of elementary matter proposed by Bilson-Thompson, the leptons and quarks of a single generation, together with the electroweak gauge bosons, are represented as elements of the framed braid group of three ribbons. By identifying these braids with emergent topological excitations of ribbon networks, it has been possible to encode this braid model into the framework of quantum geometry provided by loop quantum gravity. In the case of trivalent networks, it has not been possible to generate particle interactions, because the braids correspond to noiseless subsystems, meaning they commute with the evolution algebra generated by the local Pachner moves. In the case of tetravalent networks, interactions are only possible when the model's original simplicity, in which interactions take place via the composition of braids, is sacrificed. We demonstrate that it is possible to preserve both the original classification of fermions, as well as their interaction via the braid product, if we embed the braid in a trivalent scheme and supplement the local Pachner moves with a nonlocal and graph-changing 1-handle attachment. Moreover, we use Kauffman-Lins recoupling theory to obtain invariants of braided networks that distinguish topological configurations associated to particles in the Bilson-Thompson model.
Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping ; Graph neural networks GNNs are processing architectures that exploit graph structural information to model representations from network data. Despite their success, GNNs suffer from suboptimal generalization performance given limited training data, referred to as overfitting. This paper proposes Topology Adaptive Edge Dropping TADropEdge method as an adaptive data augmentation technique to improve generalization performance and learn robust GNN models. We start by explicitly analyzing how random edge dropping increases the data diversity during training, while indicating i.i.d. edge dropping does not account for graph structural information and could result in noisy augmented data degrading performance. To overcome this issue, we consider graph connectivity as the key property that captures graph topology. TADropEdge incorporates this factor into random edge dropping such that the edgedropped subgraphs maintain similar topology as the underlying graph, yielding more satisfactory data augmentation. In particular, TADropEdge first leverages the graph spectrum to assign proper weights to graph edges, which represent their criticality for establishing the graph connectivity. It then normalizes the edge weights and drops graph edges adaptively based on their normalized weights. Besides improving generalization performance, TADropEdge reduces variance for efficient training and can be applied as a generic method modular to different GNN models. Intensive experiments on reallife and synthetic datasets corroborate theory and verify the effectiveness of the proposed method.
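The abstract above describes normalizing spectrum-derived edge weights and then dropping edges adaptively so that critical edges survive. The sketch below illustrates only that normalize-then-sample step; how the weights are obtained from the graph spectrum is left abstract, and the budget parameter and toy graph are invented for the example.

```python
import numpy as np

def adaptive_edge_drop(edge_index, edge_weight, keep_budget=0.8, rng=None):
    """Drop edges adaptively based on precomputed per-edge criticality weights (sketch).

    edge_index: (2, E) integer array of edges
    edge_weight: (E,) positive array, larger = more critical for connectivity
    keep_budget: expected fraction of edges retained
    """
    rng = np.random.default_rng() if rng is None else rng
    w = edge_weight / edge_weight.sum()            # normalize weights to a distribution
    keep_prob = np.clip(w * keep_budget * len(w), 0.0, 1.0)
    mask = rng.random(len(w)) < keep_prob          # critical edges survive more often
    return edge_index[:, mask]

# Toy usage: a triangle (0-1-2) plus a bridge edge (2-3); the bridge gets the largest weight
edges = np.array([[0, 1, 2, 2], [1, 2, 0, 3]])
weights = np.array([1.0, 1.0, 1.0, 3.0])
print(adaptive_edge_drop(edges, weights, keep_budget=0.75))
```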
Improved Transformer for High-Resolution GANs ; Attention-based models, exemplified by the Transformer, can effectively model long range dependency, but suffer from the quadratic complexity of the self-attention operation, making them difficult to adopt for high-resolution image generation based on Generative Adversarial Networks (GANs). In this paper, we introduce two key ingredients to the Transformer to address this challenge. First, in low-resolution stages of the generative process, standard global self-attention is replaced with the proposed multi-axis blocked self-attention, which allows efficient mixing of local and global attention. Second, in high-resolution stages, we drop self-attention while only keeping multi-layer perceptrons reminiscent of the implicit neural function. To further improve the performance, we introduce an additional self-modulation component based on cross-attention. The resulting model, denoted as HiT, has a nearly linear computational complexity with respect to the image size and thus directly scales to synthesizing high definition images. We show in the experiments that the proposed HiT achieves state-of-the-art FID scores of 30.83 and 2.95 on unconditional ImageNet 128×128 and FFHQ 256×256, respectively, with a reasonable throughput. We believe the proposed HiT is an important milestone for generators in GANs which are completely free of convolutions. Our code is made publicly available at https://github.com/google-research/hit-gan
Grounding SpatioTemporal Language with Transformers ; Language is an interface to the outside world. In order for embodied agents to use it, language must be grounded in other, sensorimotor modalities. While there is an extended literature studying how machines can learn grounded language, the topic of how to learn spatiotemporal linguistic concepts is still largely uncharted. To make progress in this direction, we here introduce a novel spatiotemporal language grounding task where the goal is to learn the meaning of spatiotemporal descriptions of behavioral traces of an embodied agent. This is achieved by training a truth function that predicts if a description matches a given history of observations. The descriptions involve timeextended predicates in past and present tense as well as spatiotemporal references to objects in the scene. To study the role of architectural biases in this task, we train several models including multimodal Transformer architectures; the latter implement different attention computations between words and objects across space and time. We test models on two classes of generalization 1 generalization to randomly heldout sentences; 2 generalization to grammar primitives. We observe that maintaining object identity in the attention computation of our Transformers is instrumental to achieving good performance on generalization overall, and that summarizing object traces in a single token has little influence on performance. We then discuss how this opens new perspectives for languageguided autonomous embodied agents. We also release our code under opensource license as well as pretrained models and datasets to encourage the wider community to build upon and extend our work in the future.
Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments ; Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments. Despite a proliferation of proposed algorithms for this task, assessing their performance both theoretically and empirically is still very challenging. Distributional matching algorithms such as Conditional Domain Adversarial Networks (Ganin et al., 2016; Long et al., 2018) are popular and enjoy empirical success, but they lack formal guarantees. Other approaches such as Invariant Risk Minimization (IRM) require a prohibitively large number of training environments, linear in the dimension of the spurious feature space d_s, even on simple data models like the one proposed by Rosenfeld et al. (2021). Under a variant of this model, we show that both ERM and IRM cannot generalize with o(d_s) environments. We then present an iterative feature matching algorithm that is guaranteed with high probability to yield a predictor that generalizes after seeing only O(log d_s) environments. Our results provide the first theoretical justification for a family of distribution-matching algorithms widely used in practice under a concrete nontrivial data model.
Cosmological consequences of a scalar field with oscillating equation of state. III. Unifying inflation with dark energy and small tensor-to-scalar ratio ; We investigate the inflationary consequences of the oscillating dark energy model proposed by Tián [Phys. Rev. D 101, 063531 (2020); https://doi.org/10.1103/PhysRevD.101.063531], which aims to solve the cosmological coincidence problem with a multi-accelerating Universe (MAU). We point out that the inflationary dynamics belong to slow-roll inflation. The spectral index of scalar perturbations and the tensor-to-scalar ratio r are shown to be consistent with current Planck measurements. Especially, this model predicts r ~ 10^{-7}, which is far below the observation limits. This result motivates us to explore the smallness of r in the general MAU. We propose a quintessential generalization of the original model and prove r < 0.01 in general. The null detection to date of primordial gravitational waves provides circumstantial evidence for the MAU. After the end of inflation, the scalar field rolls toward infinity instead of a local minimum, and meanwhile its equation of state is oscillating with an average value larger than -1/3. In this framework, we show that gravitational particle creation at the end of inflation is capable of reheating the Universe.
Efficient Realistic Data Generation Framework leveraging Deep Learningbased Human Digitization ; The performance of supervised deep learning algorithms depends significantly on the scale, quality and diversity of the data used for their training. Collecting and manually annotating large amount of data can be both timeconsuming and costly tasks to perform. In the case of tasks related to visual humancentric perception, the collection and distribution of such data may also face restrictions due to legislation regarding privacy. In addition, the design and testing of complex systems, e.g., robots, which often employ deep learningbased perception models, may face severe difficulties as even stateoftheart methods trained on real and largescale datasets cannot always perform adequately due to not having been adapted to the visual differences between the virtual and the real world data. As an attempt to tackle and mitigate the effect of these issues, we present a method that automatically generates realistic synthetic data with annotations for a person detection, b face recognition, and c human pose estimation. The proposed method takes as input real background images and populates them with human figures in various poses. Instead of using handmade 3D human models, we propose the use of models generated through deep learning methods, further reducing the dataset creation costs, while maintaining a high level of realism. In addition, we provide opensource and easy to use tools that implement the proposed pipeline, allowing for generating highlyrealistic synthetic datasets for a variety of tasks. A benchmarking and evaluation in the corresponding tasks shows that synthetic data can be effectively used as a supplement to real data.
UncertaintyGuided Progressive GANs for Medical Image Translation ; Imagetoimage translation plays a vital role in tackling various medical imaging tasks such as attenuation correction, motion correction, undersampled reconstruction, and denoising. Generative adversarial networks have been shown to achieve the stateoftheart in generating high fidelity images for these tasks. However, the stateoftheart GANbased frameworks do not estimate the uncertainty in the predictions made by the network, which is essential for making informed medical decisions and subsequent revision by medical experts, and has recently been shown to improve the performance and interpretability of the model. In this work, we propose an uncertaintyguided progressive learning scheme for imagetoimage translation. By incorporating aleatoric uncertainty as attention maps for GANs trained in a progressive manner, we generate images of increasing fidelity progressively. We demonstrate the efficacy of our model on three challenging medical image translation tasks, including PET to CT translation, undersampled MRI reconstruction, and MRI motion artefact correction. Our model generalizes well in three different tasks and improves performance over the state of the art under full supervision and weak supervision with limited data. Code is released here: https://github.com/ExplainableML/UncerGuidedI2I
Towards establishing formal verification and inductive code synthesis in the PLC domain ; Nowadays, formal methods are used in various areas for the verification of programs or for code generation from models in order to increase the quality of software and to reduce costs. However, there are still fields in which formal methods have not been widely adopted, despite the large set of possible benefits offered. This is the case for the area of programmable logic controllers PLC. This article aims to evaluate the potential of formal methods in the context of PLC development. For this purpose, the general concepts of formal methods are first introduced and then transferred to the PLC area, resulting in an engineeringoriented description of the technology that is based on common concepts from PLC development. Based on this description, PLC professionals with varying degrees of experience were interviewed for their perspective on the topic and to identify possible use cases within the PLC domain. The survey results indicate the technology's high potential in the PLC area, either as a tool to directly support the developer or as a key element within a modelbased systems engineering toolchain. The evaluation of the survey results is performed with the aid of a demo application that communicates with the Totally Integrated Automation Portal from Siemens and generates programs via Fastsynth, a modelbased open source code generator. Benchmarks based on an industryrelated PLC project show satisfactory synthesis times and a successful integration into the workflow of a PLC developer.
Adversarial Robustness of Deep Code Comment Generation ; Deep neural networks DNNs have shown remarkable performance in a variety of domains such as computer vision, speech recognition, or natural language processing. Recently they also have been applied to various software engineering tasks, typically involving processing source code. DNNs are wellknown to be vulnerable to adversarial examples, i.e., fabricated inputs that could lead to various misbehaviors of the DNN model while being perceived as benign by humans. In this paper, we focus on the code comment generation task in software engineering and study the robustness issue of the DNNs when they are applied to this task. We propose ACCENT, an identifier substitution approach to craft adversarial code snippets, which are syntactically correct and semantically close to the original code snippet, but may mislead the DNNs to produce completely irrelevant code comments. In order to improve the robustness, ACCENT also incorporates a novel training method, which can be applied to existing code comment generation models. We conduct comprehensive experiments to evaluate our approach by attacking the mainstream encoderdecoder architectures on two largescale publicly available datasets. The results show that ACCENT efficiently produces stable attacks with functionalitypreserving adversarial examples, and the generated examples have better transferability compared with baselines. We also confirm, via experiments, the effectiveness in improving model robustness with our training method.
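The attack described above perturbs code by substituting identifiers while preserving syntax and semantics. The snippet below is a toy illustration of that substitution step on a Python fragment; the real approach additionally searches for the substitutes that most mislead the comment-generation model, which is not modeled here, and the example names are invented.

```python
import ast  # ast.unparse requires Python 3.9+

def substitute_identifiers(source, mapping):
    """Rename identifiers in a code snippet while preserving its behavior (toy sketch)."""
    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id in mapping:
                node.id = mapping[node.id]
            return node
        def visit_arg(self, node):
            if node.arg in mapping:
                node.arg = mapping[node.arg]
            return node
    tree = Renamer().visit(ast.parse(source))
    return ast.unparse(tree)

original = "def total(prices):\n    result = sum(prices)\n    return result\n"
adversarial = substitute_identifiers(original, {"prices": "vals", "result": "acc"})
print(adversarial)  # same functionality, different surface form for the model
```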
Analysis of an Incomplete Binary Outcome Dichotomized From an Underlying Continuous Variable in Clinical Trials ; In many clinical trials, outcomes of interest include binaryvalued endpoints. It is not uncommon that a binaryvalued outcome is dichotomized from a continuous outcome at a threshold of clinical interest. To reach the objective, common approaches include a fitting the generalized linear mixed model GLMM to the dichotomized longitudinal binary outcome and b imputation method MI imputing the missing values in the continuous outcome, dichotomizing it into a binary outcome, and then fitting the generalized linear model for the complete data. We conducted comprehensive simulation studies to compare the performance of GLMM with MI for estimating risk difference and logarithm of odds ratio between two treatment arms at the end of study. In those simulation studies, we considered a range of multivariate distribution options for the continuous outcome including a multivariate normal distribution, a multivariate tdistribution, a multivariate lognormal distribution, and the empirical distribution from a real clinical trial data to evaluate the robustness of the estimators to various datagenerating models. Simulation results demonstrate that both methods work well under those considered distribution options, but MI is more efficient with smaller mean squared errors compared to GLMM. We further applied both the GLMM and MI to 29 phase 3 diabetes clinical trials, and found that the MI method generally led to smaller variance estimates compared to GLMM.
Security and Privacy Enhanced Gait Authentication with Random Representation Learning and Digital Lockers ; Gait data captured by inertial sensors have demonstrated promising results on user authentication. However, most existing approaches stored the enrolled gait pattern insecurely for matching with the validating pattern, thus, posed critical security and privacy issues. In this study, we present a gait cryptosystem that generates from gait data the random key for user authentication, meanwhile, secures the gait pattern. First, we propose a revocable and random binary string extraction method using a deep neural network followed by featurewise binarization. A novel loss function for network optimization is also designed, to tackle not only the intrauser stability but also the interuser randomness. Second, we propose a new biometric key generation scheme, namely Irreversible Error Correct and Obfuscate IECO, improved from the Error Correct and Obfuscate ECO scheme, to securely generate from the binary string the random and irreversible key. The model was evaluated with two benchmark datasets as OUISIR and whuGAIT. We showed that our model could generate the key of 139 bits from 5second data sequence with zero False Acceptance Rate FAR and False Rejection Rate FRR smaller than 5.441. In addition, the security and user privacy analyses showed that our model was secure against existing attacks on biometric template protection, and fulfilled irreversibility and unlinkability.
BOSS Bidirectional OneShot Synthesis of Adversarial Examples ; The design of additive imperceptible perturbations to the inputs of deep classifiers to maximize their misclassification rates is a central focus of adversarial machine learning. An alternative approach is to synthesize adversarial examples from scratch using GANlike structures, albeit with the use of large amounts of training data. By contrast, this paper considers oneshot synthesis of adversarial examples; the inputs are synthesized from scratch to induce arbitrary soft predictions at the output of pretrained models, while simultaneously maintaining high similarity to specified inputs. To this end, we present a problem that encodes objectives on the distance between the desired and output distributions of the trained model and the similarity between such inputs and the synthesized examples. We prove that the formulated problem is NPcomplete. Then, we advance a generative approach to the solution in which the adversarial examples are obtained as the output of a generative network whose parameters are iteratively updated by optimizing surrogate loss functions for the dualobjective. We demonstrate the generality and versatility of the framework and approach proposed through applications to the design of targeted adversarial attacks, generation of decision boundary samples, and synthesis of low confidence classification inputs. The approach is further extended to an ensemble of models with different soft output specifications. The experimental results verify that the targeted and confidence reduction attack methods developed perform on par with stateoftheart algorithms.
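The abstract above optimizes a small generative network, for a single example, against a dual objective: match a desired soft prediction of a pretrained model while staying similar to a specified input. The sketch below shows that loop in PyTorch with a placeholder generator and surrogate losses (KL divergence plus an MSE similarity term); the architecture, losses, and hyperparameters are assumptions, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def one_shot_synthesis(classifier, x_ref, target_probs, steps=200, lam=1.0):
    """Synthesize one input whose soft prediction matches target_probs while
    remaining close to the reference x_ref (illustrative sketch).

    classifier: frozen pretrained model; x_ref: (1, d); target_probs: (1, n_classes)
    """
    classifier.eval()
    gen = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                        nn.Linear(256, x_ref.numel()))       # tiny per-example generator
    z = torch.randn(1, 64)                                    # fixed latent seed
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(steps):
        x_syn = gen(z).view_as(x_ref)
        log_pred = F.log_softmax(classifier(x_syn), dim=-1)
        pred_loss = F.kl_div(log_pred, target_probs, reduction="batchmean")
        sim_loss = F.mse_loss(x_syn, x_ref)                   # stay close to the reference
        loss = pred_loss + lam * sim_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen(z).view_as(x_ref).detach()
```

Because only the generator's parameters are updated for this one example, no training set of adversarial examples is needed, which is the "one-shot" aspect emphasized in the abstract.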
Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders ; Generating conversational gestures from speech audio is challenging due to the inherent one-to-many mapping between audio and body motions. Conventional CNNs/RNNs assume a one-to-one mapping, and thus tend to predict the average of all possible target motions, resulting in plain, boring motions during inference. In order to overcome this problem, we propose a novel conditional variational autoencoder (VAE) that explicitly models the one-to-many audio-to-motion mapping by splitting the cross-modal latent code into a shared code and a motion-specific code. The shared code mainly models the strong correlation between audio and motion (such as the synchronized audio and motion beats), while the motion-specific code captures diverse motion information independent of the audio. However, splitting the latent code into two parts poses training difficulties for the VAE model. A mapping network facilitating random sampling, along with other techniques including a relaxed motion loss, a bicycle constraint, and a diversity loss, are designed to better train the VAE. Experiments on both 3D and 2D motion datasets verify that our method generates more realistic and diverse motions than state-of-the-art methods, quantitatively and qualitatively. Finally, we demonstrate that our method can be readily used to generate motion sequences with user-specified motion clips on the timeline. Code and more results are at https://jingli513.github.io/audio2gestures.
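The central idea above is a latent code split into an audio-correlated shared part and a motion-specific part. A minimal sketch of such a split-latent VAE is given below; the layer sizes, encoders, and KL weighting are placeholders and do not reproduce the authors' architecture or training tricks.

```python
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    """VAE whose latent is split into a shared (audio-driven) and a motion-specific part (sketch)."""
    def __init__(self, audio_dim=128, motion_dim=96, shared_dim=16, specific_dim=16):
        super().__init__()
        self.audio_enc = nn.Linear(audio_dim, 2 * shared_dim)      # -> (mu, logvar)
        self.motion_enc = nn.Linear(motion_dim, 2 * specific_dim)  # -> (mu, logvar)
        self.dec = nn.Sequential(nn.Linear(shared_dim + specific_dim, 256),
                                 nn.ReLU(), nn.Linear(256, motion_dim))

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, audio, motion):
        z_shared, mu_s, lv_s = self.reparam(self.audio_enc(audio))
        z_spec, mu_m, lv_m = self.reparam(self.motion_enc(motion))
        recon = self.dec(torch.cat([z_shared, z_spec], dim=-1))
        kl = -0.5 * torch.sum(1 + lv_s - mu_s.pow(2) - lv_s.exp()) \
             - 0.5 * torch.sum(1 + lv_m - mu_m.pow(2) - lv_m.exp())
        return recon, kl

# At inference time, diverse gestures for the same speech come from re-sampling the
# motion-specific code while keeping the audio-derived shared code fixed.
```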
Training for the Future A Simple Gradient Interpolation Loss to Generalize Along Time ; In several real world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually along time, leading to a drift between the train and test distributions. Such models are often retrained on new data periodically, and they hence need to generalize to data not too far into the future. In this context, there is much prior work on enhancing temporal generalization, e.g. continuous transportation of past data, kernel smoothed timesensitive parameters and more recently, adversarial learning of timeinvariant features. However, these methods share several limitations, e.g, poor scalability, training instability, and dependence on unlabeled data from the future. Responding to the above limitations, we propose a simple method that starts with a model with timesensitive parameters but regularizes its temporal complexity using a Gradient Interpolation GI loss. GI allows the decision boundary to change along time and can still prevent overfitting to the limited training time snapshots by allowing taskspecific control over changes along time. We compare our method to existing baselines on multiple realworld datasets, which show that GI outperforms more complicated generative and adversarial approaches on the one hand, and simpler gradient regularization methods on the other.
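The abstract above regularizes a time-sensitive model with a Gradient Interpolation loss. The sketch below illustrates one way such a mechanism can look: supervise not only the prediction at the observed time but also a first-order extrapolation of the logits to a nearby time, using a forward-mode Jacobian-vector product. This is only an illustration of the mechanism; the paper's exact loss, the sampling of the time offset, and the task-specific control terms may differ.

```python
import torch
import torch.nn.functional as F
from torch.autograd.functional import jvp

def gradient_interpolation_loss(model, x, t, y, delta=0.1):
    # model(x, t): classifier with explicitly time-dependent behavior
    # x: (B, d) features, t: (B,) observation times, y: (B,) integer labels
    # Directional derivative of the logits along the time axis (per example).
    logits, dlogits_dt = jvp(lambda xx, tt: model(xx, tt),
                             (x, t),
                             (torch.zeros_like(x), torch.ones_like(t)),
                             create_graph=True)
    # First-order extrapolation to time t + delta, supervised with the same label,
    # so the decision boundary may drift smoothly instead of overfitting the snapshots.
    logits_shift = logits + delta * dlogits_dt
    return F.cross_entropy(logits, y) + F.cross_entropy(logits_shift, y)
```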
When and how epoch-wise double descent happens ; Deep neural networks are known to exhibit a 'double descent' behavior as the number of parameters increases. Recently, it has also been shown that an 'epoch-wise double descent' effect exists, in which the generalization error initially drops, then rises, and finally drops again with increasing training time. This presents a practical problem in that the amount of time required for training is long, and early stopping based on validation performance may result in suboptimal generalization. In this work we develop an analytically tractable model of epoch-wise double descent that allows us to characterise theoretically when this effect is likely to occur. This model is based on the hypothesis that the training data contains features that are slow to learn but informative. We then show experimentally that deep neural networks behave similarly to our theoretical model. Our findings indicate that epoch-wise double descent requires a critical amount of noise to occur, but above a second critical noise level early stopping remains effective. Using insights from theory, we give two methods by which epoch-wise double descent can be removed: one that removes slow-to-learn features from the input and reduces generalization performance, and another that instead modifies the training dynamics and matches or exceeds the generalization performance of standard training. Taken together, our results suggest a new picture of how epoch-wise double descent emerges from the interplay between the dynamics of training and noise in the training data.
Stagewise Unsupervised Domain Adaptation with Adversarial SelfTraining for Road Segmentation of Remote Sensing Images ; Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials. Deep neural networks have advanced this field by leveraging the power of largescale labeled data, which, however, are extremely expensive and timeconsuming to acquire. One solution is to use cheap available data to train a model and deploy it to directly process the data from a specific application domain. Nevertheless, the wellknown domain shift DS issue prevents the trained model from generalizing well on the target domain. In this paper, we propose a novel stagewise domain adaptation model called RoadDA to address the DS issue in this field. In the first stage, RoadDA adapts the target domain features to align with the source ones via generative adversarial networks GAN based interdomain adaptation. Specifically, a feature pyramid fusion module is devised to avoid information loss of long and thin roads and learn discriminative and robust features. Besides, to address the intradomain discrepancy in the target domain, in the second stage, we propose an adversarial selftraining method. We generate the pseudo labels of target domain using the trained generator and divide it to labeled easy split and unlabeled hard split based on the road confidence scores. The features of hard split are adapted to align with the easy ones using adversarial learning and the intradomain adaptation process is repeated to progressively improve the segmentation performance. Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms stateoftheart methods.
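The second stage described above splits the target domain into an easy (pseudo-labeled) set and a hard (unlabeled) set based on road confidence scores. The helper below sketches that split; the image-level scoring rule (mean predicted road probability) and the quantile threshold are simple stand-ins for whatever scoring the paper actually uses.

```python
import numpy as np

def split_by_road_confidence(pseudo_probs, q=0.5):
    """Split target-domain images into 'easy' and 'hard' subsets by confidence (sketch).

    pseudo_probs: list of (H, W) arrays with per-pixel road probabilities from the generator
    q: quantile of the image-level confidence used as the split threshold
    """
    conf = np.array([p.mean() for p in pseudo_probs])    # one confidence score per image
    thresh = np.quantile(conf, q)
    easy = [i for i, c in enumerate(conf) if c >= thresh]   # keep pseudo labels for these
    hard = [i for i, c in enumerate(conf) if c < thresh]    # treat these as unlabeled
    return easy, hard
```

The easy subset then plays the role of a labeled source for intra-domain adversarial alignment, and the split can be recomputed after each round as the segmenter improves.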
Stochastic Optimal Operation of the VSCMTDC System with FACTS Devices to Integrate Wind Power ; This paper proposes to use stochastic conic programming to address the challenge of largescale wind power integration to the power system. Multiple wind farms are connected through the voltage source converter VSC based multiterminal DC VSCMTDC system to the power network supported by the Flexible AC Transmission System FACTS. The optimal operation of the power network incorporating the VSCMTDC system and FACTS devices is formulated in a stochastic conic programming framework accounting the uncertainties of the wind power generation. A methodology to generate representative scenarios of power generations from the wind farms is proposed using wind speed measurements and wind turbine models. The nonconvex transmission network constraints including the VSCMTDC system and FACTS devices are convexified through the proposed secondorder cone AC optimal power flow model SOCACOPF that can be solved to the global optimality using interior point method. In order to tackle the computational challenge due to the large number of wind power scenarios, a modified Benders decomposition algorithm MBDA accelerated by parallel computation is proposed. The energy dispatch of conventional power generators is formulated as the master problem of MBDA. Numerical results for up to 50000 wind power scenarios show that the proposed MBDA approach to solve stochastic SOCACOPF outperforms the traditional singlestage without decomposition solution approach in both convergence capability and computational efficiency. The feasibility performance of the proposed stochastic SOCACOPF model is also demonstrated.
Font Completion and Manipulation by Cycling Between MultiModality Representations ; Generating font glyphs of consistent style from one or a few reference glyphs, i.e., font completion, is an important task in typographic design. As the problem is more well-defined than general image style transfer tasks, it has received interest from both the vision and machine learning communities. Existing approaches address this problem as a direct image-to-image translation task. In this work, we innovate to explore the generation of font glyphs as 2D graphic objects with the graph as an intermediate representation, so that more intrinsic graphic properties of font styles can be captured. Specifically, we formulate a cross-modality cycled image-to-image model structure with a graph constructor between an image encoder and an image renderer. The novel graph constructor maps a glyph's latent code to its graph representation that matches expert knowledge, and it is trained to help the translation task. Our model generates improved results over both the image-to-image baseline and previous state-of-the-art methods for glyph completion. Furthermore, the graph representation output by our model also provides an intuitive interface for users to do local editing and manipulation. Our proposed cross-modality cycled representation learning has the potential to be applied to other domains with prior knowledge from different data modalities. Our code is available at httpsgithub.comVITAGroupFontCompletionGraph.
Towards Finegrained Image Classification with Generative Adversarial Networks and Facial Landmark Detection ; Finegrained classification remains a challenging task because distinguishing categories needs learning complex and local differences. Diversity in the pose, scale, and position of objects in an image makes the problem even more difficult. Although the recent Vision Transformer models achieve high performance, they need an extensive volume of input data. To encounter this problem, we made the best use of GANbased data augmentation to generate extra dataset instances. OxfordIIIT Pets was our dataset of choice for this experiment. It consists of 37 breeds of cats and dogs with variations in scale, poses, and lighting, which intensifies the difficulty of the classification task. Furthermore, we enhanced the performance of the recent Generative Adversarial Network GAN, StyleGAN2ADA model to generate more realistic images while preventing overfitting to the training set. We did this by training a customized version of MobileNetV2 to predict animal facial landmarks; then, we cropped images accordingly. Lastly, we combined the synthetic images with the original dataset and compared our proposed method with standard GANs augmentation and no augmentation with different subsets of training data. We validated our work by evaluating the accuracy of finegrained image classification on the recent Vision Transformer ViT Model.
Generating Datasets of 3D Garments with Sewing Patterns ; Garments are ubiquitous in both real and many of the virtual worlds. They are highly deformable objects, exhibit an immense variety of designs and shapes, and yet, most garments are created from a set of regularly shaped flat pieces. Exploration of garment structure presents a peculiar case for an object structure estimation task and might prove useful for downstream tasks of neural 3D garment modeling and reconstruction by providing strong prior on garment shapes. To facilitate research in these directions, we propose a method for generating large synthetic datasets of 3D garment designs and their sewing patterns. Our method consists of a flexible description structure for specifying parametric sewing pattern templates and the automatic generation pipeline to produce garment 3D models with littletonone manual intervention. To add realism, the pipeline additionally creates corrupted versions of the final meshes that imitate artifacts of 3D scanning. With this pipeline, we created the first largescale synthetic dataset of 3D garment models with their sewing patterns. The dataset contains more than 20000 garment design variations produced from 19 different base types. Seven of these garment types are specifically designed to target evaluation of the generalization across garment sewing pattern topologies.
CPT A PreTrained Unbalanced Transformer for Both Chinese Language Understanding and Generation ; In this paper, we take the advantage of previous pretrained models PTMs and propose a novel Chinese Pretrained Unbalanced Transformer CPT. Different from previous Chinese PTMs, CPT is designed to utilize the shared knowledge between natural language understanding NLU and natural language generation NLG to boost the performance. CPT consists of three parts a shared encoder, an understanding decoder, and a generation decoder. Two specific decoders with a shared encoder are pretrained with masked language modeling MLM and denoising autoencoding DAE tasks, respectively. With the partially shared architecture and multitask pretraining, CPT can 1 learn specific knowledge of both NLU or NLG tasks with two decoders and 2 be finetuned flexibly that fully exploits the potential of the model. Moreover, the unbalanced Transformer saves the computational and storage cost, which makes CPT competitive and greatly accelerates the inference of text generation. Experimental results on a wide range of Chinese NLU and NLG tasks show the effectiveness of CPT.
CommonsenseFocused Dialogues for Response Generation An Empirical Study ; Smooth and effective communication requires the ability to perform latent or explicit commonsense inference. Prior commonsense reasoning benchmarks such as SocialIQA and CommonsenseQA mainly focus on the discriminative task of choosing the right answer from a set of candidates, and do not involve interactive language generation as in dialogue. Moreover, existing dialogue datasets do not explicitly focus on exhibiting commonsense as a facet. In this paper, we present an empirical study of commonsense in dialogue response generation. We first autoextract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet, a commonsense knowledge graph. Furthermore, building on social contextssituations in SocialIQA, we collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting. We evaluate response generation models trained using these datasets and find that models trained on both extracted and our collected data produce responses that consistently exhibit more commonsense than baselines. Finally we propose an approach for automatic evaluation of commonsense that relies on features derived from ConceptNet and pretrained language and dialog models, and show reasonable correlation with human evaluation of responses' commonsense quality. We are releasing a subset of our collected data, CommonsenseDialogues, containing about 11K dialogs.
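The auto-extraction step above keeps dialogues whose turns are linked by commonsense relations in ConceptNet. The sketch below shows a toy version of such a filter; the ConceptNet lookup is mocked with a tiny in-memory triple set and naive word matching, whereas the paper queries the actual knowledge graph with more careful concept extraction.

```python
def has_commonsense_link(dialogue, triples):
    """Keep a dialogue if a concept in one turn is linked to a concept in the next turn (toy sketch)."""
    related = {(h, t) for h, _rel, t in triples} | {(t, h) for h, _rel, t in triples}
    for turn_a, turn_b in zip(dialogue, dialogue[1:]):
        words_a = set(turn_a.lower().split())
        words_b = set(turn_b.lower().split())
        if any((a, b) in related for a in words_a for b in words_b):
            return True
    return False

# Mock knowledge and a two-turn dialogue (all examples invented):
mock_triples = [("rain", "RelatedTo", "umbrella"), ("coffee", "Causes", "awake")]
dialogue = ["looks like rain today", "better grab an umbrella then"]
print(has_commonsense_link(dialogue, mock_triples))  # True -> keep this dialogue
```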
Domain Composition and Attention for UnseenDomain Generalizable Medical Image Segmentation ; Domain generalizable model is attracting increasing attention in medical image analysis since data is commonly acquired from different institutes with various imaging protocols and scanners. To tackle this challenging domain generalization problem, we propose a Domain Composition and Attentionbased network DCANet to improve the ability of domain representation and generalization. First, we present a domain composition method that represents one certain domain by a linear combination of a set of basis representations i.e., a representation bank. Second, a novel plugandplay parallel domain preceptor is proposed to learn these basis representations and we introduce a divergence constraint function to encourage the basis representations to be as divergent as possible. Then, a domain attention module is proposed to learn the linear combination coefficients of the basis representations. The result of linear combination is used to calibrate the feature maps of an input image, which enables the model to generalize to different and even unseen domains. We validate our method on public prostate MRI dataset acquired from six different institutions with apparent domain shift. Experimental results show that our proposed model can generalize well on different and even unseen domains and it outperforms stateoftheart methods on the multidomain prostate segmentation task.
Learning to Robustly Aggregate Labeling Functions for Semi-supervised Data Programming ; A critical bottleneck in supervised machine learning is the need for large amounts of labeled data, which is expensive and time-consuming to obtain. However, it has been shown that a small amount of labeled data, while insufficient to retrain a model, can be effectively used to generate human-interpretable labeling functions (LFs). These LFs, in turn, have been used to generate a large amount of additional noisy labeled data, in a paradigm that is now commonly referred to as data programming. However, previous approaches to automatically generate LFs make no attempt to further use the given labeled data for model training, thus giving up opportunities for improved performance. Moreover, since the LFs are generated from a relatively small labeled dataset, they are prone to being noisy, and naively aggregating these LFs can lead to very poor performance in practice. In this work, we propose an LF-based reweighting framework to solve these two critical limitations. Our algorithm learns a joint model on the same labeled dataset used for LF induction along with any unlabeled data in a semi-supervised manner, and more critically, reweighs each LF according to its goodness, influencing its contribution to the semi-supervised loss using a robust bilevel optimization algorithm. We show that our algorithm significantly outperforms prior approaches on several text classification datasets.
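Once per-LF weights are available, aggregating noisy labeling functions reduces to a weighted vote. The sketch below shows that aggregation step with given weights; in the abstract above the weights are learned jointly via a bilevel procedure, which is not reproduced here, and the toy LF matrix is invented.

```python
import numpy as np

def weighted_lf_vote(lf_outputs, lf_weights, n_classes, abstain=-1):
    """Aggregate labeling-function outputs with per-LF weights (sketch).

    lf_outputs: (n_examples, n_lfs) array with entries in {0..n_classes-1} or `abstain`
    lf_weights: (n_lfs,) nonnegative weights reflecting each LF's estimated goodness
    """
    n, m = lf_outputs.shape
    scores = np.zeros((n, n_classes))
    for j in range(m):
        votes = lf_outputs[:, j]
        mask = votes != abstain                     # abstaining LFs contribute nothing
        scores[mask, votes[mask]] += lf_weights[j]  # weighted vote for the predicted class
    return scores.argmax(axis=1)

lfs = np.array([[0, 1, -1],
                [1, 1,  0]])
print(weighted_lf_vote(lfs, np.array([0.2, 0.9, 0.5]), n_classes=2))  # -> [1 1]
```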
Paint4Poem: A Dataset for Artistic Visualization of Classical Chinese Poems ; In this work we propose a new task: artistic visualization of classical Chinese poems, where the goal is to generate paintings of a certain artistic style for classical Chinese poems. For this purpose, we construct a new dataset called Paint4Poem. The first part of Paint4Poem consists of 301 high-quality poem-painting pairs collected manually from an influential modern Chinese artist, Feng Zikai. As its small scale poses challenges for effectively training poem-to-painting generation models, we introduce the second part of Paint4Poem, which consists of 3,648 caption-painting pairs collected manually from Feng Zikai's paintings and 89,204 poem-painting pairs collected automatically from the web. We expect the former to help learning the artist's painting style, as it contains most of his paintings, and the latter to help learning the semantic relevance between poems and paintings. Further, we analyze Paint4Poem regarding poem diversity, painting style, and the semantic relevance between poems and paintings. We create a benchmark for Paint4Poem: we train two representative text-to-image generation models, AttnGAN and MirrorGAN, and evaluate their performance regarding painting pictorial quality, painting stylistic relevance, and semantic relevance between poems and paintings. The results indicate that the models are able to generate paintings that have good pictorial quality and mimic Feng Zikai's style, but the reflection of poem semantics is limited. The dataset also poses many interesting research directions on this task, including transfer learning, few-shot learning, text-to-image generation for low-resource data, etc. The dataset is publicly available at https://github.com/paint4poem/paint4poem
Optimized Automated Cardiac MR Scar Quantification with GAN-Based Data Augmentation ; Background: The clinical utility of late gadolinium enhancement (LGE) cardiac MRI is limited by the lack of standardization and time-consuming postprocessing. In this work, we tested the hypothesis that a cascaded deep learning pipeline trained with augmentation by synthetically generated data would improve model accuracy and robustness for automated scar quantification. Methods: A cascaded pipeline consisting of three consecutive neural networks is proposed, starting with a bounding box regression network to identify a region of interest around the left ventricular (LV) myocardium. Two further nnU-Net models are then used to segment the myocardium and, if present, scar. The models were trained on the data from the EMIDEC challenge, supplemented with an extensive synthetic dataset generated with a conditional GAN. Results: The cascaded pipeline significantly outperformed a single nnU-Net directly segmenting both the myocardium (mean Dice similarity coefficient (DSC) ± standard deviation (SD): 0.84 ± 0.09 vs 0.63 ± 0.20, p < 0.01) and scar (DSC 0.72 ± 0.34 vs 0.46 ± 0.39, p < 0.01) on a per-slice level. The inclusion of the synthetic data as data augmentation during training improved the scar segmentation DSC by 0.06 (p < 0.01). The mean DSC per subject on the challenge test set, for the cascaded pipeline augmented by synthetically generated data, was 0.86 ± 0.03 and 0.67 ± 0.29 for myocardium and scar, respectively. Conclusion: A cascaded deep learning-based pipeline trained with augmentation by synthetically generated data leads to myocardium and scar segmentations that are similar to those of the manual operator, and outperforms direct segmentation without the synthetic images.
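The cascade above runs a localization network, then a myocardium segmenter, then a scar segmenter on the cropped region. The schematic below shows how such a three-stage cascade can be chained at inference time; the three model callables, the margin, and the 0.5 thresholds are placeholders, not the trained networks or settings from the paper.

```python
import numpy as np

def cascaded_scar_pipeline(image, box_model, myo_model, scar_model, margin=8):
    """Three-stage cascade: locate the LV ROI, segment myocardium, then scar (sketch)."""
    x0, y0, x1, y1 = box_model(image)                       # ROI around the LV myocardium
    x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
    x1 = min(image.shape[1], x1 + margin)
    y1 = min(image.shape[0], y1 + margin)
    roi = image[y0:y1, x0:x1]
    myo_mask = myo_model(roi) > 0.5                          # binary myocardium mask in the ROI
    scar_mask = (scar_model(roi) > 0.5) & myo_mask           # scar restricted to the myocardium
    full_myo = np.zeros(image.shape[:2], dtype=bool)         # paste ROI masks back into full frame
    full_scar = np.zeros(image.shape[:2], dtype=bool)
    full_myo[y0:y1, x0:x1] = myo_mask
    full_scar[y0:y1, x0:x1] = scar_mask
    return full_myo, full_scar
```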
Fully Spiking Variational Autoencoder ; Spiking neural networks (SNNs) can be run on neuromorphic devices with ultra-high speed and ultra-low energy consumption because of their binary and event-driven nature. Therefore, SNNs are expected to have various applications, including as generative models running on edge devices to create high-quality images. In this study, we build a variational autoencoder (VAE) with SNN to enable image generation. VAE is known for its stability among generative models, and its quality has recently advanced. In a vanilla VAE, the latent space is represented as a normal distribution, and floating-point calculations are required in sampling. However, this is not possible in SNNs because all features must be binary time series data. Therefore, we constructed the latent space with an autoregressive SNN model, and randomly selected samples from its output to sample the latent variables. This allows the latent variables to follow the Bernoulli process and allows variational learning. Thus, we build the Fully Spiking Variational Autoencoder where all modules are constructed with SNN. To the best of our knowledge, we are the first to build a VAE only with SNN layers. We experimented with several datasets, and confirmed that it can generate images with the same or better quality compared to conventional ANNs. The code is available at https://github.com/kamata1729/FullySpikingVAE
Improving Zero-shot Multilingual Neural Machine Translation for Low-Resource Languages ; Although the multilingual Neural Machine Translation (NMT), which extends Google's multilingual NMT, has the ability to perform zero-shot translation, and the iterative self-learning algorithm can improve the quality of zero-shot translation, it confronts two problems: the multilingual NMT model is prone to generate the wrong target language when implementing zero-shot translation, and the self-learning algorithm, which uses beam search to generate synthetic parallel data, demolishes the diversity of the generated source language and amplifies the impact of the same noise during the iterative learning process. In this paper, we propose the tagged-multilingual NMT model and improve the self-learning algorithm to handle these two problems. Firstly, we extend Google's multilingual NMT model and add target tokens to the target languages, which associates the start tag with the target language to ensure that the source language can be translated to the required target language. Secondly, we improve the self-learning algorithm by replacing beam search with random sampling to increase the diversity of the generated data and make it properly cover the true data distribution. Experimental results on IWSLT show that the adjusted tagged-multilingual NMT separately obtains 9.41 and 7.85 BLEU scores over the multilingual NMT on the 2010 and 2017 Romanian-Italian test sets. Similarly, it obtains 9.08 and 7.99 BLEU scores on Italian-Romanian zero-shot translation. Furthermore, the improved self-learning algorithm shows its superiority over the conventional self-learning algorithm on zero-shot translations.
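The improved self-learning step above generates synthetic parallel data by sampling rather than beam search, with a start tag tied to the target language. The sketch below shows how that can look with a HuggingFace-style seq2seq interface; assuming such an interface, the language-tag handling, sampling settings, and names are illustrative and may differ from the paper's tagged-multilingual setup.

```python
import torch

def sample_synthetic_pairs(model, tokenizer, sources, tgt_lang_token, max_len=64):
    """Create synthetic (source, translation) pairs by sampling, not beam search (sketch)."""
    tag_id = tokenizer.convert_tokens_to_ids(tgt_lang_token)  # language tag as first target token
    pairs = []
    for src in sources:
        inputs = tokenizer(src, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**inputs, do_sample=True, top_k=50,
                                 max_length=max_len, forced_bos_token_id=tag_id)
        hyp = tokenizer.decode(out[0], skip_special_tokens=True)
        pairs.append((src, hyp))       # sampled translations keep the synthetic data diverse
    return pairs
```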
Reflecting on chirality: CP-violating extensions of the single scalar-leptoquark solutions for the $(g-2)_{e,\mu}$ puzzles and their implications for lepton EDMs ; We study the two scalar leptoquarks capable of generating chirally-enhanced, sign-dependent contributions to lepton magnetic dipole moments (MDMs) and electric dipole moments (EDMs), $R_2 \sim (\mathbf{3}, \mathbf{2}, 7/6)$ and $S_1 \sim (\mathbf{3}, \mathbf{1}, 1/3)$. We consider the case in which the electron and muon sectors are decoupled, and leptoquark couplings are assigned complex values. Adopting the coupling ansatz that the electron dipole operator is generated by charm-containing loops, and the muon dipole operator by top-containing loops, we find that both minimal leptoquark models remain viable solutions for reconciling anomalies in the muon and electron MDMs, accounting for either of the two current disparate electron MDM results from Cs and Rb interferometry experiments. We also examine the correlated corrections to the muon and electron masses generated by these models, and argue that to minimise fine-tuning this introduces an upper bound on viable leptoquark ($\phi$) masses, $m_\phi \lesssim \mathcal{O}(4~\mathrm{TeV})$. Similar arguments allow us to make a prediction for the upper bound of the muon EDM generated by these models, $d_\mu \lesssim \mathcal{O}(10^{-22})\; e\,$cm, which could be within reach of upcoming experimental programs, including Muon g-2 at Fermilab (FNAL) and muEDM at the Paul Scherrer Institute (PSI).
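To make the notions of chiral enhancement and sign dependence concrete, the contributions of a scalar leptoquark $\phi$ coupling to both lepton chiralities scale schematically as below. This is a generic scaling relation with loop functions and $\mathcal{O}(1)$ colour/loop factors suppressed, offered as background rather than as the paper's exact expressions.

```latex
\[
\Delta a_\ell \;\propto\; \frac{m_\ell\, m_q}{m_\phi^{2}}\,
  \mathrm{Re}\!\left(\lambda^{L}_{\ell q}\,\lambda^{R*}_{\ell q}\right),
\qquad
d_\ell \;\propto\; \frac{e\, m_q}{m_\phi^{2}}\,
  \mathrm{Im}\!\left(\lambda^{L}_{\ell q}\,\lambda^{R*}_{\ell q}\right).
\]
% The internal quark mass m_q (m_c for the electron, m_t for the muon under
% the ansatz above) provides the chiral enhancement; the sign and phase of
% the coupling product control the sign of the MDM shift and the size of the EDM.
```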
Controllable Recommenders using Deep Generative Models and Disentanglement ; In this paper, we consider controllability as a means to satisfy dynamic preferences of users, enabling them to control recommendations such that their current preference is met. While deep models have shown improved performance for collaborative filtering, they are generally not amenable to fine-grained control by a user, leading to the development of methods like deep language critiquing. We propose an alternate view, where instead of keyphrase-based critiques, a user is provided 'knobs' in a disentangled latent space, with each knob corresponding to an item aspect. Disentanglement here refers to a latent space where generative factors (here, a preference towards an item category like genre) are captured independently in their respective dimensions, thereby enabling predictable manipulations that are otherwise not possible in an entangled space. We propose using a semi-supervised disentanglement objective for this purpose, as well as multiple metrics to evaluate the controllability and the degree of personalization of controlled recommendations. We show that by updating the disentangled latent space based on user feedback, and by exploiting the generative nature of the recommender, controlled and personalized recommendations can be produced. Through experiments on two widely used collaborative filtering datasets, we demonstrate that a controllable recommender can be trained with a slight reduction in recommender performance, provided enough supervision is provided. The recommendations produced by these models appear to both conform to a user's current preference and remain personalized.
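A minimal sketch of the 'knob' idea: nudge a single dimension of a disentangled user latent and re-decode item scores. The decoder and the mapping from knob index to item aspect are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

def turn_knob(z, knob_index, delta, decoder, z_min=-3.0, z_max=3.0):
    """Adjust one dimension of a disentangled user latent and re-decode
    recommendation scores (illustrative only).

    z          : (d,) user latent vector
    knob_index : which disentangled dimension (e.g. a genre preference)
    delta      : how far the user turns the knob
    decoder    : callable mapping a latent vector to item scores
    """
    z_new = z.copy()
    z_new[knob_index] = np.clip(z_new[knob_index] + delta, z_min, z_max)
    return decoder(z_new)

# Toy decoder: linear map from an 8-d latent to scores over 5 items.
rng = np.random.default_rng(1)
W = rng.normal(size=(5, 8))
decoder = lambda z: W @ z

z_user = rng.normal(size=8)
before = decoder(z_user)
after = turn_knob(z_user, knob_index=2, delta=+1.5, decoder=decoder)
print(np.argsort(-before)[:3], np.argsort(-after)[:3])   # top-3 items shift
```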
Generalizing to New Domains by Mapping Natural Language to Lifted LTL ; Recent work on using natural language to specify commands to robots has grounded that language to LTL. However, mapping natural language task specifications to LTL task specifications using language models requires probability distributions over a finite vocabulary. Existing state-of-the-art methods have extended this finite vocabulary to include unseen terms from the input sequence to improve output generalization. However, novel out-of-vocabulary atomic propositions cannot be generated using these methods. To overcome this, we introduce an intermediate contextual query representation which can be learned from single positive task specification examples, associating a contextual query with an LTL template. We demonstrate that this intermediate representation allows for generalization over unseen object references, assuming accurate groundings are available. We compare our method of mapping natural language task specifications to intermediate contextual queries against state-of-the-art CopyNet models capable of translating natural language to LTL, by evaluating whether correct LTL for manipulation and navigation task specifications can be output, and show that our method outperforms the CopyNet model on unseen object references. We demonstrate that the grounded LTL our method outputs can be used for planning in a simulated OO-MDP environment. Finally, we discuss some common failure modes encountered when translating natural language task specifications to grounded LTL.
Collaborative Semantic Aggregation and Calibration for Federated Domain Generalization ; Domain generalization (DG) aims to learn from multiple known source domains a model that can generalize well to unknown target domains. Existing DG methods usually exploit the fusion of shared multi-source data to train a generalizable model. However, tremendous amounts of data are nowadays distributed across many places and cannot be shared due to privacy policies. In this paper, we tackle the problem of federated domain generalization, where the source datasets can only be accessed and learned locally for privacy protection. We propose a novel framework called Collaborative Semantic Aggregation and Calibration (CSAC) to enable this challenging problem. To fully absorb multi-source semantic information while avoiding unsafe data fusion, we conduct data-free semantic aggregation by fusing the models trained on the separated domains layer-by-layer. To address the semantic dislocation problem caused by domain shift, we further design cross-layer semantic calibration with an attention mechanism to align each semantic level and enhance domain invariance. We unify multi-source semantic learning and alignment in a collaborative way by repeating the semantic aggregation and calibration alternately, keeping each dataset localized, so that data privacy is carefully protected. Extensive experiments show the significant performance of our method in addressing this challenging problem.
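A simplified numpy sketch of the data-free, layer-by-layer aggregation step: only locally trained model parameters are fused, never the raw data. The attention-based cross-layer calibration described in the abstract is omitted; the function and variable names are assumptions.

```python
import numpy as np

def aggregate_layerwise(local_models, weights=None):
    """Data-free, layer-by-layer aggregation of locally trained models.

    local_models : list of dicts mapping layer name -> parameter array,
                   one dict per source domain, all with identical shapes
    weights      : optional per-domain aggregation weights (sum to 1)
    """
    n = len(local_models)
    if weights is None:
        weights = np.full(n, 1.0 / n)
    fused = {}
    for layer in local_models[0]:
        # Weighted average of the same layer across domains; only model
        # parameters are exchanged, keeping each dataset localized.
        fused[layer] = sum(w * m[layer] for w, m in zip(weights, local_models))
    return fused

# Toy usage with two "domains" and two layers.
rng = np.random.default_rng(2)
m1 = {"conv1": rng.normal(size=(3, 3)), "fc": rng.normal(size=(4,))}
m2 = {"conv1": rng.normal(size=(3, 3)), "fc": rng.normal(size=(4,))}
print(aggregate_layerwise([m1, m2])["fc"])
```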
Neural Dubber: Dubbing for Videos According to Scripts ; Dubbing is a post-production process of re-recording actors' dialogues, which is extensively used in filmmaking and video production. It is usually performed manually by professional voice actors who read lines with proper prosody, and in synchronization with the pre-recorded videos. In this work, we propose Neural Dubber, the first neural network model to solve a novel automatic video dubbing (AVD) task: synthesizing human speech synchronized with the given video from the text. Neural Dubber is a multimodal text-to-speech (TTS) model that utilizes the lip movement in the video to control the prosody of the generated speech. Furthermore, an image-based speaker embedding (ISE) module is developed for the multi-speaker setting, which enables Neural Dubber to generate speech with a reasonable timbre according to the speaker's face. Experiments on the chemistry lecture single-speaker dataset and the LRS2 multi-speaker dataset show that Neural Dubber can generate speech audio on par with state-of-the-art TTS models in terms of speech quality. Most importantly, both qualitative and quantitative evaluations show that Neural Dubber can control the prosody of synthesized speech by the video, and generate high-fidelity speech temporally synchronized with the video. Our project page is at https://tsinghua-mars-lab.github.io/NeuralDubber/
EventNarrative: A Large-scale Event-centric Dataset for Knowledge Graph-to-Text Generation ; We introduce EventNarrative, a knowledge graph-to-text dataset from publicly available open-world knowledge graphs. Given the recent advances in event-driven Information Extraction (IE), and that prior research on graph-to-text has focused only on entity-driven KGs, this paper focuses on event-centric data. However, our data generation system can still be adapted to other types of KG data. Existing large-scale datasets in the graph-to-text area are non-parallel, meaning there is a large disconnect between the KGs and the text. The datasets that have paired KG and text are small-scale and either manually generated or generated without a rich ontology, making the corresponding graphs sparse. Furthermore, these datasets contain many unlinked entities between their KG and text pairs. EventNarrative consists of approximately 230,000 graphs and their corresponding natural language text, 6 times larger than the current largest parallel dataset. It makes use of a rich ontology, all of the KG entities are linked to the text, and our manual annotations confirm a high data quality. Our aim is twofold: to help break new ground in event-centric research where data is lacking, and to give researchers a well-defined, large-scale dataset in order to better evaluate existing and future knowledge graph-to-text models. We also evaluate two types of baselines on EventNarrative: a graph-to-text specific model and two state-of-the-art language models, which previous work has shown to be adaptable to the knowledge graph-to-text domain.
Black Hole Images as Tests of General Relativity: Effects of Plasma Physics ; The horizon-scale images of black holes obtained with the Event Horizon Telescope have provided new probes of their metrics and tests of General Relativity. The images are characterized by a bright, near-circular ring from the gravitationally lensed emission from the hot plasma and a deep central depression cast by the black hole. The metric tests rely on the fact that the bright ring closely traces the boundary of the black hole shadow, with a small displacement that has been quantified using simulations. In this paper we develop a self-consistent covariant analytic model of the accretion flow that spans a broad range of plasma conditions and black-hole properties to explore the general validity of this result. We show that, for any physical model of the accretion flow, the ring always encompasses the outline of the shadow and is not displaced from it by more than half the ring width. This result is a consequence of conservation laws and basic thermodynamic considerations and does not depend on the microphysics of the plasma or the details of the numerical simulations. We also present a quantitative measurement of the bias between the bright ring and the shadow radius based on the analytical models.
Reconstructing Nonlinear Dynamical Systems from Multi-Modal Time Series ; Empirically observed time series in physics, biology, or medicine are commonly generated by some underlying dynamical system (DS), which is the target of scientific interest. There is increasing interest in harnessing machine learning methods to reconstruct this latent DS in a data-driven, unsupervised way. In many areas of science it is common to sample time series observations from many data modalities simultaneously, e.g. electrophysiological and behavioral time series in a typical neuroscience experiment. However, current machine learning tools for reconstructing DSs usually focus on just one data modality. Here we propose a general framework for multi-modal data integration for the purpose of nonlinear DS reconstruction and the analysis of cross-modal relations. This framework is based on dynamically interpretable recurrent neural networks as general approximators of nonlinear DSs, coupled to sets of modality-specific decoder models from the class of generalized linear models. Both an expectation-maximization and a variational inference algorithm for model training are advanced and compared. We show on nonlinear DS benchmarks that our algorithms can efficiently compensate for too noisy or missing information in one data channel by exploiting other channels, and demonstrate on experimental neuroscience data how the algorithm learns to link different data domains to the underlying dynamics.
Identifying urban air pollution hotspots by dispersion modeling when data are scarce: application to diesel generators in Beirut, Lebanon ; Diesel generators are emerging as community-initiated solutions to compensate for electricity shortages in cities marred by economic crisis and/or conflict. The resulting pollution distribution in dense urban environments is a major source of concern to the population. In the absence of periodic observations from properly distributed sensors, as is the case in Beirut, physically based computational modeling stands out as an effective tool for predicting the pollutant distribution in complex environments, and a cost-effective framework for investigating what-if scenarios and assessing mitigation strategies. Here, we present a Lagrangian transport model-based study of PM2.5 dispersion originating from a large number of diesel generators in Beirut. We explore large- and small-scale dispersion patterns in selected small domains and over the entire city. The scenarios considered investigate the impact of topography, atmospheric stability, presence of buildings, diesel generator distribution, and stack elevations for representative meteorological conditions. Assessment of these scenarios is carried out in terms of small- and large-scale dispersion patterns, the mean concentration at street level, and population exposure proxy indicators. We also report on the efficacy of elevating the stack height as a mitigation measure at different representative wind and atmospheric stability conditions.
Highly Accurate and Reliable Wireless Network Slicing in 5th Generation Networks: A Hybrid Deep Learning Approach ; In the current era, next-generation networks like 5th generation (5G) and 6th generation (6G) networks require high security and low latency with highly reliable standards and capacity. In these networks, reconfigurable wireless network slicing is considered one of the key elements for 5G and 6G networks. Reconfigurable slicing allows the operators to run various instances of the network using a single infrastructure for a better quality of service (QoS). The QoS can be achieved by reconfiguring and optimizing these networks using artificial intelligence and machine learning algorithms. To develop a smart decision-making mechanism for network management and restricting network slice failures, machine learning-enabled reconfigurable wireless network solutions are required. In this paper, we propose a hybrid deep learning model that consists of a convolutional neural network (CNN) and long short-term memory (LSTM). The CNN performs resource allocation, network reconfiguration, and slice selection, while the LSTM is used for statistical information (load balancing, error rate, etc.) regarding network slices. The applicability of the proposed model is validated by using multiple unknown devices, slice failure, and overloading conditions. An overall accuracy of 95.17% is achieved by the proposed model, which reflects its applicability.
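A minimal PyTorch sketch of a hybrid CNN + LSTM block of the kind described above, mapping per-slice KPI sequences to slice logits. The framework choice, layer sizes, and input features are assumptions for illustration; the paper does not specify this exact architecture.

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """Toy hybrid model: a 1-D CNN extracts features from per-slice KPI
    sequences, an LSTM summarizes them over time, and a linear head picks
    a slice (illustrative layer sizes only)."""
    def __init__(self, n_features=8, n_slices=3, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_slices)

    def forward(self, x):            # x: (batch, time, n_features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, hidden)
        _, (h_n, _) = self.lstm(h)                       # last hidden state
        return self.head(h_n[-1])                        # slice logits

model = HybridCNNLSTM()
logits = model(torch.randn(4, 20, 8))   # 4 devices, 20 time steps, 8 KPIs
print(logits.shape)                     # torch.Size([4, 3])
```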
Distributed CONGEST Approximation of Weighted Vertex Covers and Matchings ; We provide CONGEST model algorithms for approximating the minimum weighted vertex cover and the maximum weighted matching. For bipartite graphs, we show that a $(1+\varepsilon)$-approximate weighted vertex cover can be computed deterministically in polylogarithmic time. This generalizes a corresponding result for the unweighted vertex cover problem shown in Faour, Kuhn; OPODIS '20. Moreover, we show that in general weighted graph families that are closed under taking subgraphs and in which we can compute an independent set of weight at least a $\lambda$-fraction of the total weight, one can compute a $(2-2\lambda+\varepsilon)$-approximate weighted vertex cover in polylogarithmic time in the CONGEST model. Our result in particular implies that in graphs of arboricity $a$, one can compute a $(2-1/a+\varepsilon)$-approximate weighted vertex cover. For maximum weighted matchings, we show that a $(1-\varepsilon)$-approximate solution can be computed deterministically in polylogarithmic CONGEST rounds for constant $\varepsilon$. We also provide a more efficient randomized algorithm. Our algorithm generalizes results of Lotker, Patt-Shamir, Pettie; SPAA '08 and Bar-Yehuda, Hillel, Ghaffari, Schwartzman; PODC '17 for the unweighted case. Finally, we show that even in the LOCAL model and in bipartite graphs of degree $\leq 3$, if $\varepsilon \leq \varepsilon_0$ for some constant $\varepsilon_0 > 0$, then computing a $(1+\varepsilon)$-approximation for the unweighted minimum vertex cover problem requires $\Omega\big(\frac{\log n}{\varepsilon}\big)$ rounds. This generalizes a result of Göös, Suomela; DISC '12, who showed that computing a $(1+\varepsilon_0)$-approximation in such graphs requires $\Omega(\log n)$ rounds.
Experience-Enhanced Learning: One Size Still Does Not Fit All in Automatic Database ; In recent years, the database community has attempted to develop automatic database management systems. Although some research shows that applying AI to data management is a significant and promising direction, there still exist many problems in applying these techniques to real applications: long training time, varying environments, and unstable performance. In this paper, we discover that traditional rule-based methods have the potential to solve the above problems. We propose three methodologies for improving learned methods, i.e., label collection for efficient pretraining, a knowledge base for model transfer, and theoretical guarantees for stable performance. We implement our methodologies on two widely used learning approaches, deep learning and reinforcement learning. Firstly, the novel experience-enhanced deep learning (EEDL) can achieve efficient training and stable performance. We evaluate EEDL with cardinality estimation, an essential database management task. The experimental results on four real datasets show that our EEDL outperforms the general DL model. Secondly, we design a novel experience-enhanced reinforcement learning (EERL), which converges efficiently and has better performance than general RL models. We test EERL with the online index tuning task. The experiments on TPC-H show that EERL can accelerate the convergence of the agent and generate better solutions.
Distributed energy control in electric energy systems ; The power interactions of any component in electric energy systems with the rest of the system happen naturally, as governed by the energy conservation principles. There may, however, occur instances when the rate at which power gets generated by one component through local energy conversion is not exactly the same as that absorbed by the rest of the system. This is when instabilities get induced. To model and control such instabilities, this paper generalizes the notion of the interaction variable used to characterize diverse system components in a unified manner. The same variable captures aggregate system-wide effects and sets reference points for multi-layered distributed output feedback control. It has a physical interpretation of instantaneous power and generalized reactive power. The higher-layer design utilizes the interactive energy state-space model to derive intermediate reactive power control, which becomes a control command to the lower-layer physical model. This command is implemented using either Feedback Linearizing Control (FBLC) or Sliding Mode Control (SMC), for which sufficient stability conditions are stated. This paper claims that the proposed design is fundamental to aligning dynamic interactions between components for stability and feasibility. Without loss of generality, we utilize a simple RLC circuit with a controllable voltage source for illustration, which is a simplified representation of any controllable component in microgrids.
Confounder Identification-free Causal Visual Feature Learning ; Confounders in deep learning are in general detrimental to a model's generalization when they infiltrate feature representations. Therefore, learning causal features that are free of interference from confounders is important. Most previous causal learning based approaches employ the backdoor criterion to mitigate the adverse effect of certain specific confounders, which requires the explicit identification of the confounder. However, in real scenarios, confounders are typically diverse and difficult to identify. In this paper, we propose a novel Confounder Identification-free Causal Visual Feature Learning (CICF) method, which obviates the need for identifying confounders. CICF models the interventions among different samples based on the front-door criterion, and then approximates the global-scope intervening effect upon the instance-level interventions from the perspective of optimization. In this way, we aim to find a reliable optimization direction, which avoids the intervening effects of confounders, to learn causal features. Furthermore, we uncover the relation between CICF and the popular meta-learning strategy MAML, and provide an interpretation of why MAML works from the theoretical perspective of causal learning for the first time. Thanks to the effective learning of causal features, our CICF enables models to have superior generalization capability. Extensive experiments on domain generalization benchmark datasets demonstrate the effectiveness of our CICF, which achieves state-of-the-art performance.
Generative Modeling of Turbulence ; We present a mathematically well-founded approach for the synthetic modeling of turbulent flows using generative adversarial networks (GAN). Based on the analysis of chaotic, deterministic systems in terms of ergodicity, we outline a mathematical proof that GAN can actually learn to sample state snapshots from the invariant measure of the chaotic system. Based on this analysis, we study a hierarchy of chaotic systems, starting with the Lorenz attractor and then carrying on to the modeling of turbulent flows with GAN. As training data, we use fields of velocity fluctuations obtained from large eddy simulations (LES). Two architectures are investigated in detail: we use a deep convolutional GAN (DCGAN) to synthesise the turbulent flow around a cylinder, and we simulate the flow around a low-pressure turbine stator using the pix2pixHD architecture for a conditional DCGAN conditioned on the position of a rotating wake in front of the stator. The settings of adversarial training and the effects of using specific GAN architectures are explained. We thereby show that GAN are efficient in simulating turbulence in technically challenging flow problems on the basis of a moderate amount of training data. GAN training and inference times fall significantly short of those of classical numerical methods, in particular LES, while still providing turbulent flows in high resolution. We furthermore analyse the statistical properties of the synthesized and LES flow fields, which agree excellently. We also show the ability of the conditional GAN to generalize over changes of geometry by generating turbulent flow fields for positions of the wake that are not included in the training data.
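For readers unfamiliar with the DCGAN family used here, the following PyTorch sketch shows a minimal DCGAN-style generator that upsamples a latent vector into a 2-D velocity-fluctuation field. Channel counts and field size are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN-style generator: latent vector -> 2-channel 32x32 field
    (e.g. two velocity-fluctuation components). Purely illustrative sizes."""
    def __init__(self, latent_dim=64, channels=2):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> 128 x 4 x 4
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            # 128 x 4 x 4 -> 64 x 8 x 8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            # 64 x 8 x 8 -> 32 x 16 x 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            # 32 x 16 x 16 -> channels x 32 x 32
            nn.ConvTranspose2d(32, channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):                    # z: (batch, latent_dim)
        return self.net(z[:, :, None, None])

g = DCGANGenerator()
fields = g(torch.randn(8, 64))
print(fields.shape)   # torch.Size([8, 2, 32, 32])
```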
Bifurcated symmetry breaking in scalar-tensor gravity ; We present models that simultaneously predict the presence of dark energy and cold dark matter along with slow-roll inflation. The dark energy density is found to be of order $(\mathrm{a\ few\ meV})^4$, and the mass of the dark matter constituent is $\approx 1$ meV. These numbers are given in terms of the present value of the Hubble constant $H_0$ and the Planck energy $1/\sqrt{16\pi G_N}$: they are $(H_0 M_{\rm P})^2$ for the energy density and $(H_0 M_{\rm P})^{1/2}$ for the dark matter constituent mass. The basic framework is a multi-scalar tensor gravity with nontrivial conformal coupling to the Ricci scalar curvature in the Lagrangian density. The key for a right amount of dark energy is to incorporate in a novel way the spatially homogeneous kinetic contribution of Nambu-Goldstone modes in a spontaneously broken multi-scalar field sector. The proposed theories are made consistent with general relativity tests at small cosmological distances, yet differ from general relativity at cosmological scales. Dark matter is generated as the spatially inhomogeneous component of the scalar system, with an amount roughly comparable to the dark energy. In some presented models a cosmological bifurcation of symmetry breaking of the scalar sector is triggered by the spontaneous breaking of the electroweak $SU(2)\times U(1)$ gauge symmetry, hence the separation occurs simultaneously at the electroweak phase transition. The best experimental method to test the presented models is to search for a fifth-force type of scalar exchange interaction with a force range of $O(10^{-2})$ cm, whose coupling to matter is basically of gravitational strength.
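As a back-of-the-envelope consistency check of these scales (my own numerical estimate, using $H_0 \approx 1.4\times10^{-33}$ eV and $M_{\rm P} = 1/\sqrt{16\pi G_N} \approx 1.7\times10^{27}$ eV; not a computation taken from the paper):

```latex
\[
(H_0 M_{\rm P})^{1/2} \approx \left(1.4\times10^{-33}\,\mathrm{eV}
  \times 1.7\times10^{27}\,\mathrm{eV}\right)^{1/2}
  \approx 1.5\times10^{-3}\,\mathrm{eV} \sim 1\,\mathrm{meV},
\qquad
(H_0 M_{\rm P})^{2} \approx 6\times10^{-12}\,\mathrm{eV}^4
  \sim (\mathrm{a\ few\ meV})^4 .
\]
% A scalar of mass ~1.5 meV mediates a force of range
% $\hbar c/m \approx 197\,\mathrm{eV\,nm}/(1.5\times10^{-3}\,\mathrm{eV})
% \approx 10^{-2}\,\mathrm{cm}$, consistent with the quoted fifth-force range.
```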
CSG0: Continual Urban Scene Generation with Zero Forgetting ; With the rapid advances in generative adversarial networks (GANs), the visual quality of synthesised scenes keeps improving, including for complex urban scenes with applications to automated driving. We address in this work a continual scene generation setup in which GANs are trained on a stream of distinct domains; ideally, the learned models should eventually be able to generate new scenes in all seen domains. This setup reflects the real-life scenario where data are continuously acquired in different places at different times. In such a continual setup, we aim for learning with zero forgetting, i.e., with no degradation in synthesis quality over earlier domains due to catastrophic forgetting. To this end, we introduce a novel framework that not only (i) enables seamless knowledge transfer in continual training but also (ii) guarantees zero forgetting with a small overhead cost. While being more memory efficient, thanks to continual learning, our model obtains better synthesis quality compared against the brute-force solution that trains one full model for each domain. Especially under extreme low-data regimes, our approach outperforms the brute-force one by a large margin.
Self-Ensembling GAN for Cross-Domain Semantic Segmentation ; Deep neural networks (DNNs) have greatly contributed to the performance gains in semantic segmentation. Nevertheless, training DNNs generally requires large amounts of pixel-level labeled data, which is expensive and time-consuming to collect in practice. To mitigate the annotation burden, this paper proposes a self-ensembling generative adversarial network (SEGAN) exploiting cross-domain data for semantic segmentation. In SEGAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN. Despite its simplicity, we find SEGAN can significantly boost the performance of adversarial training and enhance the stability of the model, the latter of which is a common barrier shared by most adversarial training-based methods. We theoretically analyze SEGAN and provide an $\mathcal{O}(1/\sqrt{N})$ generalization bound ($N$ is the training sample size), which suggests controlling the discriminator's hypothesis complexity to enhance the generalizability. Accordingly, we choose a simple network as the discriminator. Extensive and systematic experiments in two standard settings demonstrate that the proposed method significantly outperforms current state-of-the-art approaches. The source code of our model is available online at https://github.com/YonghaoXu/SEGAN.
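The teacher-student self-ensembling idea is commonly realised as an exponential moving average (EMA) of the student's weights; the numpy sketch below shows that generic mean-teacher recipe. It is offered as an assumption about how such self-ensembling is typically implemented, not as the paper's exact procedure.

```python
import numpy as np

def ema_update(teacher, student, decay=0.99):
    """Update teacher parameters as an exponential moving average of the
    student's parameters (self-ensembling). Both models are represented as
    dicts mapping parameter name -> numpy array of identical shapes."""
    for name in teacher:
        teacher[name] = decay * teacher[name] + (1.0 - decay) * student[name]
    return teacher

# Toy usage: after each student optimisation step, refresh the teacher.
rng = np.random.default_rng(3)
student = {"w": rng.normal(size=(4, 4)), "b": np.zeros(4)}
teacher = {k: v.copy() for k, v in student.items()}
for step in range(10):
    student["w"] += 0.01 * rng.normal(size=(4, 4))   # pretend gradient step
    teacher = ema_update(teacher, student)
print(np.abs(teacher["w"] - student["w"]).mean())     # teacher lags smoothly
```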
Translating Human Mobility Forecasting through Natural Language Generation ; Existing human mobility forecasting models follow the standard design of time-series prediction models, which take a series of numerical values as input to generate a numerical value as a prediction. Although treating this as a regression problem seems straightforward, incorporating various contextual information, such as the semantic category information of each Place-of-Interest (POI), is a necessary step, and often the bottleneck, in designing an effective mobility prediction model. As opposed to the typical approach, we treat forecasting as a translation problem and propose a novel forecasting pipeline through language generation. The paper aims to address the human mobility forecasting problem as a language translation task in a sequence-to-sequence manner. A mobility-to-language template is first introduced to describe the numerical mobility data as natural language sentences. The core intuition of the human mobility forecasting translation task is to convert the input mobility description sentences into a future mobility description from which the prediction target can be obtained. Under this pipeline, a two-branch network, SHIFT (Translating Human Mobility Forecasting), is designed. Specifically, it consists of one main branch for language generation and one auxiliary branch to directly learn mobility patterns. During training, we develop a momentum mode for better connecting and training the two branches. Extensive experiments on three real-world datasets demonstrate that the proposed SHIFT is effective and presents a new revolutionary approach to forecasting human mobility.
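A small Python sketch of the mobility-to-language idea: describe the numerical history with a sentence template and recover the numeric prediction from the generated completion. The template wording and field names are hypothetical; the paper's actual template is not reproduced here.

```python
import re

# Hypothetical template; the paper's exact wording may differ.
TEMPLATE = ("There were {v1}, {v2} and {v3} visits on the last three days "
            "at the {category} POI. The number of visits on the next day "
            "will be {target}.")

def mobility_to_sentence(history, category, target="[MASK]"):
    """Describe numerical mobility data as a natural language sentence."""
    v1, v2, v3 = history[-3:]
    return TEMPLATE.format(v1=v1, v2=v2, v3=v3, category=category, target=target)

def sentence_to_prediction(generated_sentence):
    """Recover the numerical prediction from the generated description."""
    match = re.search(r"will be (\d+)", generated_sentence)
    return int(match.group(1)) if match else None

prompt = mobility_to_sentence([12, 15, 9], "restaurant")
print(prompt)
# A language-generation branch would fill in the masked target, e.g.:
print(sentence_to_prediction(prompt.replace("[MASK]", "11")))   # -> 11
```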
General Relativistic Shock Waves that Exhibit Cosmic Acceleration ; This paper concerns the construction and analysis of a new family of exact general relativistic shock waves. The construction resolves the open problem of determining the expanding waves created behind a shock-wave explosion into a static isothermal sphere with an inverse-square density and pressure profile. The construction involves matching two self-similar families of solutions to the perfect fluid Einstein field equations across a spherical shock surface. The matching is accomplished in Schwarzschild coordinates, where the shock waves appear one derivative less regular than they actually are. Separately, both families contain singularities, but as matched shock-wave solutions they are singularity free. There was no guarantee ahead of time that the matching of the two families could be achieved within the regions where both families are nonsingular. Indeed, for pure radiation equations of state, the matching occurs near the sonic point of the interior expanding wave, and this makes the analysis quite delicate, both numerically and formally. It is for this reason that the construction is accompanied by a rigorous existence proof in the pure radiation case. The analysis is extended to demonstrate Lax stability in the pure radiation case and to provide a criterion for Lax stability in all other cases. These shock-wave solutions represent an intriguing new mechanism in General Relativity for exhibiting accelerations in asymptotically Friedmann spacetimes, analogous to the accelerations modelled by the cosmological constant in the Standard Model of Cosmology. However, unlike in the Standard Model of Cosmology, these shock-wave solutions solve the Einstein field equations in the absence of a cosmological constant, opening up the question of whether a purely mathematical mechanism could account for the cosmic acceleration observed today, rather than dark energy.
Fully Convolutional Change Detection Framework with Generative Adversarial Network for Unsupervised, Weakly Supervised and Regional Supervised Change Detection ; Deep learning for change detection is one of the current hot topics in the field of remote sensing. However, most end-to-end networks are proposed for supervised change detection, and unsupervised change detection models depend on traditional pre-detection methods. Therefore, we propose a fully convolutional change detection framework with a generative adversarial network, to unify unsupervised, weakly supervised, regional supervised, and fully supervised change detection tasks in one framework. A basic U-Net segmentor is used to obtain the change detection map, an image-to-image generator is implemented to model the spectral and spatial variation between multi-temporal images, and a discriminator for changed and unchanged areas is proposed for modeling the semantic changes in the weakly and regional supervised change detection tasks. The iterative optimization of the segmentor and generator can build an end-to-end network for unsupervised change detection, the adversarial process between the segmentor and discriminator provides solutions for weakly and regional supervised change detection, and the segmentor itself can be trained for the fully supervised task. The experiments indicate the effectiveness of the proposed framework in unsupervised, weakly supervised and regional supervised change detection. This paper provides theoretical definitions for unsupervised, weakly supervised and regional supervised change detection tasks, and shows great potential in exploring end-to-end networks for remote sensing change detection.
Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching ; Subgraph matching is a fundamental problem in various fields that use graph-structured data. Subgraph matching algorithms enumerate all isomorphic embeddings of a query graph q in a data graph G. An important branch of matching algorithms exploits the backtracking search approach, which recursively extends intermediate results following a matching order of query vertices. It has been shown that the matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms. In recent years, many advanced techniques for query vertex ordering (i.e., matching order generation) have been proposed to reduce the unpromising intermediate results according to preset heuristic rules. In this paper, for the first time we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques to generate high-quality matching orders for subgraph matching algorithms. Instead of using fixed heuristics to generate the matching order, our model can capture and make full use of the graph information, and thus determine the query vertex order with an adaptive learning-based rule that significantly reduces the number of redundant enumerations. With the help of the reinforcement learning framework, our model is able to consider long-term benefits rather than only the local information at the current ordering step. Extensive experiments on six real-life data graphs demonstrate that our proposed matching order generation technique can reduce the query processing time by up to two orders of magnitude compared to the state-of-the-art algorithms.
Understanding Why Generalized Reweighting Does Not Improve Over ERM ; Empirical risk minimization (ERM) is known in practice to be non-robust to distributional shift, where the training and the test distributions are different. A suite of approaches, such as importance weighting and variants of distributionally robust optimization (DRO), have been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to those obtained by ERM. We also show that adding small regularization, which does not greatly affect the empirical training accuracy, does not help. Together, our results show that a broad category of what we term GRW approaches is not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches.
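As a concrete and deliberately simple instance of the GRW template the abstract refers to, the numpy sketch below alternates between reweighting training samples by their current loss and refitting a weighted linear model. The exponential reweighting rule is just one illustrative DRO-style choice, not the paper's specific algorithm.

```python
import numpy as np

def grw_linear_regression(X, y, steps=50, eta=1.0, ridge=1e-3):
    """Generalized Reweighting (GRW) sketch: iteratively (1) compute per-sample
    losses, (2) update sample weights (here, exponentially in the loss), and
    (3) refit the model with weighted least squares."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights on the simplex
    theta = np.zeros(d)
    for _ in range(steps):
        losses = (X @ theta - y) ** 2            # per-sample squared error
        w = w * np.exp(eta * losses)             # upweight high-loss samples
        w = w / w.sum()
        # Weighted ridge regression: theta = (X^T W X + ridge I)^{-1} X^T W y
        XtW = X.T * w
        theta = np.linalg.solve(XtW @ X + ridge * np.eye(d), XtW @ y)
    return theta, w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
theta, weights = grw_linear_regression(X, y)
print(theta)   # close to the generating coefficients
```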