DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection ; Data is the cornerstone of deep learning. This paper reveals that the recently developed Diffusion Model is a scalable data engine for object detection. Existing methods for scaling up detection-oriented data often require manual collection or generative models to obtain target images, followed by data augmentation and labeling to produce training pairs, which are costly, complex, or lacking in diversity. To address these issues, we present DiffusionEngine (DE), a data scaling-up engine that provides high-quality detection-oriented training pairs in a single stage. DE consists of a pre-trained diffusion model and an effective Detection-Adapter, contributing to generating scalable, diverse and generalizable detection data in a plug-and-play manner. The Detection-Adapter is learned to align the implicit semantic and location knowledge in off-the-shelf diffusion models with detection-aware signals to make better bounding-box predictions. Additionally, we contribute two datasets, i.e., COCO-DE and VOC-DE, to scale up existing detection benchmarks for facilitating follow-up research. Extensive experiments demonstrate that data scaling-up via DE can achieve significant improvements in diverse scenarios, such as various detection algorithms, self-supervised pre-training, data-sparse, label-scarce, cross-domain, and semi-supervised learning. For example, when using DE with a DINO-based adapter to scale up data, mAP is improved by 3.1% on COCO, 7.6% on VOC, and 11.5% on Clipart.
When to Learn What: Model-Adaptive Data Augmentation Curriculum ; Data augmentation (DA) is widely used to improve the generalization of neural networks by enforcing invariances and symmetries to predefined transformations applied to input data. However, a fixed augmentation policy may have different effects on each sample in different training stages, but existing approaches cannot adjust the policy to be adaptive to each sample and the training model. In this paper, we propose Model-Adaptive Data Augmentation (MADAug), which jointly trains an augmentation policy network to teach the model when to learn what. Unlike previous work, MADAug selects augmentation operators for each input image by a model-adaptive policy varying between training stages, producing a data augmentation curriculum optimized for better generalization. In MADAug, we train the policy through a bi-level optimization scheme, which aims to minimize a validation-set loss of a model trained using the policy-produced data augmentations. We conduct an extensive evaluation of MADAug on multiple image classification tasks and network architectures, with thorough comparisons to existing DA approaches. MADAug outperforms or is on par with other baselines and exhibits better fairness: it brings improvement to all classes, and more to the difficult ones. Moreover, the MADAug-learned policy shows better performance when transferred to fine-grained datasets. In addition, the auto-optimized policy in MADAug gradually introduces increasing perturbations and naturally forms an easy-to-hard curriculum.
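A minimal sketch of the bi-level scheme described above: the inner step updates the task model on policy-produced augmentations, the outer step scores the policy by the validation loss of the updated model. The tiny policy network, the three toy operators, and the REINFORCE-style outer update are hypothetical stand-ins, not MADAug's actual operator set or gradient estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the augmentation search space.
OPS = [
    lambda x: x,                               # identity
    lambda x: torch.flip(x, dims=[-1]),        # horizontal flip
    lambda x: x + 0.05 * torch.randn_like(x),  # mild noise jitter
]

class PolicyNet(nn.Module):
    """Maps an image to a distribution over augmentation operators."""
    def __init__(self, in_dim, n_ops):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 64),
                                 nn.ReLU(), nn.Linear(64, n_ops))
    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
policy = PolicyNet(32 * 32, len(OPS))
opt_model = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_policy = torch.optim.Adam(policy.parameters(), lr=1e-3)

x_tr, y_tr = torch.randn(16, 1, 32, 32), torch.randint(0, 10, (16,))
x_va, y_va = torch.randn(16, 1, 32, 32), torch.randint(0, 10, (16,))

for step in range(3):
    # Inner step: train the task model on policy-produced augmentations.
    probs = policy(x_tr)
    idx = torch.multinomial(probs, 1).squeeze(-1)  # sample one op per image
    x_aug = torch.stack([OPS[i](xi) for i, xi in zip(idx.tolist(), x_tr)])
    loss_tr = F.cross_entropy(model(x_aug), y_tr)
    opt_model.zero_grad(); loss_tr.backward(); opt_model.step()
    # Outer step: a lower validation loss acts as a reward reinforcing the
    # sampled operators (a score-function surrogate for the bi-level gradient).
    with torch.no_grad():
        reward = -F.cross_entropy(model(x_va), y_va)
    log_p = torch.log(probs[torch.arange(len(idx)), idx] + 1e-8)
    opt_policy.zero_grad(); (-(reward * log_p).mean()).backward(); opt_policy.step()
```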
Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts ; Text-to-image diffusion models, e.g., Stable Diffusion (SD), have lately shown remarkable ability in high-quality content generation, and have become one of the representatives of the recent wave of transformative AI. Nevertheless, such advances come with an intensifying concern about the misuse of this generative technology, especially for producing copyrighted or NSFW (i.e., not safe for work) images. Although efforts have been made to filter inappropriate images/prompts or remove undesirable concepts/styles via model fine-tuning, the reliability of these safety mechanisms against diversified problematic prompts remains largely unexplored. In this work, we propose Prompting4Debugging (P4D) as a debugging and red-teaming tool that automatically finds problematic prompts for diffusion models to test the reliability of a deployed safety mechanism. We demonstrate the efficacy of our P4D tool in uncovering new vulnerabilities of SD models with safety mechanisms. In particular, our results show that around half of the prompts in existing safe prompting benchmarks which were originally considered safe can actually be manipulated to bypass many deployed safety mechanisms, including concept removal, negative prompts, and safety guidance. Our findings suggest that, without comprehensive testing, evaluations on limited safe prompting benchmarks can lead to a false sense of safety for text-to-image models.
A fast-running physics-based wake model for a semi-infinite wind farm ; This paper presents a new generation of fast-running physics-based models to predict the wake of a semi-infinite wind farm, extending infinitely in the lateral direction but with finite size in the streamwise direction. The assumption of a semi-infinite wind farm enables concurrent solving of the laterally-averaged momentum equations in both the streamwise and spanwise directions. The developed model captures important physical phenomena such as the vertical top-down transport of energy into the farm, the variable wake recovery rate due to farm-generated turbulence, and wake deflection due to turbine yaw misalignment and the Coriolis force. Of special note is the model's capability to predict and shed light on the counteracting effects of the Coriolis force, which causes wake deflections in both the positive and negative directions. Moreover, the impact of the wind-farm layout configuration on the flow distribution is modelled through a parameter called the local deficit coefficient. Model predictions were validated against large-eddy simulations extending up to 45 kilometres downstream of the wind farms. Detailed analyses were performed to study the impacts of various factors such as incoming turbulence, wind-farm size, inter-turbine spacing, and wind-farm layout on the farm wake.
Not Enough Labeled Data? Just Add Semantics: A Data-Efficient Method for Inferring Online Health Texts ; User-generated texts available on the web and social platforms are often long and semantically challenging, making them difficult to annotate. Obtaining human annotation becomes increasingly difficult as problem domains become more specialized. For example, many health NLP problems require domain experts to be a part of the annotation pipeline. Thus, it is crucial that we develop low-resource NLP solutions able to work with this set of limited-data problems. In this study, we employ Abstract Meaning Representation (AMR) graphs as a means to model low-resource health NLP tasks sourced from various online health resources and communities. AMRs are well suited to modeling online health texts, as they can represent multi-sentence inputs, abstract away from complex terminology, and model long-distance relationships between co-referring tokens. AMRs thus improve the ability of pre-trained language models to reason about high-complexity texts. Our experiments show that we can improve performance on 6 low-resource health NLP tasks by augmenting text embeddings with semantic graph embeddings. Our approach is task-agnostic and easy to merge into any standard text classification pipeline. We experimentally validate that AMRs are useful in the modeling of complex texts by analyzing performance through the lens of two textual complexity measures: the Flesch-Kincaid Reading Level and Syntactic Complexity. Our error analysis shows that AMR-infused language models perform better on complex texts and generally show less predictive variance in the presence of changing complexity.
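The augmentation step described above (text embeddings fused with semantic graph embeddings) reduces to a concatenation before the classifier head. A minimal sketch with placeholder tensors; the dimensions and the FusedClassifier name are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Fused classifier: concatenate a pooled sentence embedding from a pre-trained
# language model with a graph embedding of the sentence's AMR parse.
class FusedClassifier(nn.Module):
    def __init__(self, text_dim=768, graph_dim=128, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + graph_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )
    def forward(self, text_emb, graph_emb):
        return self.head(torch.cat([text_emb, graph_emb], dim=-1))

# Usage with placeholder embeddings (in practice: text_emb from a PLM such as
# BERT, graph_emb from any AMR graph encoder, e.g. a GNN over the parse).
clf = FusedClassifier()
text_emb = torch.randn(4, 768)   # batch of 4 pooled sentence embeddings
graph_emb = torch.randn(4, 128)  # matching AMR graph embeddings
logits = clf(text_emb, graph_emb)
```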
CausalDFQ: Causality Guided Data-free Network Quantization ; Model quantization, which aims to compress deep neural networks and accelerate inference, has greatly facilitated the deployment of cumbersome models on mobile and edge devices. Quantization methods from prior works commonly assume that training data are available. In practice, however, this assumption cannot always be fulfilled due to privacy and security concerns, rendering these methods inapplicable in real-life situations. Thus, data-free network quantization has recently received significant attention in neural network compression. Causal reasoning provides an intuitive way to model causal relationships and eliminate data-driven correlations, making causality an essential component of analyzing data-free problems. However, causal formulations of data-free quantization are inadequate in the literature. To bridge this gap, we construct a causal graph to model the data generation and the discrepancy reduction between the pre-trained and quantized models. Inspired by this causal understanding, we propose the Causality-guided Data-free Network Quantization method, CausalDFQ, to eliminate the reliance on data by approaching an equilibrium of causality-driven intervened distributions. Specifically, we design a content-style-decoupled generator, synthesizing images conditioned on relevant and irrelevant factors; we then propose a discrepancy reduction loss to align the intervened distributions of the pre-trained and quantized models. It is worth noting that our work is the first attempt to introduce causality to the data-free quantization problem. Extensive experiments demonstrate the efficacy of CausalDFQ. The code is available at https://github.com/42Shawn/CausalDFQ.
On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors ; Recently, there has been a growing focus and interest in applying machine learning (ML) to the field of cybersecurity, particularly in malware detection and prevention. Several research works on malware analysis have been proposed, offering promising results for both academic and practical applications. In these works, the use of Generative Adversarial Networks (GANs) or Reinforcement Learning (RL) can aid malware creators in crafting metamorphic malware that evades antivirus software. In this study, we propose a mutation system to counteract ensemble learning-based detectors by combining GANs and an RL model, overcoming the limitations of the MalGAN model. Our proposed FeaGAN model is built upon MalGAN by incorporating an RL model called the Deep Q-network anti-malware Engines Attacking Framework (DQEAF). The RL model addresses three key challenges in performing adversarial attacks on Windows Portable Executable malware: format preservation, executability preservation, and maliciousness preservation. In the FeaGAN model, ensemble learning is utilized to enhance the malware detector's evasion ability with the generated adversarial patterns. The experimental results demonstrate that 100% of the selected mutant samples preserve the format of executable files, while certain successes in both executability preservation and maliciousness preservation are achieved, reaching a stable success rate.
Aligning Large Multimodal Models with Factually Augmented RLHF ; Large Multimodal Models (LMMs) are built across modalities, and misalignment between two modalities can result in hallucination, i.e., generating textual outputs that are not grounded in the multimodal information in context. To address the multimodal misalignment issue, we adapt Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF, which augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, alleviating the reward hacking phenomenon in RLHF and further improving performance. We also enhance the GPT-4-generated training data for vision instruction tuning with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark, MMHAL-BENCH, with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset, reaching 94% of the performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement of 60% on MMHAL-BENCH over other baselines. We open-source our code, model, and data at https://llava-rlhf.github.io.
Advances in Kidney Biopsy Structural Assessment through Dense Instance Segmentation ; The kidney biopsy is the gold standard for the diagnosis of kidney diseases. Lesion scores made by expert renal pathologists are semi-quantitative and suffer from high inter-observer variability. Automatically obtaining statistics per segmented anatomical object can therefore bring significant benefits in reducing labor and this inter-observer variability. Instance segmentation for a biopsy, however, has been a challenging problem due to (a) the on-average large number (around 300 to 1000) of densely touching anatomical structures, (b) multiple classes (at least 3), and (c) varying sizes and shapes. The currently used instance segmentation models cannot simultaneously deal with these challenges in an efficient yet generic manner. In this paper, we propose the first anchor-free instance segmentation model that combines diffusion models, transformer modules, and R-CNNs (regional convolutional neural networks). Our model is trained on just one NVIDIA GeForce RTX 3090 GPU, yet can efficiently recognize more than 500 objects across 3 common anatomical object classes in renal biopsies, i.e., glomeruli, tubuli, and arteries. Our data set consisted of 303 patches extracted from 148 Jones' silver-stained renal whole slide images (WSIs), where 249 patches were used for training and 54 patches for evaluation. In addition, without adjustment or retraining, the model can directly transfer its domain to generate decent instance segmentation results from PAS-stained WSIs. Importantly, it outperforms other baseline models and reaches an AP of 51.7 in detection as the new state-of-the-art.
Geometry of generalized higher order fields and applications to classical linear electrodynamics ; Motivated by obtaining a consistent mathematical description of the radiation reaction of point charged particles in linear classical electrodynamics, a theory of generalized higher order tensors and differential forms is introduced. The generalization of some fundamental notions of differential geometry and the theory of differential forms is presented. In particular, the cohomology and integration theories for generalized higher order forms are developed, including the Cartan calculus, a generalization of de Rham cohomology, and a version of Thom's isomorphism theorem. We consider in detail a special type of generalized higher order tensors associated with bounded maximal acceleration and use it as a model of spacetime. A generalization of electrodynamic theory with higher order fields is introduced. We show that, combining the generalized higher order fields with maximal acceleration geometry, the evolution of a point charged particle interacting with the generalized higher order fields can be described by solutions of an implicit second order ordinary differential equation. In flat space such an equation is Lorentz invariant, does not have pre-accelerated solutions of Dirac's type or runaway solutions, and is compatible with Newton's first law of dynamics and with the covariant Larmor power radiation law. A generalization of the Maxwell-Lorentz theory is also introduced. The theory is linear in the field sector and reduces to the standard Maxwell-Lorentz electrodynamics when the maximal acceleration is infinite. Finally, we discuss the assumptions of our framework in addition to some predictions of the theory.
On the polynomial identities of the algebra $M_{1,1}(E)$ ; Verbally prime algebras are important in PI theory. They were described by Kemer over a field $K$ of characteristic zero: $0$ and $K\langle T\rangle$ (the trivial ones), $M_n(K)$, $M_n(E)$, $M_{a,b}(E)$. Here $K\langle T\rangle$ is the free associative algebra of infinite rank with free generators $T$, $E$ denotes the infinite dimensional Grassmann algebra over $K$, and $M_n(K)$ and $M_n(E)$ are the $n\times n$ matrices over $K$ and over $E$, respectively. The algebras $M_{a,b}(E)$ are subalgebras of $M_{a+b}(E)$; see their definition below. The generic (also called relatively free) algebras of these algebras have been studied extensively. Procesi described the generic algebra of $M_n(K)$ and many of its properties. Models for the generic algebras of $M_n(E)$ and $M_{a,b}(E)$ are also known, but their structure remains quite unclear. In this paper we study the generic algebra of $M_{1,1}(E)$ in two generators, over a field of characteristic 0. In an earlier paper we proved that its centre is a direct sum of the field and a nilpotent ideal of the generic algebra, and we gave a detailed description of this centre. Those results were obtained assuming the base field infinite and of characteristic different from 2. In this paper we study the polynomial identities satisfied by this generic algebra. We exhibit a basis of its polynomial identities. It turns out that this algebra is PI equivalent to a 5-dimensional algebra of certain upper triangular matrices. The identities of the latter algebra have been studied; they were described by Gordienko. As an application of our results we describe the subvarieties of the variety of unitary algebras generated by the generic algebra in two generators of $M_{1,1}(E)$. We also describe the polynomial identities in two variables of the algebra $M_{1,1}(E)$.
Multi-pseudo Regularized Label for Generated Data in Person Re-Identification ; Sufficient training data are normally required to train deep learning models. However, due to the expensive manual process of labelling a large number of images, the amount of available training data is always limited. To produce more data for training a deep network, a Generative Adversarial Network (GAN) can be used to generate artificial sample data. However, the generated data usually do not have annotation labels. To solve this problem, in this paper we propose a virtual label called Multi-pseudo Regularized Label (MpRL) and assign it to the generated data. With MpRL, the generated data are used as a supplement to the real training data to train a deep neural network in a semi-supervised learning fashion. To build the corresponding relationship between the real data and the generated data, MpRL assigns each generated datum a proper virtual label which reflects the likelihood of its affiliation to the predefined training classes in the real data domain. Unlike a traditional label, which is usually a single integer, the virtual label proposed in this work is a set of weight-based values, each of which is a number in $[0,1]$ (called a multi-pseudo label) reflecting the degree of relation between each generated datum and every predefined class of real data. A comprehensive evaluation is carried out by adopting two state-of-the-art convolutional neural networks (CNNs) in our experiments to verify the effectiveness of MpRL. Experiments demonstrate that by assigning MpRL to generated data, we can further improve person re-ID performance on five re-ID datasets, i.e., Market-1501, DukeMTMC-reID, CUHK03, VIPeR, and CUHK01. The proposed method obtains 6.29%, 6.30%, 5.58%, 5.84%, and 3.48% improvements in rank-1 accuracy over a strong CNN baseline on the five datasets respectively, and outperforms state-of-the-art methods.
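A minimal sketch of training with a weight-based virtual label: here the multi-pseudo label is instantiated as the softmax prediction of a baseline classifier on each generated image, which is one plausible reading of "degree of relation to every predefined class"; the paper's exact label construction may differ.

```python
import torch
import torch.nn.functional as F

def multi_pseudo_labels(classifier, generated_images):
    """Assign each generated image a weight vector over the real classes."""
    with torch.no_grad():
        logits = classifier(generated_images)
    return F.softmax(logits, dim=-1)  # each row: weights in [0,1], sum to 1

def semi_supervised_loss(model, x_real, y_real, x_gen, y_virtual):
    # Real data: standard cross-entropy with integer labels.
    loss_real = F.cross_entropy(model(x_real), y_real)
    # Generated data: cross-entropy against the soft multi-pseudo label.
    log_p = F.log_softmax(model(x_gen), dim=-1)
    loss_gen = -(y_virtual * log_p).sum(dim=-1).mean()
    return loss_real + loss_gen
```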
Meta-Generating Deep Attentive Metric for Few-shot Classification ; Learning to generate a task-aware base learner has proven a promising direction for dealing with the few-shot learning (FSL) problem. Existing methods mainly focus on generating an embedding model used with a fixed metric (e.g., cosine distance) for nearest-neighbour classification, or on directly generating a linear classifier. However, due to the limited discriminative capacity of such a simple metric or classifier, these methods fail to generalize to challenging cases appropriately. To mitigate this problem, we present a novel deep metric meta-generation method that takes an orthogonal direction, i.e., learning to adaptively generate a specific metric for a new FSL task based on the task description (e.g., a few labelled samples). In this study, we structure the metric using a three-layer deep attentive network that is flexible enough to produce a discriminative metric for each task. Moreover, unlike existing methods that utilize a uni-modal weight distribution conditioned on labelled samples for network generation, the proposed meta-learner establishes a multi-modal weight distribution conditioned on cross-class sample pairs using a tailored variational autoencoder, which can separately capture the specific inter-class discrepancy statistics for each class and jointly embed the statistics of all classes into metric generation. By doing this, the generated metric can be appropriately adapted to a new FSL task with pleasing generalization performance. To demonstrate this, we test the proposed method on four benchmark FSL datasets and obtain surprisingly obvious performance improvements over state-of-the-art competitors, especially in challenging cases, e.g., improving the accuracy from 26.14% to 46.69% in the 20-way 1-shot task on miniImageNet, and from 45.2% to 68.72% in the 5-way 1-shot task on FC100. Code is available at https://github.com/NWPUZhoufei/DAM.
Graph-Regularized Manifold-Aware Conditional Wasserstein GAN for Brain Functional Connectivity Generation ; Common measures of brain functional connectivity (FC), including covariance and correlation matrices, are semi-positive definite (SPD) matrices residing on a cone-shaped Riemannian manifold. Despite its remarkable success for Euclidean-valued data generation, the use of standard generative adversarial networks (GANs) to generate manifold-valued FC data neglects its inherent SPD structure and hence the inter-relatedness of edges in real FC. We propose a novel graph-regularized manifold-aware conditional Wasserstein GAN (GR-SPD-GAN) for FC data generation on the SPD manifold that can preserve the global FC structure. Specifically, we optimize a generalized Wasserstein distance between the real and generated SPD data under adversarial training, conditioned on the class labels. The resulting generator can synthesize new SPD-valued FC matrices associated with different classes of brain networks, e.g., brain disorder or healthy control. Furthermore, we introduce additional population graph-based regularization terms on both the SPD manifold and its tangent space to encourage the generator to respect the inter-subject similarity of FC patterns in the real data. This also helps avoid mode collapse and produces more stable GAN training. Evaluated on resting-state functional magnetic resonance imaging (fMRI) data of major depressive disorder (MDD), qualitative and quantitative results show that the proposed GR-SPD-GAN clearly outperforms several state-of-the-art GANs in generating more realistic fMRI-based FC samples. When applied to FC data augmentation for MDD identification, classification models trained on augmented data generated by our approach achieved the largest margin of improvement in classification accuracy among the competing GANs over baselines without data augmentation.
How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a Self-Paced Learning Environment ; As Large Language Models (LLMs) gain in popularity, it is important to understand how novice programmers use them. We present a thematic analysis of 33 learners, aged 10-17, independently learning Python through 45 code-authoring tasks using Codex, an LLM-based code generator. We explore several questions related to how learners used these code generators and provide an analysis of the properties of the written prompts and the generated code. Specifically, we explore (A) the context in which learners use Codex, (B) what learners are asking from Codex, (C) properties of their prompts in terms of relation to the task description, language, and clarity, and prompt crafting patterns, (D) the correctness, complexity, and accuracy of the AI-generated code, and (E) how learners utilize the AI-generated code in terms of placement, verification, and manual modifications. Furthermore, our analysis reveals four distinct coding approaches when writing code with an AI code generator: AI Single Prompt, where learners prompted Codex once to generate the entire solution to a task; AI Step-by-Step, where learners divided the problem into parts and used Codex to generate each part; Hybrid, where learners wrote some of the code themselves and used Codex to generate the rest; and Manual coding, where learners wrote the code themselves. The AI Single Prompt approach resulted in the highest correctness scores on code-authoring tasks, but the lowest correctness scores on subsequent code-modification tasks during training. Our results provide initial insight into how novice learners use AI code generators and the challenges and opportunities associated with integrating them into self-paced learning environments. We conclude with various signs of over-reliance and self-regulation, as well as opportunities for curriculum and tool development.
On the Formulation of the Generic Supersymmetric Standard Model (or Supersymmetry without R-parity) ; The generic supersymmetric version of the Standard Model would have the minimal list of superfields incorporating the Standard Model particles, and a Lagrangian dictated by the Standard Model gauge symmetries. To be phenomenologically viable, soft supersymmetry breaking terms have to be included. In the most popular version of the supersymmetric Standard Model, an ad hoc discrete symmetry, called R-parity, is added in by hand. While there have been many kinds of R-parity violation studies in the literature, the complete version of supersymmetry without R-parity is not popularly appreciated. In this article, we present a pedagogical review of the formulation of this generic supersymmetric Standard Model and give a detailed discussion of the basic conceptual issues involved. Unfortunately, there are quite a few confusing, or even plainly wrong, statements on these issues within the literature of R-parity violations. We aim at clarifying these issues here. We will first discuss our formulation, which readers are urged to read without bias from previously acquired perspectives on the topic. Based on the formulation, we will then address the various issues. In relation to phenomenology, our review here will not go beyond tree-level mass matrices, but we will give a careful discussion of the mass matrices of all the matter fields involved. Useful expressions for perturbative diagonalizations of the mass matrices in the phenomenologically interesting limit corresponding to small neutrino masses are derived. All these expressions are given in the fully generic setting, with information on complex phases of parameters retained. Such expressions have been shown to be useful in the analyses of various phenomenological features.
Dynamical Mass Generations and Collective Excitations in the Supersymmetric Nambu-Jona-Lasinio Model and a Gauge Theory with Left-Right-Asymmetric Majorana Mass Terms ; The structure of the effective potential surface of the Nambu-Jona-Lasinio (NJL) model with right-left asymmetric Majorana mass terms (corresponding to the single-flavor type-II seesaw situation of neutrinos) is investigated. After the dynamical generation of a Dirac mass, two collective modes appear, similar to the case of the ordinary NJL model, and the phase mode (the phason), which corresponds to the majoron or the pion at vanishing Majorana mass parameters, has an excitation mass. The mechanism of generation of the phason as a pseudo-Nambu-Goldstone boson is examined in a mathematical manner, summarized into a theorem we call the generalized Nambu-Goldstone theorem. The mass of the phason is also evaluated in a supersymmetric version of the NJL-type model, where it takes the order of the axion mass commonly accepted today. An $SU(2)_c$ gauge model is constructed in the context of the neutrino seesaw mechanism, and the Schwinger-Dyson equation of the dynamical mass functions is examined. Several physical implications, such as the decay modes of the phason and a nonlinear sigma model for the phason, are given. It is proposed that the method and results of this paper can be applied to an understanding of the origin of the Kobayashi-Maskawa matrix.
Heavy-flavour tagging and the supersymmetry reach of the CERN Large Hadron Collider ; The branching fraction for the decays of gluinos to third generation quarks is expected to be enhanced in classes of supersymmetric models where either third generation squarks are lighter than other squarks, or in mixed-higgsino dark matter models constructed to be in concordance with the measured density of cold dark matter. In such scenarios, gluino production events at the CERN Large Hadron Collider should be rich in top and bottom quark jets. Requiring $b$-jets in addition to missing transverse energy should, therefore, enhance the supersymmetry signal relative to Standard Model backgrounds from $V+$jet, $VV$ and QCD production ($V = W, Z$). We quantify the increase in the supersymmetry reach of the LHC from $b$-tagging in a variety of well-motivated models of supersymmetry. We also explore "top-tagging" at the LHC. We find that while the efficiency for this turns out to be too low to give an increase in reach beyond that obtained via $b$-tagging, top-tagging can indeed provide a confirmatory signal if gluinos are not too heavy. Finally, we explore the prospects for detecting the direct production of third generation squarks in models with an inverted squark mass hierarchy. This is signalled by $b$-jets plus missing transverse energy events harder than in the Standard Model, but softer than those from the production of gluinos and heavier squarks. We find that while these events can be readily separated from the SM background for third generation squark masses of 300-500 GeV, the contamination from the much heavier gluinos and squarks remains formidable if these are also accessible.
Storm fronts over galaxy discs: Models of how waves generate extraplanar gas and its anomalous kinematics ; The existence of partially ionized, diffuse gas and dust clouds at kiloparsec-scale distances above the central planes of edge-on galaxy discs was an unexpected discovery about 20 yrs ago. Subsequent observations showed that this EDIG (extended or extraplanar diffuse interstellar gas) has rotation velocities approximately 10-20% lower than those in the central plane, which have been hard to account for. Here we present results of hydrodynamic models, with radiative cooling and heating from star formation. We find that in models with star formation generated stochastically across the disc, an extraplanar gas layer is generated as long as the star formation is sufficiently strong. However, this gas rotates at nearly the same speed as the midplane gas. We then studied a range of models with imposed spiral or bar waves in the disc. EDIG layers were also generated in these models, but primarily over the wave regions, not over the entire disc. Because of this partial coverage, the EDIG clouds move radially as well as vertically, with the result that the observed kinematic anomalies are reproduced. The implication is that the kinematic anomalies are the result of three-dimensional motions when the cylindrical symmetry of the disc is broken. Thus, the kinematic anomalies are the result of bars or strong waves, and more face-on galaxies with such waves should have an asymmetric EDIG component. The models also indicate that the EDIG can contain a significant fraction of cool gas, and that some star formation can be triggered at considerable heights above the disc midplane. We expect all of these effects to be more prominent in young, forming discs, to play a role in rapidly smoothing disc asymmetries, and in working to self-regulate disc structure.
Accelerating dark energy models with anisotropic fluid in Bianchi type-VI$_0$ spacetime ; Motivated by the increasing evidence for the need of a geometry that resembles Bianchi morphology to explain the observed anisotropy in the WMAP data, we discuss some features of Bianchi type-VI$_0$ universes in the presence of a fluid that wields an anisotropic equation of state (EoS) parameter in general relativity. We present two accelerating dark energy (DE) models with an anisotropic fluid in Bianchi type-VI$_0$ spacetime. To obtain a deterministic solution we choose the scale factor $a(t) = \sqrt{t^n e^t}$, which yields a time-dependent deceleration parameter (DP), representing a class of models which generate a transition of the universe from an early decelerating phase to the recent accelerating phase. Under suitable conditions, the anisotropic models approach an isotropic scenario. The EoS parameter for dark energy, $\omega$, is found to be time-dependent, and its range for the derived models is in good agreement with recent observations: the SNe Ia data (Knop et al. 2003), SNe Ia data with CMBR anisotropy and galaxy clustering statistics (Tegmark et al. 2004), and the latest combination of cosmological datasets coming from CMB anisotropies, luminosity distances of high-redshift type Ia supernovae, and galaxy clustering (Hinshaw et al. 2009; Komatsu et al. 2009). For different values of $n$, we can generate a class of physically viable DE models. The cosmological constant $\Lambda$ is found to be a positive decreasing function of time, approaching a small positive value at late time (i.e., the present epoch), which is corroborated by results from recent type Ia supernova observations. We also observe that our solutions are stable. The physical and geometric aspects of both models are discussed in detail.
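As a worked check of the reconstructed ansatz (my derivation, not text from the paper), the time-dependent deceleration parameter follows in two lines and exhibits the claimed deceleration-to-acceleration transition:

```latex
\begin{align}
  a(t) &= \sqrt{t^{\,n} e^{t}}, &
  H = \frac{\dot a}{a} &= \frac{n+t}{2t}, &
  q = -\frac{a\ddot a}{\dot a^{2}} = -1 - \frac{\dot H}{H^{2}}
    = -1 + \frac{2n}{(n+t)^{2}}.
\end{align}
% Early times (t << n): q -> 2/n - 1, positive for 0 < n < 2 (deceleration);
% late times: q -> -1 (acceleration).
```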
Thermodynamic Bethe ansatz for non-equilibrium steady states: exact energy current and fluctuations in integrable QFT ; We evaluate the exact energy current and scaled cumulant generating function (related to the large-deviation function) in non-equilibrium steady states with energy flow, in any integrable model of relativistic quantum field theory (IQFT) with diagonal scattering. Our derivations are based on various recent results of D. Bernard and B. Doyon. The steady states are built by connecting homogeneously two infinite halves of the system thermalized at different temperatures $T_l$, $T_r$, and waiting for a long time. We evaluate the current $J(T_l, T_r)$ using the exact QFT density matrix describing these non-equilibrium steady states and using Al. B. Zamolodchikov's method of the thermodynamic Bethe ansatz (TBA). The scaled cumulant generating function is obtained from the extended fluctuation relations which hold in integrable models. We verify our formula, in particular, by showing that the conformal field theory (CFT) result is obtained in the high-temperature limit. We analyze numerically our non-equilibrium steady-state TBA equations for three models: the sinh-Gordon model, the roaming trajectories model, and the sine-Gordon model at a particular reflectionless point. Based on the numerics, we conjecture that an infinite family of non-equilibrium $c$-functions, associated to the scaled cumulants, can be defined, which we interpret physically. We study the full scaled distribution function and find that it can be described by a set of independent Poisson processes. Finally, we show that the additivity property of the current, which is known to hold in CFT and was proposed to hold more generally, does not hold in general IQFT; that is, $J(T_l, T_r)$ is not of the form $f(T_l) - f(T_r)$.
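For reference, the CFT benchmark recovered in the high-temperature limit is the universal current of Bernard and Doyon (a known result; units $\hbar = k_B = 1$, central charge $c$):

```latex
\begin{equation}
  J_{\mathrm{CFT}}(T_l, T_r) \;=\; \frac{c\,\pi}{12}\left(T_l^{2} - T_r^{2}\right),
\end{equation}
% which is additive, J = f(T_l) - f(T_r) with f(T) = c*pi*T^2/12;
% precisely the property shown to fail in general IQFT.
```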
Effects of anisotropy on gravitational infall in galaxy clusters using an exact general relativistic model ; We study the effects and implications of anisotropies at the scale of galaxy clusters by building an exact general relativistic model of a cluster using the inhomogeneous and anisotropic Szekeres metric. The model is built from a modified Navarro-Frenk-White (NFW) density profile. We compare this to a corresponding spherically symmetric structure in the Lemaître-Tolman (LT) model and quantify the impact of introducing varying levels of anisotropy. We examine two physical measures of gravitational infall: the growth rate of density and the velocity of the source dust in the model. We introduce a generalization of the LT dust velocity profile for the Szekeres metric and demonstrate its consistency with the growth rate of density. We find that the growth rate of density in one substructure increases by 0.5%, 1.5%, and 3.75% for 5%, 10%, and 15% levels of introduced anisotropy, which is measured as the fractional displaced mass relative to the spherically symmetric case. The infall velocity of the dust is found to increase by 2.5, 10, and 20 km/s (0.5%, 2%, and 4.5%), respectively, for the same three levels of anisotropy. This response to the anisotropy in a structure is found to be strongly nonlinear with respect to the strength of the anisotropy. These relative velocities correspond to an equivalent increase in the total mass of the spherically symmetric structure of 1%, 3.8%, and 8.4%, indicating that not accounting for the presence of anisotropic mass distributions in cluster models can strongly bias the determination of physical properties like the total mass.
Sector Models: A Toolkit for Teaching General Relativity. Part 1: Curved Spaces and Spacetimes ; Teaching the general theory of relativity to high school or undergraduate students must be based on an approach that is conceptual rather than mathematical. In this paper we present such an approach that requires no more than elementary mathematics. The central idea of this introduction to general relativity is the use of so-called sector models. Sector models describe curved spaces the Regge calculus way, by subdivision into blocks with Euclidean geometry. This procedure is similar to the approximation of a curved surface by flat triangles. We outline a workshop for high school and undergraduate students that introduces the notion of curved space by means of sector models of black holes. We further describe the extension to sector models of curved spacetimes. The spacetime models are suitable for learners with a basic knowledge of special relativity. For online teaching materials, see http://www.spacetimetravel.org (German-language materials: http://www.tempolimit-lichtgeschwindigkeit.de).
Rossby and Drift Wave Turbulence and Zonal Flows: the Charney-Hasegawa-Mima model and its extensions ; A detailed study of the Charney-Hasegawa-Mima model and its extensions is presented. These simple nonlinear partial differential equations, suggested for both Rossby waves in the atmosphere and drift waves in a magnetically-confined plasma, exhibit some remarkable and nontrivial properties, which in their qualitative form survive in more realistic and complicated models, and as such form a conceptual basis for understanding the turbulence and zonal flow dynamics in real plasma and geophysical systems. Two idealised scenarios of generation of zonal flows by small-scale turbulence are explored: a modulational instability and turbulent cascades. A detailed study of the generation of zonal flows by the modulational instability reveals that the dynamics of this zonal flow generation mechanism differ widely depending on the initial degree of nonlinearity. A numerical proof is provided for the extra invariant in Rossby and drift wave turbulence (zonostrophy), and the invariant cascades are shown to be characterised by the zonostrophy pushing the energy to the zonal scales. A small-scale instability forcing applied to the model demonstrates the well-known drift-wave/zonal-flow feedback loop, in which the turbulence that initially leads to the creation of zonal flows is completely suppressed and the zonal flows saturate. The turbulence spectrum is shown to diffuse in a manner which has been mathematically predicted. The insights gained from this simple model could provide a basis for equivalent studies in more sophisticated plasma and geophysical fluid dynamics models, in an effort to fully understand the zonal flow generation, turbulent transport suppression, and zonal flow saturation processes in both plasma and geophysical contexts, as well as in other wave and turbulence systems where order evolves from chaos.
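For concreteness, one standard form of the Charney-Hasegawa-Mima equation for the stream function $\psi(x,y,t)$ (a textbook statement, not a quotation from this paper):

```latex
\begin{equation}
  \frac{\partial}{\partial t}\left(\nabla^{2}\psi - F\psi\right)
  + \beta\,\frac{\partial \psi}{\partial x}
  + J\!\left(\psi,\nabla^{2}\psi\right) = 0,
  \qquad
  J(a,b) = \partial_x a\,\partial_y b - \partial_y a\,\partial_x b,
\end{equation}
% where F is the inverse square of the deformation radius (Rossby waves)
% or of the ion Larmor radius (drift waves), and beta measures the gradient
% of the Coriolis parameter or of the plasma density, respectively.
```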
Limit theorems of a two-phase quantum walk with one defect ; We treat a position-dependent quantum walk (QW) on the line, to which we assign two different time-evolution operators on the positive and negative parts, respectively. We call this model the two-phase QW; it has been expected to be a mathematical model of the topological insulator. We obtain the stationary and time-averaged limit measures related to localization for the two-phase QW with one defect. This is the first result on localization for the two-phase QW. The analytical methods are mainly based on the splitted generating function of the solution of the eigenvalue problem, and the generating function of the weights of the passages of the model. In this paper, we call these methods the splitted generating function method and the generating function method, respectively. The explicit expression of the stationary measure is asymmetric with respect to the origin, and depends on the initial state and the choice of the parameters of the model. On the other hand, the time-averaged limit measure is symmetric about the starting point, and its localization effect heavily depends on the initial state and the parameters of the model. Regardless of the strong influence of the initial state and the parameters, the time-averaged limit measure also suggests that localization can always be observed for our two-phase QW. Furthermore, our results imply that there is an interesting relation between the stationary and time-averaged limit measures when the parameters of the model have specific periodicities, which suggests that it may be possible to analyze localization of the two-phase QW with one defect from the stationary measure.
Two-Dimensional Core-Collapse Supernova Explosions Aided by General Relativity with Multidimensional Neutrino Transport ; We present results from simulations of core-collapse supernovae in FLASH using a newly-implemented multidimensional neutrino transport scheme and a newly-implemented general relativistic (GR) treatment of gravity. We use a two-moment method with an analytic closure (so-called M1 transport) for the neutrino transport. This transport is multi-energy, multi-species, velocity-dependent, and truly multidimensional, i.e., we do not assume the commonly used ray-by-ray approximation. Our GR gravity is implemented in our Newtonian hydrodynamics simulations via an effective relativistic potential (GREP) that closely reproduces the GR structure of neutron stars and has been shown to match GR simulations of core collapse quite well. In axisymmetry, we simulate core-collapse supernovae with four different progenitor models in both Newtonian and GR gravity. We find that the more compact proto-neutron star structure realized in simulations with GR gravity gives higher neutrino luminosities and higher neutrino energies. These differences in turn give higher neutrino heating rates (upwards of $\sim$20-30% over the corresponding Newtonian gravity simulations) that increase the efficacy of the neutrino mechanism. Three of the four models successfully explode in the simulations assuming GREP gravity. In our Newtonian gravity simulations, two of the four models explode, but at times much later than observed in our GR gravity simulations. Our results, both in Newtonian and GR gravity, compare well with several other studies in the literature. These results conclusively show that the approximation of Newtonian gravity for simulating the core-collapse supernova central engine is not acceptable. We also simulate four additional models in GR gravity to highlight the growing disparity between parameterized 1D models of core-collapse supernovae and the current generation of 2D models.
A $Z'$ Model for $b\to s\,\ell\bar\ell$ Flavour Anomalies ; We study the implications of flavour-changing neutral currents (FCNCs) in a model with the $SU(2)_l\times SU(2)_h\times U(1)_Y$ electroweak gauge symmetry for several anomalies appearing in $b\to s\,\ell\bar\ell$ induced $B$ decays in LHCb data. In this model, $SU(2)_l$ and $SU(2)_h$ govern the left-handed fermions in the first two generations and the third generation, respectively. The physical $Z$ and $Z'$ generate the $b\to s$ transition at tree level, leading to additional contributions to the $b\to s$ semileptonic operators $\mathcal{O}_{9,10}$. We find that although $B_s$-$\bar B_s$ mixing constrains the parameters severely, the model can produce values of $\mathcal{C}^{\rm NP}_{9,10}$ in the range determined by Descotes-Genon et al. [DescotesGenon2015uva] for this scenario to improve the global fit of observables in decays induced by the $b\to s\,\mu\bar\mu$ transition. The $Z'$ boson in this model also generates tree-level FCNCs for the leptonic interactions that can accommodate the experimental central value of $R_K = \mathcal{B}(B\to K\mu\bar\mu)/\mathcal{B}(B\to K e\bar e) = 0.75$. In this case, the model predicts sizeable branching ratios for $B\to K e\bar\tau$ and $B\to K\tau\bar e$, and an enhancement of $B\to K\tau\bar\tau$ with respect to its SM value.
A Numerical Relativity Waveform Surrogate Model for Generically Precessing Binary Black Hole Mergers ; A generic, non-eccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by 7 intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all 7 dimensions of the intrinsic non-eccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all $\ell \leq 4$ modes, begins about 20 orbits before merger, and can be evaluated in $\sim 50\,\mathrm{ms}$. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that would otherwise require millions of numerical relativity waveforms, such as parameter inference and tests of general relativity with GW observations.
Generative Model with Coordinate Metric Learning for Object Recognition Based on 3D Models ; Given a large amount of real photos for training, convolutional neural networks show excellent performance on object recognition tasks. However, the process of collecting data is tedious and the backgrounds are limited, which makes it hard to establish a perfect database. In this paper, our generative model, trained with synthetic images rendered from 3D models, reduces the workload of data collection and the limitation of conditions. Our structure is composed of two sub-networks: a semantic foreground object reconstruction network based on Bayesian inference, and a classification network based on a multi-triplet cost function, which avoids the overfitting problem on monotone surfaces and fully utilizes pose information by establishing a sphere-like distribution of descriptors in each category; this is helpful for recognition on regular photos according to the poses, lighting conditions, backgrounds, and category information of the rendered images. First, our conjugate structure, called a generative model with metric learning, uses additional foreground object channels generated from Bayesian rendering as the joint of the two sub-networks. The multi-triplet cost function based on poses is used for metric learning, which makes it possible to train a category classifier purely on synthetic data. Second, we design a coordinate training strategy, with the help of adaptive noise acting as corruption on the input images, to help both sub-networks benefit from each other and to avoid inharmonious parameter tuning due to the different convergence speeds of the two sub-networks. Our structure achieves state-of-the-art accuracy of over 50% on the ShapeNet database despite the data migration obstacle from synthetic images to real photos. This pipeline makes it applicable to do recognition on real images based only on 3D models.
Twisted Fracton Models in Three Dimensions ; We study novel three-dimensional gapped quantum phases of matter which support quasiparticles with restricted mobility, including immobile fracton excitations. So far, most existing fracton models may be instructively viewed as generalized Abelian lattice gauge theories. Here, by analogy with Dijkgraaf-Witten topological gauge theories, we discover a natural generalization of fracton models, obtained by twisting the gauge symmetries. Introducing generalized gauge transformation operators carrying an extra phase factor depending on local configurations, we construct a plethora of exactly solvable three-dimensional models, which we dub twisted fracton models. A key result of our approach is to demonstrate the existence of rich non-Abelian fracton phases of distinct varieties in a three-dimensional system with finite-range interactions. For an accurate characterization of these novel phases, the notion of being inextricably non-Abelian is introduced for fractons and quasiparticles with one-dimensional mobility, referring to their new behavior of displaying braiding statistics that is, and remains, non-Abelian regardless of which quasiparticles with higher mobility are added to or removed from them. We also analyze these models by embedding them on a three-torus and computing their ground state degeneracies, which exhibit a surprising and novel dependence on the system size in the non-Abelian fracton phases. Moreover, as an important advance in the study of fracton order, we develop a general mathematical framework which systematically captures the fusion and braiding properties of fractons and other quasiparticles with restricted mobility.
Laboratory Photoionization Fronts in Nitrogen Gas: A Numerical Feasibility and Parameter Study ; Photoionization fronts play a dominant role in many astrophysical situations, but remain difficult to achieve in a laboratory experiment. We present the results of a computational parameter study evaluating the feasibility of the photoionization experiment presented in the design paper by Drake, R. P., Hazak, G., Keiter, P. A., Davis, J. S., Patterson, C. R., Frank, A., Blackman, E. G., & Busquet, M. (2016, ApJ, 833, 249), in which a photoionization front is generated in a nitrogen medium. The nitrogen gas density and the Planckian radiation temperature of the x-ray source define each simulation. Simulations modeled experiments in which the x-ray flux is generated by a laser-heated gold foil, suitable for experiments using many kJ of laser energy, and experiments in which the flux is generated by a z-pinch device, which implodes a cylindrical shell of conducting wires. The models are run using CRASH, our block-adaptive-mesh code for multi-material radiation hydrodynamics. The radiative transfer model uses multi-group, flux-limited diffusion with thirty radiation groups. In addition, electron heat conduction is modeled using single-group, flux-limited diffusion. In theory, a photoionization front can exist only when the ratio of the electron recombination rate to the photoionization rate and the ratio of the electron impact ionization rate to the recombination rate both lie in certain ranges. These ratios are computed for several ionization states of nitrogen. Photoionization fronts are found to exist for laser-driven models with moderate nitrogen densities ($\sim 10^{21}\ \mathrm{cm}^{-3}$) and radiation temperatures above 90 eV. For z-pinch-driven models, lower nitrogen densities (below $10^{21}\ \mathrm{cm}^{-3}$) are preferred. We conclude that the proposed experiments are likely to generate photoionization fronts.
Hidden Integrality and Semirandom Robustness of SDP Relaxation for Sub-Gaussian Mixture Model ; We consider the problem of estimating discrete clustering structures under the sub-Gaussian mixture model. Our main results establish a hidden integrality property of a semidefinite programming (SDP) relaxation for this problem: while the optimal solution to the SDP is not integer-valued in general, its estimation error can be upper bounded by that of an idealized integer program. The error of the integer program, and hence that of the SDP, is further shown to decay exponentially in the signal-to-noise ratio. In addition, we show that the SDP relaxation is robust under the semirandom setting, in which an adversary can modify the data generated from the mixture model. In particular, we generalize the hidden integrality property to the semirandom model and thereby show that SDP achieves the optimal error bound in this setting. These results together highlight the global-to-local mechanism that drives the performance of the SDP relaxation. To the best of our knowledge, our result is the first exponentially decaying error bound for convex relaxations of mixture models. A corollary of our results shows that in certain regimes the SDP solutions are in fact integral and exact. More generally, our results establish sufficient conditions for the SDP to correctly recover the cluster memberships of a $1-\delta$ fraction of the points for any $\delta\in(0,1)$. As a special case, we show that under the $d$-dimensional Stochastic Ball Model, SDP achieves non-trivial (sometimes exact) recovery when the center separation is as small as $\sqrt{1/d}$, which improves upon previous exact recovery results that require constant separation.
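A minimal sketch of the kind of convex program involved, using the well-known Peng-Wei k-means SDP as a stand-in (the paper's exact relaxation may differ); under enough separation the relaxed matrix X becomes (near-)integral and its block structure reveals the clusters.

```python
import cvxpy as cp
import numpy as np

def cluster_sdp(points, k):
    """Solve the k-means SDP relaxation; returns the relaxed membership matrix."""
    n = len(points)
    A = points @ points.T            # inner-product affinity matrix
    X = cp.Variable((n, n), PSD=True)
    constraints = [
        X >= 0,                      # entrywise nonnegative
        cp.sum(X, axis=1) == 1,      # each row sums to one
        cp.trace(X) == k,            # trace equals the number of clusters
    ]
    prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints)
    prob.solve()
    return X.value

# Two well-separated Gaussian clusters in R^2.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (10, 2)),
                 rng.normal(3, 0.1, (10, 2))])
X = cluster_sdp(pts, k=2)
# Read memberships off the (near-)integral solution: points in the same
# cluster share large X entries with each other.
labels = (X[:, 0] < X[:, -1]).astype(int)
```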
Network Traffic Anomaly Detection Using Recurrent Neural Networks ; We show that a recurrent neural network is able to learn a model to represent sequences of communications between computers on a network and can be used to identify outlier network traffic. Defending computer networks is a challenging problem and is typically addressed by manually identifying known malicious actor behavior and then specifying rules to recognize such behavior in network communications. However, these rule-based approaches often generalize poorly and identify only those patterns that are already known to researchers. An alternative approach that does not rely on known malicious behavior patterns can potentially also detect previously unseen patterns. We tokenize and compress netflow into sequences of "words" that form "sentences" representative of a conversation between computers. These sentences are then used to generate a model that learns the semantic and syntactic grammar of the newly generated language. We use Long Short-Term Memory (LSTM) cell Recurrent Neural Networks (RNNs) to capture the complex relationships and nuances of this language. The language model is then used to predict the communications between two IPs, and the prediction error is used as a measurement of how typical or atypical the observed communications are. By learning a model that is specific to each network, yet generalized to typical computer-to-computer traffic within and outside the network, a language model is able to identify sequences of network activity that are outliers with respect to the model. We demonstrate positive unsupervised attack identification performance (AUC 0.84) on the ISCX IDS dataset, which contains seven days of network activity with normal traffic and four distinct attack patterns.
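A minimal sketch of the approach, assuming PyTorch and hypothetical names (FlowLM, anomaly_score): an LSTM language model over tokenized netflow whose next-token prediction loss serves as the anomaly score.

```python
import torch
import torch.nn as nn

class FlowLM(nn.Module):
    """LSTM language model over tokenized netflow 'words'."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)

def anomaly_score(model, tokens):
    """Mean next-token cross-entropy of a flow 'sentence'; higher = more atypical."""
    logits = model(tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )
    return loss.item()

model = FlowLM(vocab_size=1000)
seq = torch.randint(0, 1000, (1, 20))  # one tokenized conversation
score = anomaly_score(model, seq)      # threshold on this after training
```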
Modeling accretion disk emission with a generalized temperature profile and its effect on the AGN spectral energy distribution ; The broadband spectral energy distribution (SED) of Active Galactic Nuclei (AGN) is investigated for a well-selected sample composed of 23 Seyfert 1 galaxies observed simultaneously in the optical/UV and X-ray bands with the Neil Gehrels Swift Observatory. The optical to UV continuum spectra are modeled, for the first time, with emission from an accretion disk with a generalized radial temperature profile, in order to account for the intrinsic spectra, which are found to be generally redder than the model prediction of the standard Shakura-Sunyaev disk (SSD; $F_\nu \propto \nu^{1/3}$). The power-law indices of the radial temperature profile ($T_{\rm eff}(R) \propto R^{-p}$, where $R$ is the radius of the accretion disk) are inferred to be $p = 0.5$-$0.75$ (a median of 0.63), deviating from the canonical $p = 0.75$ of the SSD model as widely adopted in previous studies. A marginal correlation of a flatter radial temperature profile (a smaller $p$ value) with increasing Eddington ratio is suggested. Such a model produces a generally lower peak of the accretion disk emission, and thus a smaller bolometric luminosity in some of the AGN (particularly those with high Eddington ratios) than that based on the SSD model, by a factor of several. The broadband SED, the bolometric correction factors, and their dependence on some of the AGN parameters are revisited. We suggest that such non-standard SSD disks may operate in AGN and are at least partly responsible for the reddened optical/UV spectra as observed. One possible explanation for these flattened temperature profiles is mass loss in the form of disk winds/outflows.
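A minimal sketch of a multicolor-blackbody disk SED with the generalized profile $T_{\rm eff}(R)\propto R^{-p}$; the temperatures, radii, and normalization are arbitrary illustrative values, not fits from the paper. Setting $p=0.75$ recovers the standard SSD slope, while smaller $p$ gives a redder optical/UV continuum.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck_nu(nu, T):
    """Blackbody specific intensity B_nu(T)."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def disk_sed(nu, t_in=3e4, r_in=1.0, r_out=1e3, p=0.75, n_r=400):
    """Integrate annular blackbody emission over the disk (arbitrary flux units)."""
    r = np.logspace(np.log10(r_in), np.log10(r_out), n_r)
    T = t_in * (r / r_in) ** (-p)        # generalized temperature profile
    integrand = planck_nu(nu[:, None], T[None, :]) * 2 * np.pi * r[None, :]
    return np.trapz(integrand, r, axis=1)

nu = np.logspace(14, 16, 100)  # optical/UV frequencies [Hz]
f_ssd = disk_sed(nu, p=0.75)   # standard disk: F_nu ~ nu^(1/3) mid-range
f_red = disk_sed(nu, p=0.60)   # flatter profile: redder continuum, lower peak
```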
First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole ; We present measurements of the properties of the central radio source in M87 using Event Horizon Telescope data obtained during the 2017 campaign. We develop and fit geometric crescent models (asymmetric rings with interior brightness depressions) using two independent sampling algorithms that consider distinct representations of the visibility data. We show that the crescent family of models is statistically preferred over other comparably complex geometric models that we explore. We calibrate the geometric model parameters using general relativistic magnetohydrodynamic (GRMHD) models of the emission region and estimate physical properties of the source. We further fit images generated from GRMHD models directly to the data. We compare the derived emission region and black hole parameters from these analyses with those recovered from reconstructed images. There is a remarkable consistency among all methods and data sets. We find that >50% of the total flux at arcsecond scales comes from near the horizon, and that the emission is dramatically suppressed interior to this region by a factor >10, providing direct evidence of the predicted shadow of a black hole. Across all methods, we measure a crescent diameter of $42 \pm 3\,\mu$as and constrain its fractional width to be <0.5. Associating the crescent feature with the emission surrounding the black hole shadow, we infer an angular gravitational radius of $GM/Dc^2 = 3.8 \pm 0.4\,\mu$as. Folding in a distance measurement of $16.8^{+0.8}_{-0.7}$ Mpc gives a black hole mass of $M = (6.5 \pm 0.2|_{\rm stat} \pm 0.7|_{\rm sys}) \times 10^9\,M_\odot$. This measurement from lensed emission near the event horizon is consistent with the presence of a central Kerr black hole, as predicted by the general theory of relativity.
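The mass estimate follows directly from the definition of the angular gravitational radius, $\theta_g = GM/Dc^2$. A short numerical check of the quoted numbers (constants rounded):

```python
import numpy as np

G, C = 6.674e-11, 2.998e8       # SI
MSUN, PC = 1.989e30, 3.086e16   # kg, m

theta_g = 3.8e-6 / 3600 * np.pi / 180   # 3.8 micro-arcsec in radians
D = 16.8e6 * PC                          # distance to M87 in meters

M = theta_g * D * C**2 / G               # invert theta_g = G M / (D c^2)
print(f"M ~ {M / MSUN:.1e} Msun")        # ~6.5e9 Msun, matching the paper
```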
Power-Law Population Heterogeneity Governs Epidemic Waves ; We generalize the Susceptible-Infected-Removed (SIR) model for epidemics to take into account generic effects of heterogeneity in the degree of susceptibility to infection in the population. We introduce a single new parameter corresponding to a power-law exponent of the susceptibility distribution that characterizes the population heterogeneity. We show that our generalized model is as simple as the original model, which is contained as a limiting case. Because of this simplicity, numerical solutions can be generated easily and key properties of the epidemic wave can still be obtained exactly. In particular, we present exact expressions for the herd immunity level, the final size of the epidemic, as well as for the shape of the wave and for observables that can be quantified during an epidemic. We find that in strongly heterogeneous populations the epidemic reaches only a small fraction of the population. This implies that the herd immunity level can be much lower than in commonly used models with homogeneous populations. Using our model to analyze data for the SARS-CoV-2 epidemic in Germany shows that the reported time course is consistent with several scenarios characterized by different levels of immunity. These scenarios differ in population heterogeneity and in the time course of the infection rate, for example due to mitigation efforts or seasonality. Our analysis reveals that quantifying the effects of mitigation requires knowledge of the degree of heterogeneity in the population. Our work shows that key effects of population heterogeneity can be captured without increasing the complexity of the model. We show that information about population heterogeneity will be key to understanding how far an epidemic has progressed and what can be expected for its future course.
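One standard way susceptibility heterogeneity enters such models is as a nonlinear incidence term: for gamma-distributed susceptibility with shape parameter $\alpha$, the effective dynamics become $\dot S = -\beta I S^{1+1/\alpha}$, with the homogeneous SIR model recovered as $\alpha \to \infty$. The scipy sketch below integrates this illustrative variant, which captures the qualitative effect (a smaller final epidemic size for stronger heterogeneity) but is not necessarily the paper's exact power-law parametrization.

```python
import numpy as np
from scipy.integrate import solve_ivp

def heterogeneous_sir(t, y, beta=0.3, gamma=0.1, alpha=3.0):
    """SIR with susceptibility heterogeneity as a nonlinear incidence."""
    s, i, r = y
    new_infections = beta * i * s ** (1.0 + 1.0 / alpha)
    return [-new_infections, new_infections - gamma * i, gamma * i]

for alpha in (0.5, 3.0, 1e6):  # strong, moderate, ~no heterogeneity
    sol = solve_ivp(heterogeneous_sir, (0.0, 500.0), [0.999, 0.001, 0.0],
                    args=(0.3, 0.1, alpha), rtol=1e-8)
    print(f"alpha={alpha:g}: final size = {1.0 - sol.y[0, -1]:.3f}")
```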
A Classical Analogue to the Standard Model, Chapters 4–10: Particle generations and masses; curved spacetimes and gravitation; heavy weak bosons ; The $\mathbb{C}^{\wedge 18}$ analogue model contains counterparts to the particle spectrum and interactions of the Standard Model, and has only three tunable parameters. As the structure of this model is highly constrained, predictive relationships between constants may be obtained. In Chapters 4–6, the masses of the tau, the W and Z bosons, and a Higgs-like scalar boson are calculated as functions of $\alpha$, $m_e$, and $m_\mu$. They are shown to be 1.77686741(043) GeV/$c^2$, 80.3587(22) GeV/$c^2$, 91.1877(35) GeV/$c^2$, and 125.2501(49) GeV/$c^2$ respectively, with no free fitting parameters. All are within $0.1\,\sigma$ of the observed values of 1.77686(12) GeV/$c^2$, 80.360(16) GeV/$c^2$, 91.1876(21) GeV/$c^2$, and 125.25(17) GeV/$c^2$ respectively. In Chapter 7 the final ungauged freedom of the $\mathbb{C}^{\wedge 18}$ model is used to eliminate the right-handed weak interaction, while simultaneously introducing spacetime curvature and a gravitational interaction emulating general relativity. The value of Newton's constant is then calculated from $\alpha$, $m_e$, and $m_\mu$, yielding $G_N = 6.674282(30)\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$, which is in agreement with the observed value of $G_N = 6.67430(15)\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$ with tension less than $0.1\,\sigma_{\mathrm{exp}}$. This relentless consistency with experiment suggests the existence of a unifying relationship between lepton generations, gravitation, and the electroweak mass scale. In the Classical Analogue to the Standard Model this unification arises from an underlying construction from coloured preons, with the low-energy residuals of the preon binding interactions corresponding to the strong nuclear force.
Steady state entropy production rate for scalar Langevin field theories ; The entropy production rate (EPR) offers a quantitative measure of time-reversal symmetry breaking in nonequilibrium systems. It can be defined either at particle level or at the level of coarse-grained fields such as density; the EPR for the latter quantifies the extent to which these coarse-grained fields behave irreversibly. In this work, we first develop a general method to compute the EPR of scalar Langevin field theories with additive noise. This large class of theories includes active versions of Model A (non-conserved density dynamics) and Model B (conserved), and also models where both types of dynamics are simultaneously present, such as Model AB. Treating the scalar field $\phi$ and its time derivative $\dot\phi$ as the sole observables, we arrive at an expression for the EPR that is non-negative for every field configuration and is quadratic in the time-antisymmetric component of the dynamics. Our general expression is a function of the quasi-potential, which determines the full probability distribution for configurations, and is not generally calculable. To alleviate this difficulty, we present a small-noise expansion of the EPR, which only requires knowledge of the deterministic mean-field solution for the scalar field in steady state, which generally is calculable, at least numerically. We demonstrate this calculation for the case of Model AB. We then present a similar EPR calculation for Model AB with the conservative and non-conservative contributions to $\dot\phi = \dot\phi_{\rm A} + \dot\phi_{\rm B}$ viewed as separately observable quantities. The results are qualitatively different, confirming that the field-level EPR depends on the choice of coarse-grained information retained within the dynamical description.
Revisiting Analog Over-the-Air Machine Learning: The Blessing and Curse of Interference ; We study a distributed machine learning problem carried out by an edge server and multiple agents in a wireless network. The objective is to minimize a global function that is a sum of the agents' local loss functions, and the optimization is conducted by analog over-the-air model training. Specifically, each agent modulates its local gradient onto a set of waveforms and transmits to the edge server simultaneously. From the received analog signal the edge server extracts a noisy aggregated gradient, which is distorted by the channel fading and interference, and uses it to update the global model, feeding it back to all the agents for another round of local computing. Since the electromagnetic interference generally exhibits a heavy-tailed nature, we use the $\alpha$-stable distribution to model its statistics. In consequence, the global gradient has an infinite variance that hinders the use of conventional techniques for convergence analysis that rely on the existence of second-order moments. To circumvent this challenge, we take a new route to establish the analysis of the convergence rate, as well as the generalization error, of the algorithm. Our analyses reveal a two-sided effect of the interference on the overall training procedure. On the negative side, heavy-tail noise slows down the convergence rate of the model training: the heavier the tail in the distribution of interference, the slower the algorithm converges. On the positive side, heavy-tail noise has the potential to increase the generalization power of the trained model: the heavier the tail, the better the model generalizes. This perhaps counter-intuitive conclusion implies that the prevailing view that interference is only detrimental to the edge learning system is outdated, and we should seek new techniques that exploit, rather than simply mitigate, the interference for better machine learning in wireless networks.
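The heavy-tailed aggregation step can be simulated directly with scipy's symmetric $\alpha$-stable distribution: for $\alpha < 2$ the interference has infinite variance, while $\alpha = 2$ reduces to Gaussian noise. This is an illustrative sketch with made-up dimensions and scales, not the paper's system model.

```python
import numpy as np
from scipy.stats import levy_stable

def ota_aggregate(grads, alpha=1.8, scale=0.01, seed=None):
    """Analog over-the-air aggregation: waveform superposition sums the
    local gradients, and the receiver picks up heavy-tailed interference.

    grads: (n_agents, dim) array of local gradients.
    """
    rng = np.random.default_rng(seed)
    noise = levy_stable.rvs(alpha, 0.0, scale=scale,
                            size=grads.shape[1], random_state=rng)
    return grads.sum(axis=0) + noise

grads = 0.1 * np.random.randn(10, 5)
print(ota_aggregate(grads, alpha=1.8))   # occasional huge outliers
```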
Phenomenology of dark energy: exploring the space of theories with future redshift surveys ; We use the effective field theory of dark energy to explore the space of modified gravity models which are capable of driving the present cosmic acceleration. We identify five universal functions of cosmic time that are enough to describe a wide range of theories containing a single scalar degree of freedom in addition to the metric. The first function (the effective equation of state) uniquely controls the expansion history of the universe. The remaining four functions appear in the linear cosmological perturbation equations, but only three of them regulate the growth history of large scale structures. We propose a specific parameterization of such functions in terms of characteristic coefficients that serve as coordinates in the space of modified gravity theories and can be effectively constrained by the next generation of cosmological experiments. We address in full generality the problem of the soundness of the theory against ghost-like and gradient instabilities and show how the space of non-pathological models shrinks when a more negative equation of state parameter is considered. This analysis allows us to locate a large class of stable theories that violate the null energy condition (i.e. super-acceleration models) and to recover, as particular subsets, various models considered so far. Finally, under the assumption that the true underlying cosmological model is the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) scenario, and relying on the figure of merit of EUCLID-like observations, we demonstrate that the theoretical requirement of stability significantly narrows the empirical likelihood, increasing the discriminatory power of the data. We also find that the vast majority of these non-pathological theories generating the same expansion history as the $\Lambda$CDM model predict a different, lower, growth rate of cosmic structures.
Time-dependent level crossing models solvable in terms of the confluent Heun functions ; We discuss the level-crossing field configurations for which the quantum time-dependent two-state problem is solvable in terms of the confluent Heun functions. We show that these configurations belong to fifteen four-parametric families of models that generalize all the known three- and two-parametric families for which the problem is solvable in terms of the Gauss hypergeometric and the Kummer confluent hypergeometric functions. Analyzing the general case of variable Rabi frequency and frequency detuning, we mention that the most notable features of the models provided by the derived classes are due to the extra constant term in the detuning modulation function. Due to this term the classes suggest numerous symmetric or asymmetric chirped pulses and a variety of models with two crossings of the frequency resonance. The latter models are generated by both real and complex transformations of the independent variable. In general, the resulting detuning functions are asymmetric, the asymmetry being controlled by the parameters of the detuning modulation function. In some cases, however, the asymmetry may be additionally caused by the amplitude modulation function. We present an example of the latter possibility and additionally mention a constant-amplitude model with periodically repeated resonance crossings. Finally, we discuss the excitation of a two-level atom by a pulse of Lorentzian shape with a detuning providing one or two crossings of the resonance. Using a series expansion of the solution of the confluent Heun equation in terms of the Kummer hypergeometric functions, we derive particular closed-form solutions of the two-state problem for this field configuration. The particular sets of the involved parameters for which these solutions are obtained define curves in the 3D space of the involved parameters belonging to the complete return spectrum of the considered two-state quantum system.
Quantum geometry from higher gauge theory ; Higher gauge theories play a prominent role in the construction of 4d topological invariants and have long been proposed as a tool for 4d quantum gravity. The Yetter lattice model and its continuum counterpart, the BFCG theory, generalize BF theory to 2-gauge groups, and when specialized to 4d and the Poincaré 2-group they provide an exactly solvable topologically-flat version of 4d general relativity. The 2-Poincaré Yetter model was conjectured to be equivalent to a state sum model of quantum flat spacetime developed by Baratin and Freidel after work by Korepanov (KBF model). This conjecture was motivated by the origin of the KBF model in the theory of 2-representations of the Poincaré 2-group. Its proof, however, has remained elusive due to the lack of a generalized Peter-Weyl theorem for 2-groups. In this work we prove this conjecture. Our proof avoids the Peter-Weyl theorem and rather leverages the geometrical content of the Yetter model. Key for the proof is the introduction of a kinematical boundary Hilbert space on which 1- and 2-Lorentz invariance is imposed. Geometrically this allows the identification of quantum tetrad variables and of the associated quantum Levi-Civita connection. States in this Hilbert space are labelled by quantum numbers that match the 2-group representation labels. Our results open exciting opportunities for the construction of new representations of quantum geometries. Compared to loop quantum gravity, the higher gauge theory framework provides a quantum representation of the ADM-Regge initial data, including an identification of the intrinsic and extrinsic curvature. Furthermore, it leads to a version of the diffeomorphism and Hamiltonian constraints that acts on the vertices of the discretization, thus providing a prospect for a quantum realization of the hypersurface deformation algebra in 4d.
Can AI Generate Love Advice? Toward Neural Answer Generation for Non-Factoid Questions ; Deep learning methods that extract answers for non-factoid questions from QA sites are seen as critical since they can assist users in reaching their next decisions through conversations with AI systems. The current methods, however, have the following two problems: (1) They cannot understand the ambiguous use of words in the questions, as word usage can strongly depend on the context. As a result, the accuracies of their answer selections are not good enough. (2) The current methods can only select from among the answers held by QA sites and cannot generate new ones. Thus, they cannot answer the questions that are somewhat different from those stored in QA sites. Our solution, the Neural Answer Construction Model, tackles these problems as it: (1) Incorporates the biases of semantics behind questions into word embeddings while also computing them regardless of the semantics. As a result, it can extract answers that suit the contexts of words used in the question as well as following the common usage of words across semantics. This improves the accuracy of answer selection. (2) Uses biLSTM to compute the embeddings of questions as well as those of the sentences often used to form answers. It then simultaneously learns the optimum combination of those sentences as well as the closeness between the question and those sentences. As a result, our model can construct an answer that corresponds to the situation that underlies the question; it fills the gap between answer selection and generation and is the first model to move beyond the current simple answer selection model for non-factoid QAs. Evaluations using datasets created for love advice stored in the Japanese QA site, Oshiete goo, indicate that our model achieves 20% higher accuracy in answer creation than the strong baselines. Our model is practical and has already been applied to the love advice service in Oshiete goo.
Bayesian space-time gap filling for inference on extreme hotspots: an application to Red Sea surface temperatures ; We develop a method for probabilistic prediction of extreme value hotspots in a spatiotemporal framework, tailored to big datasets containing important gaps. In this setting, direct calculation of summaries from data, such as the minimum over a space-time domain, is not possible. To obtain predictive distributions for such cluster summaries, we propose a two-step approach. We first model marginal distributions with a focus on accurate modeling of the right tail, and then, after transforming the data to a standard Gaussian scale, we estimate a Gaussian space-time dependence model, defined locally in the time domain, for the space-time subregions where we want to predict. In the first step, we detrend the mean and standard deviation of the data and fit a spatially resolved generalized Pareto distribution to apply a correction of the upper tail. To ensure spatial smoothness of the estimated trends, we either pool data using nearest-neighbor techniques, or apply generalized additive regression modeling. To cope with the high space-time resolution of the data, the local Gaussian models use a Markov representation of the Matérn correlation function based on the stochastic partial differential equations (SPDE) approach. In the second step, they are fitted in a Bayesian framework through the integrated nested Laplace approximation implemented in R-INLA. Finally, posterior samples are generated to provide statistical inferences through Monte Carlo estimation. Motivated by the 2019 Extreme Value Analysis data challenge, we illustrate our approach to predict the distribution of local space-time minima in anomalies of Red Sea surface temperatures, using a gridded dataset (11,315 days, 16,703 pixels) with artificially generated gaps. In particular, we show the improved performance of our two-step approach over a purely Gaussian model without tail transformations.
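The first-step tail correction is a standard peaks-over-threshold fit: exceedances over a high empirical quantile are modeled with a generalized Pareto distribution. A minimal scipy sketch on synthetic data (the threshold choice and the spatial pooling are simplified away here):

```python
import numpy as np
from scipy.stats import genpareto

def fit_upper_tail(x, q=0.95):
    """Fit a generalized Pareto distribution to exceedances over the
    empirical q-quantile threshold."""
    u = np.quantile(x, q)
    exceedances = x[x > u] - u
    shape, _, scale = genpareto.fit(exceedances, floc=0.0)
    return u, shape, scale

rng = np.random.default_rng(0)
x = rng.standard_t(df=4, size=20_000)        # heavy-ish synthetic tail
u, xi, sigma = fit_upper_tail(x)
print(f"threshold={u:.2f}, shape={xi:.2f}, scale={sigma:.2f}")
```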
Modelling Hierarchical Structure between Dialogue Policy and Natural Language Generator with Option Framework for Task-oriented Dialogue System ; Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee their comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL); however, the bias in annotated system utterances remains as a bottleneck. Reinforcement learning (RL) deals with the problem through using non-differentiable evaluation metrics (e.g., the success rate) as rewards. Nonetheless, existing works with RL showed that the comprehensibility of generated system utterances could be corrupted when improving the performance on fulfilling user requests. In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO, where the latent dialogue act is applied to avoid designing specific dialogue act representations; (2) train HDNO via hierarchical reinforcement learning (HRL), as well as suggest asynchronous updates between dialogue policy and NLG during training to theoretically guarantee their convergence to a local maximizer; and (3) propose using a discriminator modelled with language models as an additional reward to further improve the comprehensibility. We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, datasets on multi-domain dialogues, in comparison with a word-level E2E model trained with RL, LaRL and HDSA, showing improvements on the performance evaluated by automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of latent dialogue acts to show the explainability of HDNO.
Homeostasis in Networks with Multiple Input Nodes and Robustness in Bacterial Chemotaxis ; A biological system achieves homeostasis when there is a regulated quantity that is maintained within a narrow range of values. Here we consider homeostasis as a phenomenon of network dynamics. In this context, we improve a general theory for the analysis of homeostasis in network dynamical systems with distinguished input and output nodes, called 'input-output networks'. The theory allows one to define 'homeostasis types' of a given network in a 'model-independent' fashion, in the sense that the classification depends on the network topology rather than on the specific model equations. Each 'homeostasis type' represents a possible mechanism for generating homeostasis and is associated with a suitable 'subnetwork motif' of the original network. Our contribution is an extension of the theory to the case of networks with multiple input nodes. To showcase our theory, we apply it to bacterial chemotaxis, a paradigm for homeostasis in biochemical systems. By considering a representative model of Escherichia coli chemotaxis, we verify that the corresponding abstract network has multiple input nodes, thus showing that our extension of the theory allows for the inclusion of an important class of models that were previously out of reach. Moreover, from our abstract point of view, the occurrence of homeostasis in the studied model is caused by a new mechanism, called input counterweight homeostasis. This new homeostasis mechanism was discovered in the course of our investigation and is generated by a balancing between the several input nodes of the network; therefore, it requires the existence of at least two input nodes to occur. Finally, the framework developed here allows one to formalize a notion of 'robustness' of homeostasis based on the concept of 'genericity' from the theory of dynamical systems. We discuss how this kind of robustness of homeostasis appears in the chemotaxis model.
A Generative Model to Synthesize EEG Data for Epileptic Seizure Prediction ; Prediction of seizures before they occur is vital for bringing normalcy to the lives of patients. Researchers have employed machine learning methods using hand-crafted features for seizure prediction. However, with ML methods it is complicated to select the best model or the best features. Deep learning methods are beneficial in the sense of automatic feature extraction. One of the roadblocks for accurate seizure prediction is the scarcity of epileptic seizure data. This paper addresses this problem by proposing a deep convolutional generative adversarial network (DCGAN) to generate synthetic EEG samples. We use two methods to validate the synthesized data, namely, a one-class SVM and a new proposal which we refer to as the convolutional epileptic seizure predictor (CESP). Another objective of our study is to evaluate the performance of well-known deep learning models (e.g., VGG16, VGG19, ResNet50, and Inception-v3) by training the models on augmented data using transfer learning, with an average time of 10 min between true prediction and seizure onset. Our results show that the CESP model achieves sensitivity of 78.11% and 88.21%, and FPR of 0.27/h and 0.14/h, for training on synthesized and testing on real Epilepsyecosystem and CHB-MIT datasets, respectively. The effective results of CESP trained on synthesized data show that the synthetic data captured the correlation between features and labels very well. We also show that employing the ideas of transfer learning and data augmentation in a patient-specific manner provides the highest accuracy, with sensitivity of 90.03% and 0.03 FPR/h, which was achieved using Inception-v3, and that augmenting data with samples generated from the DCGAN increased the prediction results of our CESP model and Inception-v3 by 4–5% as compared to state-of-the-art traditional augmentation techniques. Finally, we note that the prediction results of CESP achieved by using augmented data are better than chance level for both datasets.
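The generator side of such a DCGAN can be sketched in a few lines of PyTorch as a stack of transposed 1-D convolutions mapping latent noise to an EEG-like window. Channel counts, kernel sizes, and the 256-sample output length here are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EEGGenerator(nn.Module):
    """DCGAN-style generator: latent vector -> 1-channel EEG window."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 128, 16, stride=4),
            nn.BatchNorm1d(128), nn.ReLU(),
            nn.ConvTranspose1d(128, 64, 16, stride=4, padding=6),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.ConvTranspose1d(64, 1, 16, stride=4, padding=6),
            nn.Tanh(),  # EEG amplitudes scaled to [-1, 1]
        )

    def forward(self, z):        # z: (batch, latent_dim, 1)
        return self.net(z)

z = torch.randn(8, 100, 1)
print(EEGGenerator()(z).shape)   # torch.Size([8, 1, 256])
```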
A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison ; Pervasive computing promotes the installation of connected devices in our living spaces in order to provide services. Two major developments have gained significant momentum recently: an advanced use of edge resources and the integration of machine learning techniques for engineering applications. This evolution raises major challenges, in particular related to the appropriate distribution of computing elements along an edge-to-cloud continuum. In this context, Federated Learning has recently been proposed for distributed model training in the edge. The principle of this approach is to aggregate models learned on distributed clients in order to obtain a new, more general model. The resulting model is then redistributed to clients for further training. To date, the most popular federated learning algorithm uses coordinate-wise averaging of the model parameters for aggregation. However, it has been shown that this method is not adapted to heterogeneous environments where data is not identically and independently distributed (non-IID). This corresponds directly to some pervasive computing scenarios where the heterogeneity of devices and users challenges machine learning with the double objective of generalization and personalization. In this paper, we propose a novel aggregation algorithm, termed FedDist, which is able to modify its model architecture (here, a deep neural network) by identifying dissimilarities between specific neurons amongst the clients. This makes it possible to account for clients' specificity without impairing generalization. Furthermore, we define a complete method to evaluate federated learning in a realistic way, taking generalization and personalization into account. Using this method, FedDist is extensively tested and compared with three state-of-the-art federated learning algorithms on the pervasive domain of Human Activity Recognition with smartphones.
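For reference, the coordinate-wise averaging that FedDist departs from is the standard FedAvg aggregation, sketched below in plain numpy (names are illustrative). FedDist instead compares clients neuron by neuron and can adapt the architecture when client-specific neurons diverge.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: coordinate-wise weighted average of parameters.

    client_weights: one list of layer arrays per client, shapes matching.
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two toy clients, one layer each, with 100 and 300 local samples.
a = [np.array([1.0, 2.0])]
b = [np.array([3.0, 4.0])]
print(fedavg([a, b], [100, 300]))  # [array([2.5, 3.5])]
```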
Integrability and RG flow in 2d sigma models ; Motivated by the search for solvable string theories, we consider the problem of classifying the integrable bosonic 2d sigma-models. We include non-conformal sigma-models, which have historically been a good arena for discovering integrable models that were later generalized to Weyl-invariant ones. General sigma-models feature a quantum RG flow, given by a 'generalized Ricci flow' of the target-space geometry. This thesis is based on the conjecture that integrable sigma-models are renormalizable, or stable under the RG flow. It is widely understood that classically integrable theories are stable at the leading one-loop order, with only a few parameters running. Here we address what happens at higher-loop orders. We find that integrable sigma-models generally remain RG-stable at higher loops provided they receive a particular choice of finite counterterms, or quantum $\alpha'$ corrections to the target-space geometry. We explicitly construct these quantum corrections for examples of integrable $\eta$- and $\lambda$-deformed sigma-models. We then reformulate the $\lambda$-models as sigma-models on a tripled $G \times G \times G$ configuration space, where they become automatically renormalizable due to manifest symmetries and a decoupling of some fields. We also consider the integrable $G \times G$ and $G \times G/H$ models and construct a new class of integrable $G \times G/H$ models with abelian $H$. We then present a new and different link between integrability and the RG flow in the context of sigma-models with 'local couplings' depending explicitly on 2d time. Such models are naturally obtained in the light-cone gauge in string theory, pointing to the possibility of a large, new class of solvable string models.
Debiasing pipeline improves deep learning model generalization for X-ray based lung nodule detection ; Lung cancer is the leading cause of cancer death worldwide, and a good prognosis depends on early diagnosis. Unfortunately, screening programs for the early diagnosis of lung cancer are uncommon. This is in part due to the at-risk groups being located in rural areas far from medical facilities. Reaching these populations would require a scaled approach that combines mobility, low cost, speed, accuracy, and privacy. We can resolve these issues by combining the chest X-ray imaging mode with a federated deep-learning approach, provided that the federated model is trained on homogeneous data to ensure that no single data source can adversely bias the model at any point in time. In this study we show that an image preprocessing pipeline that homogenizes and debiases chest X-ray images can improve both internal classification and external generalization, paving the way for a low-cost and accessible deep learning-based clinical system for lung cancer screening. An evolutionary pruning mechanism is used to train a nodule detection deep learning model on the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is performed using all combinations of lung field segmentation, close cropping, and rib suppression operators. We show that this preprocessing pipeline results in deep learning models that successfully generalize to an independent lung nodule dataset, using ablation studies to assess the contribution of each operator in this pipeline. In stripping chest X-ray images of known confounding variables by lung field segmentation, along with suppression of signal noise from the bone structure, we can train a highly accurate deep learning lung nodule detection algorithm with an outstanding generalization accuracy of 89% on nodule samples in unseen data.
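The brightness/contrast homogenization step can be reproduced with off-the-shelf histogram equalization; a minimal scikit-image sketch follows (the segmentation, cropping, and rib-suppression operators of the full pipeline are beyond this snippet, and the helper name is hypothetical).

```python
import numpy as np
from skimage import exposure

def debias_cxr(img):
    """Histogram-equalize a grayscale chest X-ray to remove systematic
    brightness and contrast differences between data sources.

    img: 2-D numeric array; returns a float image in [0, 1].
    """
    img = img.astype(np.float64)
    span = np.ptp(img)
    img = (img - img.min()) / (span + 1e-12)   # rescale to [0, 1]
    return exposure.equalize_hist(img)

# Toy usage with random pixel data standing in for a radiograph.
fake_cxr = np.random.default_rng(0).integers(0, 4096, size=(256, 256))
out = debias_cxr(fake_cxr)
print(out.min(), out.max())
```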
Writhed Analytical Magnetic Flux Rope Model ; Observations of magnetic clouds, within interplanetary coronal mass ejections (ICMEs), are often well described by flux rope models. Most of these assume either a cylindrical or toroidal geometry. In some cases, these models are also capable of accounting for non-axisymmetric cross-sections, but they generally all assume axial invariance. It can be expected that any ICME, and its flux rope, will be deformed along its axis due to influences such as the solar wind. In this work, we aim to develop a writhed analytical magnetic flux rope model which would allow us to analytically describe a flux rope structure with varying curvature and torsion, so that we are no longer constrained to a cylindrical or toroidal geometry. In this first iteration of our model we solely focus on a circular cross-section of constant size. We describe our flux rope geometry in terms of a parametrized flux rope axis and a parallel transport frame. We derive expressions for the axial and poloidal magnetic field components under the assumption that the total axial magnetic flux is conserved. We find an entire class of possible solutions, which differ by the choice of integration constants, and present the results for a specific example. In general, we find that the twist of the magnetic field locally changes when the geometry deviates from a cylinder or torus. This new approach also allows us to generate completely new types of in situ magnetic field profiles which strongly deviate from those generated by cylindrical or toroidal models.
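The parallel transport frame used for such a geometry can be built numerically for any discretized axis curve: each normal vector is carried along the curve with minimal rotation, avoiding the Frenet frame's breakdown at points of vanishing curvature. Below is a numpy sketch of this construction, using a projection-based approximation that is valid for finely sampled curves; it is illustrative, not the paper's derivation.

```python
import numpy as np

def parallel_transport_frame(curve):
    """Approximate a parallel-transport (Bishop) frame along a curve.

    curve: (n, 3) array of points sampling the flux rope axis.
    Returns unit tangents t and the orthonormal normals n1, n2.
    """
    t = np.gradient(curve, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    n1 = np.zeros_like(t)
    seed = np.array([0.0, 0.0, 1.0])            # any vector not along t[0]
    n1[0] = seed - (seed @ t[0]) * t[0]
    n1[0] /= np.linalg.norm(n1[0])
    for i in range(1, len(curve)):
        v = n1[i - 1] - (n1[i - 1] @ t[i]) * t[i]  # re-project, no twist
        n1[i] = v / np.linalg.norm(v)
    return t, n1, np.cross(t, n1)

s = np.linspace(0.0, 2.0 * np.pi, 400)           # a writhed test curve
axis = np.stack([np.cos(s), np.sin(s), 0.3 * np.sin(2 * s)], axis=1)
t, n1, n2 = parallel_transport_frame(axis)
```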
FlowGNN: A Dataflow Architecture for Real-Time Workload-Agnostic Graph Neural Network Inference ; Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to graph-related problems such as quantum chemistry, drug discovery, and high energy physics. However, meeting demand for novel GNN models and fast inference simultaneously is challenging due to the gap between developing efficient accelerators and the rapid creation of new GNN models. Prior art focuses on accelerating specific classes of GNNs, such as Graph Convolutional Networks (GCN), but lacks generality to support a wide range of existing or new GNN models. Furthermore, most works rely on graph preprocessing to exploit data locality, making them unsuitable for real-time applications. To address these limitations, in this work, we propose a generic dataflow architecture for GNN acceleration, named FlowGNN, which is generalizable to the majority of message-passing GNNs. The contributions are threefold. First, we propose a novel and scalable dataflow architecture, which generally supports a wide range of GNN models with the message-passing mechanism. The architecture features a configurable dataflow optimized for simultaneous computation of node embedding, edge embedding, and message passing, which is generally applicable to all models. We also propose a rich library of model-specific components. Second, we deliver ultra-fast real-time GNN inference without any graph preprocessing, making it agnostic to dynamically changing graph structures. Third, we verify our architecture on the Xilinx Alveo U50 FPGA board and measure the on-board end-to-end performance. We achieve a speed-up of up to 24–254x against CPU (6226R) and 1.3–477x against GPU (A6000) with batch sizes 1 through 1024; we also outperform the SOTA GNN accelerator I-GCN by 1.26x speedup and 1.55x energy efficiency over four datasets. Our implementation code and on-board measurements are publicly available on GitHub.
FaceDubbing: Lip-Synchronous, Voice-Preserving Translation of Videos ; In this paper, we propose a neural end-to-end system for voice-preserving, lip-synchronous translation of videos. The system is designed to combine multiple component models and produces a video of the original speaker speaking in the target language that is lip-synchronous with the target speech, yet maintains the emphases in speech, the voice characteristics, and the face video of the original speaker. The pipeline starts with automatic speech recognition including emphasis detection, followed by a translation model. The translated text is then synthesized by a Text-to-Speech model that recreates the original emphases mapped from the original sentence. The resulting synthetic voice is then mapped back to the original speaker's voice using a voice conversion model. Finally, to synchronize the lips of the speaker with the translated audio, a conditional generative adversarial network-based model generates frames of adapted lip movements with respect to the input face image as well as the output of the voice conversion model. In the end, the system combines the generated video with the converted audio to produce the final output. The result is a video of a speaker speaking in another language without actually knowing it. To evaluate our design, we present a user study of the complete system as well as separate evaluations of the single components. Since there is no available dataset to evaluate our whole system, we collect a test set and evaluate our system on this test set. The results indicate that our system is able to generate convincing videos of the original speaker speaking the target language while preserving the original speaker's characteristics. The collected dataset will be shared.
COMET: Coverage-guided Model Generation For Deep Learning Library Testing ; Recent deep learning (DL) applications are mostly built on top of DL libraries. The quality assurance of these libraries is critical to the dependable deployment of DL applications. Techniques have been proposed to generate various DL models and apply them to test these libraries. However, their test effectiveness is constrained by the diversity of layer API calls in their generated DL models. Our study reveals that these techniques can cover at most 34.1% of layer inputs, 25.9% of layer parameter values, and 15.6% of layer sequences. As a result, we find that many bugs arising from specific layer API calls (i.e., specific layer inputs, parameter values, or layer sequences) can be missed by existing techniques. Because of this limitation, we propose COMET to effectively generate DL models with diverse layer API calls for DL library testing. COMET (1) designs a set of mutation operators and a coverage-based search algorithm to diversify layer inputs, layer parameter values, and layer sequences in DL models, and (2) proposes a model synthesis method to boost the test efficiency without compromising the layer API call diversity. Our evaluation results show that COMET outperforms baselines by covering twice as many layer inputs (69.7% vs. 34.1%), layer parameter values (50.2% vs. 25.9%), and layer sequences (39.0% vs. 15.6%) as those by the state-of-the-art. Moreover, COMET covers 3.4% more library branches than those by existing techniques. Finally, COMET detects 32 new bugs in the latest version of eight popular DL libraries, including TensorFlow and MXNet, with 21 of them confirmed by DL library developers and 7 of those confirmed bugs fixed by developers.
SHAARP: An Open-Source Package for Analytical and Numerical Modeling of Optical Second Harmonic Generation in Anisotropic Crystals ; Optical second harmonic generation (SHG) is a second-order nonlinear process that combines two photons of a given frequency into a third photon at twice the frequency. Due to the symmetry constraints, it is widely used as a sensitive probe to detect broken inversion symmetry and local polar order. Analytical modeling of the electric-dipole SHG response is essential to extract fundamental properties of materials from experiments. However, complexity builds up dramatically in the analytical model when the probed crystal is of a low bulk crystal symmetry, with a low-symmetry surface orientation, exhibits absorption and dispersion, and consists of multiple interfaces. As a result, there is a largely uneven landscape in the literature on the SHG modeling of new materials, involving numerous approximations and a wide range of inaccuracies, leading to a rather scattered dataset of reported SHG nonlinear susceptibilities. Towards streamlining the reliability and accuracy of this process, we have developed an open-source package called the Second Harmonic Analysis of Anisotropic Rotational Polarimetry (SHAARP), which derives analytical solutions and performs numerical simulations of reflection SHG from a single interface for homogeneous crystals. Five key generalizations in SHG modeling are implemented, including all crystal symmetries down to triclinic, any crystal orientation, complex dielectric tensors (refractive indices) with frequency dispersion, and general polarization states of the light. SHAARP enables accurate anisotropic modeling of the SHG response for a broad range of materials systems. The method is extendible to multiple interfaces. The code is free to download from https://github.com/RuiZu/SHAARP
Rethinking Generalization: The Impact of Annotation Style on Medical Image Segmentation ; Generalization is an important attribute of machine learning models, particularly for those that are to be deployed in a medical context, where unreliable predictions can have real world consequences. While the failure of models to generalize across datasets is typically attributed to a mismatch in the data distributions, performance gaps are often a consequence of biases in the 'ground-truth' label annotations. This is particularly important in the context of medical image segmentation of pathological structures (e.g. lesions), where the annotation process is much more subjective and affected by a number of underlying factors, including the annotation protocol, rater education/experience, and clinical aims, among others. In this paper, we show that modeling annotation biases, rather than ignoring them, poses a promising way of accounting for differences in annotation style across datasets. To this end, we propose a generalized conditioning framework to (1) learn and account for different annotation styles across multiple datasets using a single model, (2) identify similar annotation styles across different datasets in order to permit their effective aggregation, and (3) fine-tune a fully trained model to a new annotation style with just a few samples. Next, we present an image-conditioning approach to model annotation styles that correlate with specific image features, potentially enabling detection biases to be more easily identified.
From fat droplets to floating forests: cross-domain transfer learning using a PatchGAN-based segmentation model ; Many scientific domains gather sufficient labels to train machine algorithms through human-in-the-loop techniques provided by the Zooniverse.org citizen science platform. As the range of projects, task types and data rates increase, acceleration of model training is of paramount concern to focus volunteer effort where most needed. The application of Transfer Learning (TL) between Zooniverse projects holds promise as a solution. However, understanding the effectiveness of TL approaches that pretrain on large-scale generic image sets vs. images with similar characteristics, possibly from similar tasks, is an open challenge. We apply a generative segmentation model on two Zooniverse project-based data sets: (1) to identify fat droplets in liver cells (FatChecker; FC) and (2) the identification of kelp beds in satellite images (Floating Forests; FF) through transfer learning from the first project. We compare and contrast its performance with a TL model based on the COCO image set, and subsequently with baseline counterparts. We find that both the FC and COCO TL models perform better than the baseline cases when using 75% of the original training sample size. The COCO-based TL model generally performs better than the FC-based one, likely due to its generalized features. Our investigations provide important insights into the usage of TL approaches on multi-domain data hosted across different Zooniverse projects, enabling future projects to accelerate task completion.
OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization ; Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B, and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats: PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
An Experience-based Direct Generation approach to Automatic Image Cropping ; Automatic image cropping is a challenging task with many practical downstream applications. The task is often divided into sub-problems: generating cropping candidates, finding the visually important regions, and determining aesthetics to select the most appealing candidate. Prior approaches model one or more of these sub-problems separately, and often combine them sequentially. We propose a novel convolutional neural network (CNN) based method to crop images directly, without explicitly modeling image aesthetics, evaluating multiple crop candidates, or detecting visually salient regions. Our model is trained on a large dataset of images cropped by experienced editors and can simultaneously predict bounding boxes for multiple fixed aspect ratios. We consider the aspect ratio of the cropped image to be a critical factor that influences aesthetics. Prior approaches for automatic image cropping did not enforce the aspect ratio of the outputs, likely due to a lack of datasets for this task. We, therefore, benchmark our method on public datasets for two related tasks: first, aesthetic image cropping without regard to aspect ratio, and second, thumbnail generation that requires fixed aspect ratio outputs, but where aesthetics are not crucial. We show that our strategy is competitive with or performs better than existing methods in both these tasks. Furthermore, our one-stage model is easier to train and significantly faster than existing two-stage or end-to-end methods for inference. We present a qualitative evaluation study, and find that our model is able to generalize to diverse images from unseen datasets and often retains compositional properties of the original images after cropping. Our results demonstrate that explicitly modeling image aesthetics or visual attention regions is not necessarily required to build a competitive image cropping algorithm.
Regulating ChatGPT and other Large Generative AI Models ; Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. The paper argues for three layers of obligations concerning LGAIMs: minimum standards for all LGAIMs; high-risk obligations for high-risk use cases; and collaborations along the AI value chain. In general, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA's content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep track with the dynamics of ChatGPT et al.
Adversarial Transformer Language Models for Contextual Commonsense Inference ; Contextualized or discourse-aware commonsense inference is the task of generating coherent commonsense assertions (i.e., facts) from a given story, and a particular sentence from that story. Some problems with the task are: lack of controllability for topics of the inferred facts; lack of commonsense knowledge during training; and, possibly, hallucinated or false facts. In this work, we utilize a transformer model for this task and develop techniques to address the aforementioned problems in the task. We control the inference by introducing a new technique we call 'hinting'. Hinting is a kind of language model prompting that utilizes both hard prompts (specific words) and soft prompts (virtual learnable templates). This serves as a control signal to advise the language model what to talk about. Next, we establish a methodology for performing joint inference with multiple commonsense knowledge bases. Joint inference of commonsense requires care, because it is imprecise and the level of generality is more flexible. You want to be sure that the results still make sense for the context. To this end, we align the textual version of assertions from three knowledge graphs (ConceptNet, ATOMIC2020, and GLUCOSE) with a story and a target sentence. This combination allows us to train a single model to perform joint inference with multiple knowledge graphs. We show experimental results for the three knowledge graphs on joint inference. Our final contribution is exploring a GAN architecture that generates the contextualized commonsense assertions and scores them as to their plausibility through a discriminator. The result is an integrated system for contextual commonsense inference in stories that can controllably generate plausible commonsense assertions, and takes advantage of joint inference between multiple commonsense knowledge bases.
HARDC: A novel ECG-based heartbeat classification method to detect arrhythmia using hierarchical attention-based dual-structured RNN with dilated CNN ; In this paper, we have developed a novel hybrid hierarchical attention-based bidirectional recurrent neural network with dilated CNN (HARDC) method for arrhythmia classification. This solves problems that arise when traditional dilated convolutional neural network (CNN) models disregard the correlation between contexts and suffer from gradient dispersion. The proposed HARDC fully exploits the dilated CNN and bidirectional recurrent neural network unit (BiGRU/BiLSTM) architecture to generate fusion features. As a result of incorporating both local and global feature information and an attention mechanism, the model's performance for prediction is improved. By combining the fusion features with a dilated CNN and a hierarchical attention mechanism, the trained HARDC model showed significantly improved classification results and interpretability of feature extraction on the PhysioNet 2017 challenge dataset. Sequential Z-score normalization, filtering, denoising, and segmentation are used to prepare the raw data for analysis. A Conditional Generative Adversarial Network (CGAN) is then used to generate synthetic signals from the processed data. The experimental results demonstrate that the proposed HARDC model significantly outperforms other existing models, achieving an accuracy of 99.60%, an F1 score of 98.21%, a precision of 97.66%, and a recall of 99.60% using MIT-BIH generated ECG. In addition, this approach substantially reduces run time when using dilated CNN compared to normal convolution. Overall, this hybrid model demonstrates an innovative and cost-effective strategy for ECG signal compression and high-performance ECG recognition. Our results indicate that an automated and highly computed method to classify multiple types of arrhythmia signals holds considerable promise.
Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models ; The generative AI revolution in recent years has been spurred by an expansion in compute power and data quantity, which together enable extensive pre-training of powerful text-to-image (T2I) models. With their greater capabilities to generate realistic and creative content, these T2I models, like DALL-E, MidJourney, Imagen or Stable Diffusion, are reaching ever wider audiences. Any unsafe behaviors inherited from pre-training on uncurated internet-scraped datasets thus have the potential to cause wide-reaching harm, for example, through generated images which are violent, sexually explicit, or contain biased and derogatory stereotypes. Despite this risk of harm, we lack systematic and structured evaluation datasets to scrutinize model behavior, especially adversarial attacks that bypass existing safety filters. A typical bottleneck in safety evaluation is achieving a wide coverage of different types of challenging examples in the evaluation set, i.e., identifying 'unknown unknowns' or long-tail problems. To address this need, we introduce the Adversarial Nibbler challenge. The goal of this challenge is to crowdsource a diverse set of failure modes and reward challenge participants for successfully finding safety vulnerabilities in current state-of-the-art T2I models. Ultimately, we aim to provide greater awareness of these issues and assist developers in improving the future safety and reliability of generative AI models. Adversarial Nibbler is a data-centric challenge, part of the DataPerf challenge suite, organized and supported by Kaggle and MLCommons.
Using generative AI to investigate medical imagery models and datasets ; AI models have shown promise in many medical imaging tasks. However, our ability to explain what signals these models have learned is severely lacking. Explanations are needed in order to increase the trust in AI-based models, and could enable novel scientific discovery by uncovering signals in the data that are not yet known to experts. In this paper, we present a method for automatic visual explanations leveraging team-based expertise by generating hypotheses of what visual signals in the images are correlated with the task. We propose the following four steps: (i) train a classifier to perform a given task, (ii) train a classifier-guided StyleGAN-based image generator (StylEx), (iii) automatically detect and visualize the top visual attributes that the classifier is sensitive towards, and (iv) formulate hypotheses for the underlying mechanisms, to stimulate future research. Specifically, we present the discovered attributes to an interdisciplinary panel of experts so that hypotheses can account for social and structural determinants of health. We demonstrate results on eight prediction tasks across three medical imaging modalities: retinal fundus photographs, external eye photographs, and chest radiographs. We showcase examples of attributes that capture clinically known features, confounders that arise from factors beyond physiological mechanisms, and reveal a number of physiologically plausible novel attributes. Our approach has the potential to enable researchers to better understand, improve their assessment of, and extract new knowledge from AI-based models. Importantly, we highlight that attributes generated by our framework can capture phenomena beyond physiology or pathophysiology, reflecting the real-world nature of healthcare delivery and socio-cultural factors. Finally, we intend to release code to enable researchers to train their own StylEx models and analyze their predictive tasks.
Thrilled by Your Progress! Large Language Models (GPT-4) No Longer Struggle to Pass Assessments in Higher Education Programming Courses ; This paper studies recent developments in large language models' (LLM) abilities to pass assessments in introductory and intermediate Python programming courses at the postsecondary level. The emergence of ChatGPT resulted in heated debates of its potential uses (e.g., exercise generation, code explanation) as well as misuses in programming classes (e.g., cheating). Recent studies show that while the technology performs surprisingly well on diverse sets of assessment instruments employed in typical programming classes, the performance is usually not sufficient to pass the courses. The release of GPT-4 largely emphasized notable improvements in the capabilities related to handling assessments originally designed for human test-takers. This study is the necessary analysis in the context of this ongoing transition towards mature generative AI systems. Specifically, we report the performance of GPT-4, comparing it to the previous generations of GPT models, on three Python courses with assessments ranging from simple multiple-choice questions (no code involved) to complex programming projects with code bases distributed into multiple files (599 exercises overall). Additionally, we analyze the assessments that were not handled well by GPT-4 to understand the current limitations of the model, as well as its capabilities to leverage feedback provided by an autograder. We found that the GPT models evolved from completely failing the typical programming class's assessments (the original GPT-3) to confidently passing the courses with no human involvement (GPT-4). While we identified certain limitations in GPT-4's handling of MCQs and coding exercises, the rate of improvement across the recent generations of GPT models strongly suggests their potential to handle almost any type of assessment widely used in higher education programming courses. These findings could be leveraged by educators and institutions to adapt the design of programming assessments as well as to fuel the necessary discussions into how programming classes should be updated to reflect the recent technological developments. This study provides evidence that programming instructors need to prepare for a world in which there is an easy-to-use, widely accessible technology that can be utilized by learners to collect passing scores, with no effort whatsoever, on what today counts as viable programming knowledge and skills assessments.
Zero-Shot Dense Video Captioning by Jointly Optimizing Text and Moment ; Dense video captioning, a task of localizing meaningful moments and generating relevant captions for videos, often requires a large, expensive corpus of annotated video segments paired with text. In an effort to minimize the annotation cost, we propose ZeroTA, a novel method for dense video captioning in a zero-shot manner. Our method does not require any videos or annotations for training; instead, it localizes and describes events within each input video at test time by optimizing solely on the input. This is accomplished by introducing a soft moment mask that represents a temporal segment in the video and jointly optimizing it with the prefix parameters of a language model. This joint optimization aligns a frozen language generation model (i.e., GPT-2) with a frozen vision-language contrastive model (i.e., CLIP) by maximizing the matching score between the generated text and a moment within the video. We also introduce a pairwise temporal IoU loss to let a set of soft moment masks capture multiple distinct events within the video. Our method effectively discovers diverse significant events within the video, with the resulting captions appropriately describing these events. The empirical results demonstrate that ZeroTA surpasses zero-shot baselines and even outperforms the state-of-the-art few-shot method on the widely used benchmark ActivityNet Captions. Moreover, our method shows greater robustness compared to supervised methods when evaluated in out-of-domain scenarios. This research provides insight into the potential of aligning widely used models, such as language generation models and vision-language models, to unlock a new capability: understanding temporal aspects of videos.
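A minimal sketch of the two ingredients specific to this localization scheme, the differentiable soft moment mask and the pairwise temporal IoU penalty, might look as follows; the mask parameterization and names are assumptions for illustration, not the authors' implementation.

```python
import torch

def soft_mask(center, width, n_frames):
    """Differentiable temporal mask: a Gaussian bump over frame indices,
    with center and width given as fractions of the video length."""
    t = torch.arange(n_frames, dtype=torch.float32)
    return torch.exp(-0.5 * ((t - center * n_frames) / (width * n_frames)) ** 2)

def pairwise_temporal_iou_loss(masks):
    """Penalize overlap between soft moment masks so they cover distinct
    events. masks: (K, T) tensor of K soft masks over T frames."""
    k = masks.shape[0]
    loss = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            inter = torch.minimum(masks[i], masks[j]).sum()
            union = torch.maximum(masks[i], masks[j]).sum()
            loss = loss + inter / (union + 1e-8)   # soft IoU of the pair
    return loss / (k * (k - 1) / 2)

masks = torch.stack([soft_mask(0.2, 0.05, 100), soft_mask(0.7, 0.10, 100)])
print(pairwise_temporal_iou_loss(masks))   # near zero for well-separated masks
```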
Generative Language Models on Nucleotide Sequences of Human Genes ; Language models, primarily transformer-based ones, have achieved colossal success in NLP. To be more precise, studies like BERT for NLU and works such as GPT-3 for NLG are very crucial. DNA sequences are very close to natural language in terms of structure, so in the DNA-related bioinformatics domain, discriminative models such as DNABert exist. Yet, the generative side of the coin is mainly unexplored to the best of our knowledge. Consequently, we focused on developing an autoregressive generative language model like GPT-3 for DNA sequences. Because working with whole DNA sequences is challenging without substantial computational resources, we decided to carry out our study on a smaller scale, focusing on nucleotide sequences of human genes, i.e., unique parts of DNA with specific functionalities, instead of the whole DNA. This decision did not change the problem structure much, since both DNA and genes can be seen as 1D sequences consisting of four different nucleotides, without losing much information or oversimplifying. First of all, we systematically examined an almost entirely unexplored problem and observed that RNNs performed the best, while simple techniques like N-grams were also promising. Another beneficial point was learning how to work with generative models on languages we do not understand, unlike natural language. We also observed how essential it is to use real-life tasks beyond classical metrics such as perplexity. Furthermore, we examined whether the data-hungry nature of these models can be changed by selecting a language with a minimal vocabulary size, namely four, owing to the four different types of nucleotides, since choosing such a language might make the problem easier. However, what we observed in this study was that this did not provide much of a change in the amount of data needed.
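As a flavor of the simplest baseline mentioned here, an N-gram generator over the four-nucleotide vocabulary fits in a few lines; the toy sequences below are made up for illustration and are not from the study's data.

```python
from collections import Counter, defaultdict
import random

def train_ngram(sequences, n=3):
    """Count n-gram continuations over the 4-letter nucleotide vocabulary."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - n):
            counts[seq[i:i + n]][seq[i + n]] += 1
    return counts

def generate(counts, seed, length=60, n=3):
    out = list(seed)
    for _ in range(length):
        ctx = "".join(out[-n:])
        dist = counts.get(ctx)
        if not dist:                       # unseen context: fall back to uniform
            out.append(random.choice("ACGT"))
            continue
        nucleotides, weights = zip(*dist.items())
        out.append(random.choices(nucleotides, weights=weights)[0])
    return "".join(out)

counts = train_ngram(["ATGGCGTACGTTAGC", "ATGCCGTAGGCTAAC"])  # toy data
print(generate(counts, seed="ATG"))
```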
InverseSR: 3D Brain MRI Super-Resolution Using a Latent Diffusion Model ; High-resolution (HR) MRI scans obtained from research-grade medical centers provide precise information about imaged tissues. However, routine clinical MRI scans are typically in low resolution (LR) and vary greatly in contrast and spatial resolution due to the adjustments of the scanning parameters to the local needs of the medical center. End-to-end deep learning methods for MRI super-resolution (SR) have been proposed, but they require retraining each time there is a shift in the input distribution. To address this issue, we propose a novel approach that leverages a state-of-the-art 3D brain generative model, the latent diffusion model (LDM) trained on UK Biobank, to increase the resolution of clinical MRI scans. The LDM acts as a generative prior, which has the ability to capture the prior distribution of 3D T1-weighted brain MRI. Based on the architecture of the brain LDM, we find that different methods are suitable for different settings of MRI SR, and thus propose two novel strategies: (1) for SR with more sparsity, we invert through both the decoder of the LDM and a deterministic Denoising Diffusion Implicit Model (DDIM), an approach we call InverseSR(LDM); (2) for SR with less sparsity, we invert only through the LDM decoder, an approach we call InverseSR(Decoder). These two approaches search different latent spaces in the LDM model to find the optimal latent code to map the given LR MRI into HR. The training process of the generative model is independent of the MRI undersampling process, ensuring the generalization of our method to many MRI SR problems with different input measurements. We validate our method on over 100 brain T1w MRIs from the IXI dataset. Our results demonstrate that the powerful priors given by the LDM can be used for MRI reconstruction.
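The core of both strategies is a search for a latent code whose decoded volume explains the LR observation. Below is a minimal latent-optimization sketch, with `decoder` and `downsample` as hypothetical stand-ins for the LDM decoder and an assumed degradation operator; it is an illustration of the inversion idea, not the authors' pipeline.

```python
import torch

def inverse_sr(decoder, downsample, lr_scan, z_shape, steps=500, lr=1e-2):
    """Search the latent space for a code whose decoded HR volume, once
    degraded, matches the observed LR scan."""
    z = torch.randn(z_shape, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        hr = decoder(z)                              # candidate HR volume
        loss = torch.nn.functional.mse_loss(downsample(hr), lr_scan)
        loss.backward()
        opt.step()
    return decoder(z).detach()                       # final HR reconstruction
```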
Model-based causal feature selection for general response types ; Discovering causal relationships from observational data is a fundamental yet challenging task. In some applications, it may suffice to learn the causal features of a given response variable, instead of learning the entire underlying causal structure. Invariant causal prediction (ICP, Peters et al., 2016) is a method for causal feature selection which requires data from heterogeneous settings. ICP assumes that the mechanism for generating the response from its direct causes is the same in all settings and exploits this invariance to output a subset of the causal features. The framework of ICP has been extended to general additive noise models and to nonparametric settings using conditional independence testing. However, nonparametric conditional independence testing often suffers from low power (or poor type I error control), and the aforementioned parametric models are not suitable for applications in which the response is not measured on a continuous scale, but rather reflects categories or counts. To bridge this gap, we develop ICP in the context of transformation models (TRAMs), allowing for continuous, categorical, count-type, and uninformatively censored responses (we show that, in general, these model classes do not allow for identifiability when there is no exogenous heterogeneity). We propose TRAM-GCM, a test for invariance of a subset of covariates, based on the expected conditional covariance between environments and score residuals, which satisfies uniform asymptotic level guarantees. For the special case of linear shift TRAMs, we propose an additional invariance test, TRAM-Wald, based on the Wald statistic. We implement both proposed methods in the open-source R package "tramicp" and show in simulations that, under correct model specification, our approach empirically yields higher power than nonparametric ICP based on conditional independence testing.
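The flavor of a GCM-style invariance test is easy to convey in a simplified linear setting. The sketch below substitutes ordinary least-squares residuals for the score residuals of a fitted transformation model, so it is only an illustration of the test statistic's structure, not TRAM-GCM itself.

```python
import numpy as np
from scipy.stats import norm

def gcm_invariance_test(X_S, y, env):
    """Toy GCM-style invariance check for a candidate causal set S.
    env: 0/1 environment indicator; large p-value = no evidence against
    invariance of S."""
    def residualize(target, X):
        X1 = np.column_stack([np.ones(len(target)), X])
        beta, *_ = np.linalg.lstsq(X1, target, rcond=None)
        return target - X1 @ beta
    r_y = residualize(y, X_S)                  # response residuals given X_S
    r_e = residualize(env.astype(float), X_S)  # environment residuals given X_S
    prod = r_y * r_e
    stat = np.sqrt(len(prod)) * prod.mean() / prod.std(ddof=1)
    return 2 * norm.sf(abs(stat))              # two-sided p-value

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)); e = rng.integers(0, 2, 500)
y = X @ np.array([1.0, -0.5]) + rng.normal(size=500)
print(gcm_invariance_test(X, y, e))   # invariance should not be rejected here
```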
General SU(2)_L × SU(2)_R × U(1)_EM Sigma Model with External Sources, Dynamical Breaking and Spontaneous Vacuum Symmetry Breaking ; We give a general SU(2)_L × SU(2)_R × U(1)_EM sigma model with external sources, dynamical breaking and spontaneous vacuum symmetry breaking, and present the general formulation of the model. It is found that σ and π^0, although without electric charges, have electromagnetic interaction effects coming from their internal structure. A general Lorentz transformation relative to the external sources J_gauge = (J_{Aμ}, J_{Aμκ}) is derived; using the general Lorentz transformation and the four-dimensional current of nuclear matter of the ground state with J_gauge = 0, we give the four-dimensional general relations between the different currents of nuclear matter systems with J_gauge ≠ 0 and those with J_gauge = 0. The relation of the density's coupling with the external magnetic field is derived, which conforms well to dense nuclear matter in a strong magnetic field. We show different condensation effects in the strong interaction for fermions and antifermions, and give the concrete scalar and pseudoscalar condensate expressions of the σ_0 and π_0 bosons. For different dynamical breakings and spontaneous vacuum symmetry breakings, the concrete expressions of the different mass spectra are obtained in field theory. This paper acquires the running spontaneous vacuum breaking value σ_0', and obtains the spontaneous vacuum breaking in terms of the running σ_0', which makes the nucleon, σ and π particles gain effective masses. We achieve both the effect of the external sources and a nonvanishing value of the condensed scalar and pseudoscalar particles. It is deduced that the masses of the nucleons, σ and π generally depend on the different external sources.
Finsler-Lagrange Geometries and Standard Theories in Physics: New Methods in Einstein and String Gravity ; In this article, we review the current status of Finsler-Lagrange geometry and generalizations. The goal is to aid non-experts on Finsler spaces, but physicists and geometers skilled in general relativity and particle theories, to understand the crucial importance of such geometric methods for applications in modern physics. We also would like to orient mathematicians working in generalized Finsler and Kähler geometry and geometric mechanics on how they could present their results in order to be accepted by the community of "orthodox" physicists. Although the bulk of former models of Finsler-Lagrange spaces were elaborated on tangent bundles, the surprising result advocated in our works is that such locally anisotropic structures can be modelled equivalently on Riemann-Cartan spaces, even as exact solutions in Einstein and/or string gravity, if nonholonomic distributions and moving frames of reference are introduced into consideration. We also propose a canonical scheme whereby geometrical objects on a pseudo-Riemannian space are nonholonomically deformed into generalized Lagrange, or Finsler, configurations on the same manifold or on a corresponding tangent bundle. Such canonical transforms are defined by the coefficients of a prime metric (which can be a solution of the Einstein equations) and generate target spaces as generalized Lagrange structures, their models of almost Hermitian (Kähler) or nonholonomic Riemann spaces with constant curvature, for some Finsler-like connections. We formulate criteria for when such constructions can be redefined equivalently in terms of the Levi-Civita connection.
The puzzle of metallicity and multiple stellar populations in the Globular Clusters in Fornax ; We examine the photometric data for Fornax clusters, focussing our attention on their horizontal branch color distribution and, when available, on the fraction and period distribution of the RR Lyr variables. Based on our understanding of the HB morphology in terms of varying helium content in the context of multiple stellar generations, we show that clusters F2, F3 and F5 must contain substantial fractions (54-65%) of second generation stars. On the basis of a simple chemical evolution model, we show that the helium distribution in these clusters can be reproduced by models with cluster initial masses ranging from 4 to 10 times larger than the current masses. Models with a very short second generation star formation episode can also reproduce the observed helium distribution but require larger initial masses, up to about twenty times the current mass. While the lower limit of this range of possible initial GC masses is consistent with those suggested by the observations of the low metallicity field stars, we also discuss the possibility that the metallicity scale of field stars based on CaII triplet spectroscopy and the metallicities derived for the clusters in Fornax may not be consistent with each other. The reproduction of the HB morphology in F2, F3 and F5 requires two interesting hypotheses: (1) the first generation HB stars all lie at red colours. According to this interpretation, the low metallicity stars in the field of Fornax, populating the HB at colours bluer than the blue side ((V-I)_0 = 0.3 or (B-V)_0 = 0.2) of the RR Lyrs, should be second generation stars born in the clusters; a preliminary analysis of available colour surveys of the Fornax field provides a fraction of ~20% of blue HB stars in the low metallicity range; (2) the mass loss from individual second generation red giants is a few hundredths of a solar mass larger than the mass loss from first generation stars.
Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms ; New phase transition phenomena have recently been discovered for the stochastic block model, for the special case of two non-overlapping symmetric communities. This gives rise, in particular, to new algorithmic challenges driven by the thresholds. This paper investigates whether a general phenomenon takes place for multiple communities, without imposing symmetry. In the general stochastic block model SBM(n, p, Q), n vertices are split into k communities of relative size p_i, i in [k], and vertices in communities i and j connect independently with probability Q_{i,j}, i, j in [k]. This paper investigates the partial and exact recovery of communities in the general SBM in the constant and logarithmic degree regimes, and uses the generality of the results to tackle overlapping communities. The contributions of the paper are: (i) an explicit characterization of the recovery threshold in the general SBM in terms of a new divergence function D, which generalizes the Hellinger and Chernoff divergences, and which provides an operational meaning to a divergence function analogous to the KL-divergence in the channel coding theorem; (ii) the development of an algorithm that recovers the communities all the way down to the optimal threshold and runs in quasi-linear time, showing that exact recovery has no information-theoretic to computational gap for multiple communities, in contrast to the conjectures made for detection with more than 4 communities (note that the algorithm is optimal both in terms of achieving the threshold and in having quasi-linear complexity); (iii) the development of an efficient algorithm that detects communities in the constant degree regime with an explicit accuracy bound that can be made arbitrarily close to 1 when a prescribed signal-to-noise ratio, defined in terms of the spectrum of diag(p)Q, tends to infinity.
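Sampling from the general model SBM(n, p, Q) is straightforward and useful for experimenting with the regimes discussed above; here is a minimal NumPy sketch, with parameter values chosen purely for illustration.

```python
import numpy as np

def sample_sbm(n, p, Q, rng=None):
    """Draw an undirected graph from SBM(n, p, Q): community labels follow
    the relative sizes p, and nodes in communities i, j connect with
    probability Q[i, j]."""
    rng = rng or np.random.default_rng()
    labels = rng.choice(len(p), size=n, p=p)
    probs = Q[labels[:, None], labels[None, :]]       # per-pair edge probs
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample upper triangle
    return (upper | upper.T).astype(int), labels

# two planted communities in the logarithmic-degree regime (toy values)
n = 1000
Q = (np.log(n) / n) * np.array([[8.0, 2.0], [2.0, 8.0]])
A, z = sample_sbm(n, p=[0.5, 0.5], Q=Q)
```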
Fast cosmic web simulations with generative adversarial networks ; Dark matter in the universe evolves through gravity to form a complex network of halos, filaments, sheets and voids, that is known as the cosmic web. Computational models of the underlying physical processes, such as classical N-body simulations, are extremely resource intensive, as they track the action of gravity in an expanding universe using billions of particles as tracers of the cosmic matter distribution. Therefore, upcoming cosmology experiments will face a computational bottleneck that may limit the exploitation of their full scientific potential. To address this challenge, we demonstrate the application of a machine learning technique called Generative Adversarial Networks (GANs) to learn models that can efficiently generate new, physically realistic realizations of the cosmic web. Our training set is a small, representative sample of 2D image snapshots from N-body simulations of size 500 and 100 Mpc. We show that the GAN-generated samples are qualitatively and quantitatively very similar to the originals. For the larger boxes of size 500 Mpc, it is very difficult to distinguish them visually. The agreement of the power spectrum P(k) is 1-2% for most of the range, between k = 0.06 and k = 0.4. An important advantage of generating cosmic web realizations with a GAN is the considerable gain in terms of computation time. Each new sample generated by a GAN takes a fraction of a second, compared to the many hours needed by traditional N-body techniques. We anticipate that the use of generative models such as GANs will therefore play an important role in providing extremely fast and precise simulations of the cosmic web in the era of large cosmological surveys, such as Euclid and the Large Synoptic Survey Telescope (LSST).
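The P(k) comparison underlying these numbers can be reproduced for any 2D field with a simple isotropically binned estimator. The sketch below is illustrative; normalization conventions for P(k) vary, so only relative comparisons between real and generated fields should be read off it.

```python
import numpy as np

def power_spectrum_2d(field, box_size, n_bins=20):
    """Isotropically binned power spectrum P(k) of a 2D density field,
    a common check that GAN samples match N-body statistics."""
    n = field.shape[0]
    delta = field / field.mean() - 1.0              # density contrast
    fk = np.fft.fftn(delta)
    pk2d = np.abs(fk) ** 2 * (box_size / n**2) ** 2
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kmag = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2).ravel()
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    which = np.digitize(kmag, bins)
    pk = []
    for b in range(1, n_bins + 1):
        vals = pk2d.ravel()[which == b]
        pk.append(vals.mean() if vals.size else 0.0)
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, np.array(pk)
```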
A Neural Vocoder with Hierarchical Generation of Amplitude and Phase Spectra for Statistical Parametric Speech Synthesis ; This paper presents a neural vocoder named HiNet which reconstructs speech waveforms from acoustic features by predicting amplitude and phase spectra hierarchically. Different from existing neural vocoders such as WaveNet, SampleRNN and WaveRNN, which directly generate waveform samples using single neural networks, the HiNet vocoder is composed of an amplitude spectrum predictor (ASP) and a phase spectrum predictor (PSP). The ASP is a simple DNN model which predicts log amplitude spectra (LAS) from acoustic features. The predicted LAS are sent into the PSP for phase recovery. Considering the issue of phase warping and the difficulty of phase modeling, the PSP is constructed by concatenating a neural source-filter (NSF) waveform generator with a phase extractor. We also introduce generative adversarial networks (GANs) into both ASP and PSP. Finally, the outputs of ASP and PSP are combined to reconstruct speech waveforms by short-time Fourier synthesis. Since there are no autoregressive structures in either predictor, the HiNet vocoder can generate speech waveforms with high efficiency. Objective and subjective experimental results show that our proposed HiNet vocoder achieves better naturalness of reconstructed speech than the conventional STRAIGHT vocoder, a 16-bit WaveNet vocoder using an open-source implementation and an NSF vocoder with complexity similar to the PSP, and obtains performance similar to a 16-bit WaveRNN vocoder. We also find that the performance of HiNet is insensitive to the complexity of the neural waveform generator in the PSP to some extent. After simplifying its model structure, the time consumed for generating 1 s of 16 kHz speech waveforms using a GPU can be further reduced from 0.34 s to 0.19 s without significant quality degradation.
Nonlinear 3D Cosmic Web Simulation with Heavy-Tailed Generative Adversarial Networks ; Fast and accurate simulations of the nonlinear evolution of the cosmic density field are a major component of many cosmological analyses, but the computational time and storage required to run them can be exceedingly large. For this reason, we use generative adversarial networks (GANs) to learn a compressed representation of the 3D matter density field that is fast and easy to sample, and for the first time show that GANs are capable of generating samples at the level of accuracy of other conventional methods. Using sub-volumes from a suite of GADGET-2 N-body simulations, we demonstrate that a deep-convolutional GAN can generate samples that capture both large- and small-scale features of the matter density field, as validated through a variety of n-point statistics. The use of a data scaling that preserves high-density features and a heavy-tailed latent space prior allow us to obtain state-of-the-art results for fast 3D cosmic web generation. In particular, the mean power spectra from generated samples agree to within 5% up to k = 3 and within 10% for k = 5 when compared with N-body simulations, and similar accuracy is obtained for a variety of bispectra. By modeling the latent space with a heavy-tailed prior rather than a standard Gaussian, we better capture sample variance in the high-density voxel PDF and reduce errors in the power spectrum and bispectrum covariance on all scales. Furthermore, we show that a conditional GAN can smoothly interpolate between samples conditioned on redshift. Deep generative models, such as the ones described in this work, hold great promise as fast, low-memory, high-fidelity forward models of large-scale structure.
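Swapping the standard Gaussian latent prior for a heavy-tailed one is a one-line change in most GAN codebases. A minimal sketch using a Student-t distribution follows; the degrees-of-freedom value is an assumption for illustration, not necessarily the paper's setting.

```python
import torch

def sample_latents(batch, dim, df=3.0, heavy_tailed=True):
    """Latent draws for the generator: a Student-t prior keeps rare
    high-density regions reachable, unlike a standard Gaussian."""
    if not heavy_tailed:
        return torch.randn(batch, dim)
    t_dist = torch.distributions.StudentT(df=df)
    return t_dist.sample((batch, dim))

z = sample_latents(16, 128)   # feed to the 3D generator as usual
```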
Polynomial algebras from su(3) and the generic model on the two-sphere ; Constructions of superintegrable systems based on Lie algebras have been introduced over the years. However, these approaches depend on explicit realisations, for instance as differential operators, of the underlying Lie algebra. This is also the case for the construction of their related symmetry algebras, which usually take the form of finitely generated quadratic algebras. These algebras often display structure constants which depend on the central elements and in particular on the Hamiltonian. In this paper, we develop a new approach, re-examining the case of the generic superintegrable system on the 2-sphere, for which a symmetry algebra is known to be the Racah algebra R(3). Such a model is related to the 59 2D superintegrable systems on conformally flat spaces and their 12 equivalence classes. We demonstrate that, using further polynomials of degree 2, 3 and 4 in the enveloping algebra of su(3), one can generate an algebra based only on the abstract commutation relations of the su(3) Lie algebra, without explicit constraints on the representations or realisations. This construction relies on the maximal Abelian subalgebra (MASA), namely the Cartan generators and their commutant. We obtain a new 6-dimensional cubic algebra whose structure constants are integers, which reduces from a quartic algebra whose structure constants depend on the Cartan generator and the Casimir invariant. We also present other forms of the symmetry algebra using the quadratic and cubic Casimir invariants of su(3). It reduces to the known quadratic Racah algebra R(3) only when an explicit realization is used. This algebraic structure describes the symmetry of the generic superintegrable system on the 2-sphere. We also present a contraction to another 6-dimensional cubic algebra which would correspond to the symmetry algebra of a Smorodinsky-Winternitz model.
Image Comes Dancing with Collaborative Parsing-Flow Video Synthesis ; Transferring human motion from a source to a target person holds great potential for computer vision and graphics applications. A crucial step is to manipulate sequential future motion while retaining the appearance characteristics. Previous work has either relied on crafted 3D human models or trained a separate model specifically for each target person, which is not scalable in practice. This work studies a more general setting, in which we aim to learn a single model to parsimoniously transfer motion from a source video to any target person given only one image of the person, named the Collaborative Parsing-Flow Network (CPF-Net). The paucity of information regarding the target person makes the task particularly challenging to faithfully preserve the appearance in varying designated poses. To address this issue, CPF-Net integrates the structured human parsing and the appearance flow to guide the realistic foreground synthesis, which is merged into the background by a spatio-temporal fusion module. In particular, CPF-Net decouples the problem into stages of human parsing sequence generation, foreground sequence generation and final video generation. The human parsing generation stage captures both the pose and the body structure of the target. The appearance flow is beneficial to keep details in the synthesized frames. The integration of human parsing and appearance flow effectively guides the generation of video frames with realistic appearance. Finally, the dedicatedly designed fusion network ensures the temporal coherence. We further collect a large set of human dancing videos to push forward this research field. Both quantitative and qualitative results show our method substantially improves over previous approaches and is able to generate appealing and photo-realistic target videos given any input person image. All source code and the dataset will be released at https://github.com/xiezhy6/CPF-Net.
Generative Adversarial Network (GAN) and Enhanced Root Mean Square Error (ERMSE) Deep Learning for Stock Price Movement Prediction ; The prediction of stock price movement direction is significant in financial circles and academia. Stock prices contain complex, incomplete, and fuzzy information, which makes predicting their development trend an extremely difficult task. Predicting and analysing financial data is a nonlinear, time-dependent problem. With rapid developments in machine learning and deep learning, this task can be performed more effectively by a purposely designed network. This paper aims to improve prediction accuracy and minimize forecasting error loss through a deep learning architecture based on Generative Adversarial Networks. We propose a generic model consisting of a Phase-space Reconstruction (PSR) method for reconstructing price series and a Generative Adversarial Network (GAN), a combination of two neural networks, a Long Short-Term Memory (LSTM) network as the generative model and a Convolutional Neural Network (CNN) as the discriminative model, trained adversarially to forecast the stock market. The LSTM generates new instances based on historical basic indicator information, and the CNN then estimates whether the data were predicted by the LSTM or are real. It was found that the GAN performed well on the enhanced root mean square error compared to the LSTM alone, as it was 4.35% more accurate in predicting the direction and reduced the processing time and RMSE by 78 s and 0.029, respectively. The proposed system concentrates on minimizing the root mean square error and processing time while improving the direction prediction accuracy, and provides a better result in the accuracy of the stock index.
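The generator-discriminator pairing described here can be sketched in PyTorch as follows; the layer sizes and structure are arbitrary placeholders for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """LSTM that maps a window of historical indicators to the next price."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (B, T, F) indicator window
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predicted next value, (B, 1)

class Discriminator(nn.Module):
    """1D CNN judging whether a price sequence ends in a real value or a
    generator prediction (the real/fake value is appended to the window)."""
    def __init__(self, seq_len):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(16 * seq_len, 1))

    def forward(self, seq):               # seq: (B, 1, seq_len)
        return self.net(seq)              # real/fake logit

G, D = Generator(n_features=8), Discriminator(seq_len=31)  # 30 steps + 1
```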
Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence ; A major challenge in modern machine learning is theoretically understanding the generalization properties of overparameterized models. Many existing tools rely on uniform convergence (UC), a property that, when it holds, guarantees that the test loss will be close to the training loss, uniformly over a class of candidate models. Nagarajan and Kolter (2019) show that in certain simple linear and neural-network settings, any uniform convergence bound will be vacuous, leaving open the question of how to prove generalization in settings where UC fails. Our main contribution is proving novel generalization bounds in two such settings, one linear, and one non-linear. We study the linear classification setting of Nagarajan and Kolter, and a quadratic ground-truth function learned via a two-layer neural network in the non-linear regime. We prove a new type of margin bound showing that, above a certain signal-to-noise threshold, any near-max-margin classifier will achieve almost no test loss in these two settings. Our results show that being near-max-margin is important: while any model that achieves at least a (1 - ε)-fraction of the max-margin generalizes well, a classifier achieving half of the max-margin may fail terribly. Building on the impossibility results of Nagarajan and Kolter, under slightly stronger assumptions, we show that one-sided UC bounds and classical margin bounds will fail on near-max-margin classifiers. Our analysis provides insight into why memorization can coexist with generalization: we show that in this challenging regime where generalization occurs but UC fails, near-max-margin classifiers simultaneously contain some generalizable components and some overfitting components that memorize the data. The presence of the overfitting components is enough to preclude UC, but the near-extremal margin guarantees that sufficient generalizable components are present.
Freeform Lesion Synthesis Using a Partial Convolution Generative Adversarial Network for Enhanced Deep Learning Liver Tumor Segmentation ; Automatic deep learning segmentation models have been shown to improve both segmentation efficiency and accuracy. However, training a robust segmentation model requires a considerably large number of labeled training samples, which may be impractical. This study aimed to develop a deep learning framework for generating synthetic lesions that can be used to enhance network training. The lesion synthesis network is a modified generative adversarial network (GAN). Specifically, we innovated a partial convolution strategy to construct a U-Net-like generator. The discriminator is designed using Wasserstein GAN with gradient penalty and spectral normalization. A mask generation method based on principal component analysis was developed to model various lesion shapes. The generated masks are then converted into liver lesions through the lesion synthesis network. The lesion synthesis framework was evaluated for lesion textures, and the synthetic lesions were used to train a lesion segmentation network to further validate the effectiveness of this framework. All the networks are trained and tested on the public dataset from LiTS. The synthetic lesions generated by the proposed approach have very similar histogram distributions compared to the real lesions for the two employed texture parameters, GLCM-energy and GLCM-correlation. The Kullback-Leibler divergences of GLCM-energy and GLCM-correlation were 0.01 and 0.10, respectively. Including the synthetic lesions in the tumor segmentation network improved the segmentation Dice performance of the U-Net significantly, from 67.3% to 71.4% (p < 0.05). Meanwhile, the volume precision and sensitivity improved from 74.6% to 76.0% (p = 0.23) and from 66.1% to 70.9% (p < 0.01), respectively. The synthetic data significantly improve the segmentation performance.
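A partial convolution layer, the key ingredient of the generator here, convolves only the valid (non-hole) pixels, renormalizes each window by its number of valid inputs, and propagates an updated mask. The following is a minimal sketch of the general technique (bias handling simplified, so construct with bias=False), not the paper's exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """Mask-aware convolution: holes contribute nothing, outputs are
    rescaled by the count of valid pixels under each window, and the
    mask shrinks as holes get filled."""
    def forward(self, x, mask):           # x: (B, C, H, W); mask: (B, 1, H, W)
        with torch.no_grad():
            ones = torch.ones(1, 1, *self.kernel_size, device=x.device)
            valid = F.conv2d(mask, ones, stride=self.stride,
                             padding=self.padding)   # valid pixels per window
        out = super().forward(x * mask)
        window = self.kernel_size[0] * self.kernel_size[1]
        out = out * (window / valid.clamp(min=1.0))  # renormalize
        return out, (valid > 0).float()              # features, updated mask

pconv = PartialConv2d(1, 8, kernel_size=3, padding=1, bias=False)
img, hole_mask = torch.randn(2, 1, 64, 64), torch.ones(2, 1, 64, 64)
hole_mask[:, :, 20:40, 20:40] = 0          # simulate a free-form hole
feat, new_mask = pconv(img, hole_mask)
```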
SongDriver: Real-time Music Accompaniment Generation without Logical Latency nor Exposure Bias ; Real-time music accompaniment generation has a wide range of applications in the music industry, such as music education and live performances. However, automatic real-time music accompaniment generation is still understudied and often faces a trade-off between logical latency and exposure bias. In this paper, we propose SongDriver, a real-time music accompaniment generation system without logical latency or exposure bias. Specifically, SongDriver divides one accompaniment generation task into two phases: (1) the arrangement phase, where a Transformer model first arranges chords for input melodies in real time, and caches the chords for the next phase instead of playing them out; (2) the prediction phase, where a CRF model generates playable multi-track accompaniments for the coming melodies based on the previously cached chords. With this two-phase strategy, SongDriver directly generates the accompaniment for the upcoming melody, achieving zero logical latency. Furthermore, when predicting chords for a time step, SongDriver refers to the cached chords from the first phase rather than its previous predictions, which avoids the exposure bias problem. Since the input length is often constrained under real-time conditions, another potential problem is the loss of long-term sequential information. To make up for this disadvantage, we extract four musical features from a long-term music piece before the current time step as global information. In the experiments, we train SongDriver on some open-source datasets and on an original aiSong dataset built from Chinese-style modern pop music scores. The results show that SongDriver outperforms existing state-of-the-art (SOTA) models on both objective and subjective metrics, meanwhile significantly reducing the physical latency.
WikiDes: A Wikipedia-Based Dataset for Generating Short Descriptions from Paragraphs ; As free online encyclopedias with massive volumes of content, Wikipedia and Wikidata are key to many Natural Language Processing (NLP) tasks, such as information retrieval, knowledge base building, machine translation, text classification, and text summarization. In this paper, we introduce WikiDes, a novel dataset to generate short descriptions of Wikipedia articles for the problem of text summarization. The dataset consists of over 80k English samples on 6987 topics. We set up a two-phase summarization method, description generation (Phase I) and candidate ranking (Phase II), as a strong approach that relies on transfer and contrastive learning. For description generation, T5 and BART show their superiority compared to other small-scale pre-trained models. By applying contrastive learning with the diverse input from beam search, the metric-fusion-based ranking models outperform the direct description generation models significantly (by up to 22 ROUGE) in both the topic-exclusive and the topic-independent splits. Furthermore, the outcome descriptions in Phase II are supported by human evaluation, being chosen in over 45.33% of cases, compared to 23.66% in Phase I, against the gold descriptions. In terms of sentiment analysis, the generated descriptions cannot effectively capture all sentiment polarities from the paragraphs, whereas the gold descriptions do this better. The automatic generation of new descriptions reduces the human effort in creating them and enriches Wikidata-based knowledge graphs. Our paper shows a practical impact on Wikipedia and Wikidata since there are thousands of missing descriptions. Finally, we expect WikiDes to be a useful dataset for related works in capturing salient information from short paragraphs. The curated dataset is publicly available at https://github.com/declare-lab/WikiDes.
Emotion-Selectable End-to-End Text-based Speech Editing ; Text-based speech editing allows users to edit speech by intuitively cutting, copying, and pasting text to speed up the process of editing speech. In previous work, CampNet (context-aware mask prediction network) was proposed to realize text-based speech editing, significantly improving the quality of edited speech. This paper aims at a new task: adding an emotional effect to the edited speech during text-based speech editing, to make the generated speech more expressive. To achieve this task, we propose EmoCampNet (emotion CampNet), which can provide the option of emotional attributes for the generated speech in text-based speech editing and has the one-shot ability to edit unseen speakers' speech. Firstly, we propose an end-to-end emotion-selectable text-based speech editing model. The key idea of the model is to control the emotion of the generated speech by introducing additional emotion attributes based on the context-aware mask prediction network. Secondly, to prevent the emotion of the generated speech from being interfered with by the emotional components in the original speech, a neutral content generator is proposed to remove the emotion from the original speech, which is optimized by the generative adversarial framework. Thirdly, two data augmentation methods are proposed to enrich the emotional and pronunciation information in the training set, which enables the model to edit unseen speakers' speech. The experimental results show that (1) EmoCampNet can effectively control the emotion of the generated speech in the process of text-based speech editing and can edit unseen speakers' speech; (2) detailed ablation experiments further prove the effectiveness of the emotion selectivity and the data augmentation methods. The demo page is available at https://hairuo55.github.io/EmoCampNet.
Identity-driven Three-Player Generative Adversarial Network for Synthetic-based Face Recognition ; Many of the commonly used datasets for face recognition development are collected from the internet without proper user consent. Due to the increasing focus on privacy in social and legal frameworks, the use and distribution of these datasets are being restricted and strongly questioned. These databases, which have a realistically high variability of data per identity, have enabled the success of face recognition models. To build on this success and to align with privacy concerns, synthetic databases, consisting purely of synthetic persons, are increasingly being created and used in the development of face recognition solutions. In this work, we present a three-player generative adversarial network (GAN) framework, namely IDnet, that enables the integration of identity information into the generation process. The third player in our IDnet aims at forcing the generator to learn to generate identity-separable face images. We empirically proved that our IDnet synthetic images are of higher identity discrimination in comparison to the conventional two-player GAN, while maintaining a realistic intra-identity variation. We further studied the identity link between the authentic identities used to train the generator and the generated synthetic identities, showing very low similarities between these identities. We demonstrated the applicability of our IDnet data in training face recognition models by evaluating these models on a wide set of face recognition benchmarks. In comparison to the state-of-the-art works in synthetic-based face recognition, our solution achieved comparable results to a recent rendering-based approach and outperformed all existing GAN-based approaches. The training code and the synthetic face image dataset are publicly available at https://github.com/fdbtrs/SyntheticFaceRecognition.
Generative Flow Network for Listwise Recommendation ; Personalized recommender systems fulfill the daily demands of customers and boost online businesses. The goal is to learn a policy that can generate a list of items that matches the user's demand or interest. While most existing methods learn a pointwise scoring model that predicts the ranking score of each individual item, recent research shows that the listwise approach can further improve the recommendation quality by modeling the intra-list correlations of items that are exposed together. This has motivated the recent list reranking and generative recommendation approaches that optimize the overall utility of the entire list. However, it is challenging to explore the combinatorial space of list actions, and existing methods that use cross-entropy loss may suffer from low diversity issues. In this work, we aim to learn a policy that can generate sufficiently diverse item lists for users while maintaining high recommendation quality. The proposed solution, GFN4Rec, is a generative method that takes the insight of the flow network to ensure the alignment between the list generation probability and its reward. The key advantages of our solution are the log-scale reward matching loss that intrinsically improves the generation diversity, and the autoregressive item selection model that captures the items' mutual influences while capturing the future reward of the list. As validation of our method's effectiveness and its superior diversity during active exploration, we conduct experiments on simulated online environments as well as an offline evaluation framework for two real-world datasets.
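The log-scale reward matching idea resembles the trajectory balance objective used for GFlowNets. The sketch below shows one plausible form of such a loss for list generation; the names and the exact formulation are assumptions for illustration, not necessarily GFN4Rec's objective.

```python
import torch

def trajectory_balance_loss(log_z, log_probs, reward, beta=1.0):
    """GFlowNet-style trajectory balance for list generation: push the
    autoregressive generation probability of a list to be proportional
    to its (temperature-scaled) reward.
    log_z: learned scalar log-partition; log_probs: (B, K) log-prob of
    each chosen item; reward: (B,) positive list rewards."""
    log_p_traj = log_probs.sum(dim=1)                    # log P(list)
    log_r = beta * torch.log(reward.clamp(min=1e-8))     # log-scale reward
    return ((log_z + log_p_traj - log_r) ** 2).mean()

log_z = torch.nn.Parameter(torch.zeros(()))   # optimized jointly with policy
```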
PromptTTS 2: Describing and Generating Voices with Text Prompt ; Speech conveys more information than just text, as the same word can be uttered in various voices to convey diverse information. Compared to traditional text-to-speech (TTS) methods relying on speech prompts (reference speech) for voice variability, using text prompts (descriptions) is more user-friendly since speech prompts can be hard to find or may not exist at all. TTS approaches based on the text prompt face two challenges: (1) the one-to-many problem, where not all details about voice variability can be described in the text prompt, and (2) the limited availability of text prompt datasets, where vendors and a large cost of data labeling are required to write text prompts for speech. In this work, we introduce PromptTTS 2 to address these challenges with a variation network to provide variability information of voice not captured by text prompts, and a prompt generation pipeline to utilize large language models (LLMs) to compose high-quality text prompts. Specifically, the variation network predicts the representation extracted from the reference speech (which contains full information about the voice) based on the text prompt representation. For the prompt generation pipeline, it generates text prompts for speech with a speech understanding model to recognize voice attributes (e.g., gender, speed) from speech and a large language model to formulate the text prompt based on the recognition results. Experiments on a large-scale (44K hours) speech dataset demonstrate that, compared to the previous works, PromptTTS 2 generates voices more consistent with text prompts and supports the sampling of diverse voice variability, thereby offering users more choices on voice generation. Additionally, the prompt generation pipeline produces high-quality prompts, eliminating the large labeling cost. The demo page of PromptTTS 2 is available online at https://speechresearch.github.io/prompttts2.
The Doppler peaks from a generic defect ; We investigate which of the exotic Doppler peak features found for textures and cosmic strings are generic novelties pertaining to defects. We find that the "out of phase" texture signature is an accident. Generic defects, when they generate a secondary peak structure similar to inflation, apply to it an additive shift. It is not necessary for this shift to be "out of phase". We also show which factors are responsible for the absence of secondary oscillations found for cosmic strings. Within this general analysis we finally consider the conditions under which topological defects and inflation can be confused. It is argued that only Omega = 1 inflation and a defect with a horizon-size coherence length have a chance of being confused. Any other inflationary or defect model differs distinctly. To appear in the proceedings of the XXXIst Moriond meeting, "Microwave Background Anisotropies".
Probing the gravitational well: No supernova explosion in spherical symmetry with general relativistic Boltzmann neutrino transport ; We report on the stellar core collapse, bounce, and post-bounce evolution of a 13 solar mass star in a self-consistent general relativistic spherically symmetric simulation based on Boltzmann neutrino transport. We conclude that approximations to exact neutrino transport and the omission of general relativistic effects were not alone responsible for the failure of numerous preceding attempts to model supernova explosions in spherical symmetry. Compared to simulations in Newtonian gravity, the general relativistic simulation results in a smaller shock radius. We argue, however, that the higher neutrino luminosities and rms energies in the general relativistic case could lead to a larger supernova explosion energy.
Black hole versus cosmological horizon entropy ; The generalized second law of thermodynamics states that entropy always increases when all event horizons are attributed an entropy proportional to their area. We test the generalized second law by investigating the change in entropy when dust, radiation and black holes cross a cosmological event horizon. We generalize for flat, open and closed Friedmann-Robertson-Walker universes by using numerical calculations to determine the cosmological horizon evolution. In most cases the loss of entropy from within the cosmological horizon is more than balanced by an increase in cosmological event horizon entropy, maintaining the validity of the generalized second law of thermodynamics. However, an intriguing set of open universe models shows an apparent entropy decrease when black holes disappear over the cosmological event horizon. We anticipate that this apparent violation of the generalized second law will disappear when solutions are available for black holes embedded in arbitrary backgrounds.
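The numerical determination of the cosmological event horizon amounts to an integral over the future expansion history, R = a(t) ∫ c dt'/a(t') from t to infinity. A toy flat Lambda-CDM sketch follows; the cosmological parameters are illustrative placeholders, not the paper's models.

```python
import numpy as np
from scipy.integrate import quad

# toy flat Lambda-CDM parameters (assumed): H0 in 1/Gyr, c = 1
H0, Om, Ol = 0.07, 0.3, 0.7

def adot(a):
    """Friedmann equation: da/dt = H0 * a * sqrt(Om/a^3 + Ol)."""
    return H0 * a * np.sqrt(Om / a**3 + Ol)

def event_horizon(a_now):
    """Proper radius of the cosmological event horizon at scale factor
    a_now: R = a_now * integral of da / (a * adot(a)) from a_now to inf."""
    chi, _ = quad(lambda a: 1.0 / (a * adot(a)), a_now, np.inf)
    return a_now * chi

print(event_horizon(1.0))   # in light-Gyr for these units
```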
Monopole gravitational waves from relativistic fireballs driving gamma-ray bursts ; Einstein's general relativity predicts that pressure, and stresses in general, play a similar role to energy density in generating gravity. The source of the gravitational field, the active gravitational mass density, sometimes referred to as Whittaker's mass density, is not conserved; hence its changes can propagate as monopole gravitational waves. Such waves can be generated only by astrophysical sources with varying gravitational mass. Here we show that relativistic fireballs, considered in modelling gamma-ray burst phenomena, are likely to radiate monopole gravitational waves from high-pressure plasma with varying Whittaker's mass. Also, the ejection of a significant amount of the initial mass-energy of the progenitor contributes to the monopole gravitational radiation. We identify monopole waves with the h_{11} = h_{22} waves of Eddington's classification, which propagate in the z-direction together with the energy carried by massless fields. We show that the monopole waves satisfy Einstein's equations, with a common stress-energy tensor for massless fields. The polarization mode of monopole waves is Phi_{22}, i.e., these are perpendicular waves which induce changes of the radius of a circle of test particles only (a breathing mode). The astrophysical importance of monopole gravitational waves is discussed.
On the structure of line-driven winds near black holes ; A general physical mechanism of the formation of line-driven winds in the vicinity of strong gravitational field sources is investigated in the framework of General Relativity. We argue that gravitational redshifting should be taken into account to model such outflows. The generalization of the Sobolev approximation in the framework of General Relativity is presented. We consider all processes in the metric of a non-rotating Schwarzschild black hole. The radiation force that is due to the absorption of the radiation flux in lines is derived. It is demonstrated that if gravitational redshifting is taken into account, the radiation force becomes a function of the local velocity gradient, as in the standard line-driven wind theory, and of the gradient of g_{00}. We derive a general relativistic equation of motion describing such a flow. A solution of the equation of motion is obtained and confronted with that obtained from the Castor, Abbott & Klein (CAK) theory. It is shown that the proposed mechanism could make an important contribution to the formation of line-driven outflows from compact objects.
Can N-body systems generate periodic gravitational waves? ; N-body gravitating systems have not been considered as sources of periodic gravitational waves because of their chaotic orbits when N = 3 or more. We employ a figure-eight orbit as a specific model for a 3-body system in order to illustrate that some triple stars are capable of generating periodic waves. This illustration would imply that a certain class of N-body gravitating systems may be relevant to gravitational wave generation. We show also that the total angular momentum of this 3-body system is not carried away by gravitational waves. A waveform generated by this system is volcano-shaped and thus different from that of a binary system. Finally, by evaluating the radiation reaction time scale, we give an order-of-magnitude estimate of merging event rates. The estimate suggests that figure-eight sources, which require carefully prepared initial states, may be too rare to detect.
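The figure-eight orbit is easy to reproduce numerically from the standard Chenciner-Montgomery initial conditions (G = m = 1 units); a minimal integration sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Chenciner-Montgomery figure-eight initial conditions (G = m = 1)
x0 = np.array([-0.97000436, 0.24308753, 0.97000436, -0.24308753, 0.0, 0.0])
v3 = np.array([-0.93240737, -0.86473146])
v0 = np.concatenate([-v3 / 2, -v3 / 2, v3])   # momenta sum to zero

def rhs(t, y):
    """Planar Newtonian 3-body equations of motion."""
    pos, vel = y[:6].reshape(3, 2), y[6:]
    acc = np.zeros((3, 2))
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += d / np.linalg.norm(d) ** 3
    return np.concatenate([vel, acc.ravel()])

# integrate one full period of the periodic orbit
sol = solve_ivp(rhs, (0.0, 6.32591398), np.concatenate([x0, v0]),
                rtol=1e-10, atol=1e-10)
```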
Trainable Methods for Surface Natural Language Generation ; We present three systems for surface natural language generation that are trainable from annotated corpora. The first two systems, called NLG1 and NLG2, require a corpus marked only with domain-specific semantic attributes, while the last system, called NLG3, requires a corpus marked with both semantic attributes and syntactic dependency information. All systems attempt to produce a grammatical natural language phrase from a domain-specific semantic representation. NLG1 serves as a baseline system and uses phrase frequencies to generate a whole phrase in one step, while NLG2 and NLG3 use maximum entropy probability models to individually generate each word in the phrase. The systems NLG2 and NLG3 learn to determine both the word choice and the word order of the phrase. We present experiments in which we generate phrases to describe flights in the air travel domain.
Extended moduli spaces and the Kan construction. II. Lattice gauge theory ; Let Y be a CW-complex with a single 0-cell, K its Kan group, a model for the loop space of Y, and let G be a compact, connected Lie group. We give an explicit finite-dimensional construction of generators of the equivariant cohomology of the geometric realization of the cosimplicial manifold Hom(K, G), and hence of the space Map_o(Y, BG) of based maps from Y to the classifying space BG. For a smooth manifold Y, this may be viewed as a rigorous approach to lattice gauge theory, and we show that it then yields: (i) when dim Y = 2, equivariant de Rham representatives of generators of the equivariant cohomology of twisted representation spaces of the fundamental group of a closed surface, including generators for moduli spaces of semistable holomorphic vector bundles on complex curves, so that, in particular, the known structure of a stratified symplectic space results; (ii) when dim Y = 3, equivariant cohomology generators including the Chern-Simons function; (iii) when dim Y = 4, the generators of the relevant equivariant cohomology, from which, for example, Donaldson polynomials are obtained by evaluation against suitable fundamental classes corresponding to moduli spaces of ASD connections.
(1+1)-Dimensional Methods for General Relativity ; This is an article contributed to the Brill Festschrift, in honor of the 60th birthday of Prof. D.R. Brill, which will appear in Vol. 2 of the Proceedings of the International Symposia on Directions in General Relativity. In this article we present the (1+1)-dimensional method for studying general relativity in 4 dimensions. We first discuss the general formalism, and subsequently draw attention to the algebraically special class of spacetimes, following the Petrov classification. It is shown that this class of spacetimes can be described by the (1+1)-dimensional Yang-Mills action interacting with matter fields, with the spatial diffeomorphisms of the 2-surface as the gauge symmetry. The constraint appears polynomial in part, whereas the non-polynomial part is of a non-linear sigma model type in (1+1) dimensions. It is also shown that the representations of w_∞-gravity appear naturally as special cases of this description, and we discuss briefly the w_∞-geometry in terms of the fibre bundle.
Geometric Interpretation and Classification of Global Solutions in Generalized Dilaton Gravity ; Two-dimensional gravity with torsion is proved to be equivalent to special types of generalized 2d dilaton gravity. For example, in one version, the dilaton field is shown to be expressible by the extra scalar curvature, constructed for an independent Lorentz connection corresponding to a nontrivial torsion. Elimination of that dilaton field yields an equivalent torsionless theory, non-polynomial in the curvature. These theories, although locally equivalent, exhibit quite different global properties of the general solution. We discuss the example of a torsionless dilaton theory equivalent to the R^2 + T^2 model. Each global solution of this model is shown to split into a set of global solutions of generalized dilaton gravity. In contrast to the theory with torsion, the equivalent dilaton theory exhibits solutions which are asymptotically flat in special ranges of the parameters. In the simplest case of ordinary dilaton gravity, we clarify the well-known problem of removing the Schwarzschild singularity by a field redefinition.
Generalized symmetries and invariant matter couplings in two-dimensional dilaton gravity ; New features of the generalized symmetries of generic two-dimensional dilaton models of gravity are presented, and invariant gravity-matter couplings are introduced. We show that there is a continuum of Noether symmetries, which contains half of a de Witt algebra. Two of these symmetries are area-preserving transformations. We show that gravity-matter couplings which are invariant under area-preserving transformations only contribute to the dynamics of the dilaton-gravity sector with a reshaping of the dilaton potential. The interaction with matter by means of invariant metrics is also considered. We show in a constructive way that there are metrics which are invariant under two of the symmetries. The most general metrics and minimal couplings that fulfil this condition are found.
A Classical Sequential Growth Dynamics for Causal Sets ; Starting from certain causality conditions and a discrete form of general covariance, we derive a very general family of classically stochastic, sequential growth dynamics for causal sets. The resulting theories provide a relatively accessible "half-way house" to full quantum gravity that possibly contains the latter's classical limit (general relativity). Because they can be expressed in terms of state models for an assembly of Ising spins living on the relations of the causal set, these theories also illustrate how non-gravitational matter can arise dynamically from the causal set without having to be built in at the fundamental level. Additionally, our results bring into focus some interpretive issues of importance for causal set dynamics, and for quantum gravity more generally.
Relational evolution of the degrees of freedom of generally covariant quantum theories ; We study the classical and quantum dynamics of generally covariant theories with a vanishing Hamiltonian and with a finite number of degrees of freedom. In particular, the geometric meaning of the full solution of the relational evolution of the degrees of freedom is displayed, which means determining the total number of evolving constants of motion required. A method to find evolving constants is also proposed. The generalized Heisenberg picture needs M time variables, as opposed to the Heisenberg picture of standard quantum mechanics, where one time variable t is enough. As an application, we study the parameterized harmonic oscillator and the SL(2,R) model with one physical degree of freedom that mimics the constraint structure of general relativity, where a Schrödinger equation emerges in its quantum dynamics.