Meta-Auxiliary Learning for Adaptive Human Pose Prediction ; Predicting high-fidelity future human poses, from a historically observed sequence, is decisive for intelligent robots to interact with humans. Deep end-to-end learning approaches, which typically train a generic pretrained model on external datasets and then apply it directly to all test samples, have emerged as the dominant solution. Despite encouraging progress, they remain non-optimal, as the unique properties (e.g., motion style, rhythm) of a specific sequence cannot be adapted to. More generally, at test time, once the model encounters unseen motion categories (out-of-distribution), the predicted poses tend to be unreliable. Motivated by this observation, we propose a novel test-time adaptation framework that leverages two self-supervised auxiliary tasks to help the primary forecasting network adapt to the test sequence. In the testing phase, our model can adjust its parameters with several gradient updates to improve generation quality. However, due to catastrophic forgetting, both auxiliary tasks typically struggle to automatically provide the desired positive incentives for the final prediction performance. For this reason, we also propose a meta-auxiliary learning scheme for better adaptation. In the general setup, our approach obtains higher accuracy, and under two new experimental designs for out-of-distribution data (unseen subjects and categories), it achieves significant improvements.
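As a rough illustration of the test-time adaptation loop described above, the sketch below takes a few auxiliary-task gradient steps on a copy of the generic model before forecasting; `model`, `aux_loss_1`, and `aux_loss_2` are hypothetical stand-ins, not the authors' code.

```python
import copy
import torch

def adapt_and_predict(model, observed_seq, aux_loss_1, aux_loss_2, steps=5, lr=1e-4):
    adapted = copy.deepcopy(model)              # keep the generic weights intact
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):                      # a few gradient updates per test sequence
        loss = aux_loss_1(adapted, observed_seq) + aux_loss_2(adapted, observed_seq)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                       # forecast with the sequence-specific weights
        return adapted(observed_seq)
```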
Frequency Decomposition to Tap the Potential of Single Domain for Generalization ; Domain generalization (DG), aiming at models able to work on multiple unseen domains, is a must-have characteristic of general artificial intelligence. DG based on single-source-domain training data is more challenging due to the lack of comparable information to help identify domain-invariant features. In this paper, we posit that the domain-invariant features could already be contained in the single-source-domain training samples; the task is then to find proper ways to extract such domain-invariant features from these samples. We assume that the domain-invariant features are closely related to frequency, and propose a new method that learns through multiple frequency domains. The key idea is to divide the frequency domain of each original image into multiple sub-domains and to learn features in each sub-domain with a designed two-branch network. In this way, the model is forced to learn features from more samples of the specifically limited spectrum, which increases the possibility of obtaining domain-invariant features that might previously have been obscured by easily learned features. Extensive experimental investigation reveals that (1) frequency decomposition can help the model learn features that are difficult to learn, and (2) the proposed method outperforms the state-of-the-art methods for single-source domain generalization.
Domain Generalization for Mammographic Image Analysis with Contrastive Learning ; Deep learning has been shown to effectively address several image analysis tasks in the computer-aided diagnosis scheme for mammography. Training an efficacious deep learning model requires large data with diverse styles and qualities. The diversity of data often comes from the use of various scanners from different vendors. In practice, however, it is impractical to collect a sufficient amount of diverse data for training. To this end, a novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability. Specifically, a multi-style and multi-view unsupervised self-learning scheme is carried out to seek robust feature embeddings against style diversity as a pretrained model. Afterward, the pretrained network is further fine-tuned for the downstream tasks, e.g., mass detection, matching, BI-RADS rating, and breast density classification. The proposed method has been evaluated extensively and rigorously with mammograms from various vendor-style domains and several public datasets. The experimental results suggest that the proposed domain generalization method can effectively improve the performance of four mammographic image tasks on data from both seen and unseen domains, and outperforms many state-of-the-art (SOTA) generalization methods.
FIANCEE: Faster Inference of Adversarial Networks via Conditional Early Exits ; Generative DNNs are a powerful tool for image synthesis, but they are limited by their computational load. On the other hand, given a trained model and a task, e.g., face generation within a range of characteristics, the output image quality will be unevenly distributed among images with different characteristics. It follows that we might restrain the model's complexity on some instances while maintaining high quality. We propose a method for diminishing computations by adding so-called early exit branches to the original architecture, and dynamically switching the computational path depending on how difficult it will be to render the output. We apply our method on two different SOTA models performing generative tasks (generation from a semantic map, and cross-reenactment of face expressions), showing it is able to output images with custom lower-quality thresholds. For a threshold of LPIPS ≤ 0.1, we diminish their computations by up to a half. This is especially relevant for real-time applications such as synthesis of faces, when quality loss needs to be contained, but most of the inputs need fewer computations than the complex instances.
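A minimal sketch of the conditional early-exit idea, assuming a hypothetical difficulty predictor and per-exit rendering heads (illustrative only, not the FIANCEE architecture itself):

```python
import torch.nn as nn

class EarlyExitGenerator(nn.Module):
    def __init__(self, blocks, exit_heads, difficulty, thresholds):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)          # generator stages
        self.exit_heads = nn.ModuleList(exit_heads)  # render an image from each stage
        self.difficulty = difficulty                 # predicts how hard the input is
        self.thresholds = thresholds                 # one difficulty threshold per exit

    def forward(self, x):
        score = self.difficulty(x)                   # predicted rendering difficulty
        h = x
        for block, head, thr in zip(self.blocks, self.exit_heads, self.thresholds):
            h = block(h)
            if score <= thr:                         # easy input: stop early, skip later blocks
                return head(h)
        return self.exit_heads[-1](h)                # hardest inputs use the full network
```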
Fully Autonomous Programming with Large Language Models ; Current approaches to program synthesis with Large Language Models (LLMs) exhibit a near-miss syndrome: they tend to generate programs that semantically resemble the correct answer (as measured by text similarity metrics or human evaluation), but achieve a low or even zero accuracy as measured by unit tests, due to small imperfections such as the wrong input or output format. This calls for an approach known as Synthesize, Execute, Debug (SED), whereby a draft of the solution is generated first, followed by a program repair phase addressing the failed tests. To effectively apply this approach to instruction-driven LLMs, one needs to determine which prompts perform best as instructions for LLMs, as well as strike a balance between repairing unsuccessful programs and replacing them with newly generated ones. We explore these trade-offs empirically, comparing replace-focused, repair-focused, and hybrid debug strategies, as well as different template-based and model-based prompt-generation techniques. We use OpenAI Codex as the LLM and Program Synthesis Benchmark 2 as a database of problem descriptions and tests for evaluation. The resulting framework outperforms both conventional usage of Codex without the repair phase and traditional genetic programming approaches.
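The Synthesize-Execute-Debug trade-off between repairing and replacing drafts can be sketched as follows; `llm_generate`, `llm_repair`, and `run_tests` are placeholder interfaces, not a specific API.

```python
def sed(problem, tests, max_iters=10, repair_budget=3):
    program = llm_generate(problem)                     # synthesize a first draft
    repairs_left = repair_budget
    for _ in range(max_iters):
        passed, failures = run_tests(program, tests)    # execute against unit tests
        if passed:
            return program
        if repairs_left > 0:                            # repair-focused step
            program = llm_repair(problem, program, failures)
            repairs_left -= 1
        else:                                           # replace-focused step: start over
            program = llm_generate(problem)
            repairs_left = repair_budget
    return program
```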
Generalizing Neural Human Fitting to Unseen Poses With Articulated SE(3) Equivariance ; We address the problem of fitting a parametric human body model (SMPL) to point cloud data. Optimization-based methods require careful initialization and are prone to becoming trapped in local optima. Learning-based methods address this but do not generalize well when the input pose is far from those seen during training. For rigid point clouds, remarkable generalization has been achieved by leveraging SE(3)-equivariant networks, but these methods do not work on articulated objects. In this work we extend this idea to human bodies and propose ArtEq, a novel part-based SE(3)-equivariant neural architecture for SMPL model estimation from point clouds. Specifically, we learn a part detection network by leveraging local SO(3) invariance, and regress shape and pose using articulated SE(3) shape-invariant and pose-equivariant networks, all trained end-to-end. Our novel pose regression module leverages the permutation-equivariant property of self-attention layers to preserve rotational equivariance. Experimental results show that ArtEq generalizes to poses not seen during training, outperforming state-of-the-art methods by 44% in terms of body reconstruction accuracy, without requiring an optimization refinement step. Furthermore, ArtEq is three orders of magnitude faster during inference than prior work and has 97.3% fewer parameters. The code and model are available for research purposes at https://arteq.is.tue.mpg.de.
CEIL: A General Classification-Enhanced Iterative Learning Framework for Text Clustering ; Text clustering, as one of the most fundamental challenges in unsupervised learning, aims at grouping semantically similar text segments without relying on human annotations. With the rapid development of deep learning, deep clustering has achieved significant advantages over traditional clustering methods. Despite their effectiveness, most existing deep text clustering methods rely heavily on representations pretrained in general domains, which may not be the most suitable solution for clustering in specific target domains. To address this issue, we propose CEIL, a novel Classification-Enhanced Iterative Learning framework for short text clustering, which aims at generally promoting clustering performance by introducing a classification objective to iteratively improve feature representations. In each iteration, we first adopt a language model to retrieve the initial text representations, from which the clustering results are collected using our proposed Category Disentangled Contrastive Clustering (CDCC) algorithm. After strict data filtering and aggregation processes, samples with clean category labels are retrieved, which serve as supervision information to update the language model with the classification objective via a prompt learning approach. Finally, the updated language model with improved representation ability is used to enhance clustering in the next iteration. Extensive experiments demonstrate that the CEIL framework significantly improves the clustering performance over iterations, and is generally effective on various clustering algorithms. Moreover, by incorporating CEIL on CDCC, we achieve state-of-the-art clustering performance on a wide range of short text clustering benchmarks, outperforming other strong baseline methods.
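The iteration described above (embed, cluster with CDCC, filter confident samples, update the language model with a classification objective) can be summarized in a short sketch with assumed interfaces, not the released CEIL code:

```python
def ceil(texts, language_model, cluster_cdcc, filter_clean, finetune_classify, iterations=5):
    for _ in range(iterations):
        embeddings = language_model.encode(texts)               # current representations
        labels = cluster_cdcc(embeddings)                       # CDCC clustering step
        clean_texts, clean_labels = filter_clean(texts, labels) # keep confident samples only
        # prompt-based classification fine-tuning on the pseudo-labels
        language_model = finetune_classify(language_model, clean_texts, clean_labels)
    return language_model, cluster_cdcc(language_model.encode(texts))
```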
Quantum Generative Adversarial Networks For Anomaly Detection In High Energy Physics ; The standard model SM of particle physics represents a theoretical paradigm for the description of the fundamental forces of nature. Despite its broad applicability, the SM does not enable the description of all physically possible events. The detection of events that cannot be described by the SM, which are typically referred to as anomalous, and the related potential discovery of exotic physical phenomena is a nontrivial task. The challenge becomes even greater with nextgeneration colliders that will produce even more events with additional levels of complexity. The additional data complexity motivates the search for unsupervised anomaly detection methods that do not require prior knowledge about the underlying models. In this work, we develop such a technique. More explicitly, we employ a quantum generative adversarial network to identify anomalous events. The method learns the background distribution from SM data and, then, determines whether a given event is characteristic for the learned background distribution. The proposed quantumpowered anomaly detection strategy is tested on proofofprinciple examples using numerical simulations and IBM Quantum processors. We find that the quantum generative techniques using ten times fewer training data samples can yield comparable accuracy to the classical counterpart for the detection of the Graviton and Higgs particles. Additionally, we empirically compute the capacity of the quantum model and observe an improved expressivity compared to its classical counterpart.
Beyond Classification Financial Reasoning in StateoftheArt Language Models ; Large Language Models LLMs, consisting of 100 billion or more parameters, have demonstrated remarkable ability in complex multistep reasoning tasks. However, the application of such generic advancements has been limited to a few fields, such as clinical or legal, with the field of financial reasoning remaining largely unexplored. To the best of our knowledge, the ability of LLMs to solve financial reasoning problems has never been dealt with, and whether it can be performed at any scale remains unknown. To address this knowledge gap, this research presents a comprehensive investigation into the potential application of LLMs in the financial domain. The investigation includes a detailed exploration of a range of subjects, including task formulation, synthetic data generation, prompting methods, and evaluation capability. Furthermore, the study benchmarks various GPT variants with parameter scales ranging from 2.8B to 13B, with and without instruction tuning, on diverse dataset sizes. By analyzing the results, we reveal that the ability to generate coherent financial reasoning first emerges at 6B parameters, and continues to improve with better instructiontuning or larger datasets. Additionally, the study provides a publicly accessible dataset named sFIOG SyntheticFinancial Investment Opinion Generation, consisting of 11,802 synthetic investment thesis samples, to support further research in the field of financial reasoning. Overall, this research seeks to contribute to the understanding of the efficacy of language models in the field of finance, with a particular emphasis on their ability to engage in sophisticated reasoning and analysis within the context of investment decisionmaking.
CodeIE Large Code Generation Models are Better FewShot Information Extractors ; Large language models LLMs pretrained on massive corpora have demonstrated impressive fewshot learning ability on many NLP tasks. A common practice is to recast the task into a texttotext format such that generative LLMs of natural language NLLLMs like GPT3 can be prompted to solve it. However, it is nontrivial to perform information extraction IE tasks with NLLLMs since the output of the IE task is usually structured and therefore is hard to be converted into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code CodeLLMs such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. In contrast to NLLLMs, we show that CodeLLMs can be wellaligned with these IE tasks by designing codestyle prompts and formulating these IE tasks as code generation tasks. Experiment results on seven benchmarks show that our method consistently outperforms finetuning moderatesize pretrained models specially designed for IE tasks e.g., UIE and prompting NLLLMs under fewshot settings. We further conduct a series of indepth analyses to demonstrate the merits of leveraging CodeLLMs for IE tasks.
A Review of Machine Learning Applications for the Proton Magnetic Resonance Spectroscopy Workflow ; This literature review presents a comprehensive overview of machine learning ML applications in proton magnetic resonance spectroscopy MRS. As the use of ML techniques in MRS continues to grow, this review aims to provide the MRS community with a structured overview of the stateoftheart methods. Specifically, we examine and summarize studies published between 2017 and 2023 from major journals in the magnetic resonance field. We categorize these studies based on a typical MRS workflow, including data acquisition, processing, analysis, and artificial data generation. Our review reveals that ML in MRS is still in its early stages, with a primary focus on processing and analysis techniques, and less attention given to data acquisition. We also found that many studies use similar model architectures, with little comparison to alternative architectures. Additionally, the generation of artificial data is a crucial topic, with no consistent method for its generation. Furthermore, many studies demonstrate that artificial data suffers from generalization issues when tested on invivo data. We also conclude that risks related to ML models should be addressed, particularly for clinical applications. Therefore, output uncertainty measures and model biases are critical to investigate. Nonetheless, the rapid development of ML in MRS and the promising results from the reviewed studies justify further research in this field.
What You See is What You Read Improving TextImage Alignment Evaluation ; Automatically determining whether a text and a corresponding image are semantically aligned is a significant challenge for visionlanguage models, with applications in generative texttoimage and imagetotext tasks. In this work, we study methods for automatic textimage alignment evaluation. We first introduce SeeTRUE a comprehensive evaluation set, spanning multiple datasets from both texttoimage and imagetotext generation tasks, with human judgements for whether a given textimage pair is semantically aligned. We then describe two automatic methods to determine alignment the first involving a pipeline based on question generation and visual question answering models, and the second employing an endtoend classification approach by finetuning multimodal pretrained models. Both methods surpass prior approaches in various textimage alignment tasks, with significant improvements in challenging cases that involve complex composition or unnatural images. Finally, we demonstrate how our approaches can localize specific misalignments between an image and a given text, and how they can be used to automatically rerank candidates in texttoimage generation.
Domain Generalization Deep Graph Transformation ; Graph transformation that predicts graph transition from one mode to another is an important and common problem. Despite much progress in developing advanced graph transformation techniques in recent years, the fundamental assumption typically required by machine-learning models, that the testing and training data follow the same distribution, does not always hold. As a result, domain generalization graph transformation, which predicts graphs not available in the training data, is under-explored, with multiple key challenges to be addressed, including (1) the extreme space complexity when training on all input-output mode combinations, (2) the difference of graph topologies between the input and the output modes, and (3) how to generalize the model to unseen target domains that are not in the training data. To fill the gap, we propose a multi-input, multi-output, hypernetwork-based graph neural network (MultiHyperGNN) that employs an encoder and a decoder to encode topologies of both input and output modes and semi-supervised link prediction to enhance the graph transformation task. Instead of training on all mode combinations, MultiHyperGNN preserves a constant space complexity with the encoder and the decoder produced by two novel hypernetworks. Comprehensive experiments show that MultiHyperGNN achieves superior performance over competing models in both prediction and domain generalization tasks.
Text2NeRF TextDriven 3D Scene Generation with Neural Radiance Fields ; Textdriven 3D scene generation is widely applicable to video gaming, film industry, and metaverse applications that have a large demand for 3D scenes. However, existing textto3D generation methods are limited to producing 3D objects with simple geometries and dreamlike styles that lack realism. In this work, we present Text2NeRF, which is able to generate a wide range of 3D scenes with complicated geometric structures and highfidelity textures purely from a text prompt. To this end, we adopt NeRF as the 3D representation and leverage a pretrained texttoimage diffusion model to constrain the 3D reconstruction of the NeRF to reflect the scene description. Specifically, we employ the diffusion model to infer the textrelated image as the content prior and use a monocular depth estimation method to offer the geometric prior. Both content and geometric priors are utilized to update the NeRF model. To guarantee textured and geometric consistency between different views, we introduce a progressive scene inpainting and updating strategy for novel view synthesis of the scene. Our method requires no additional training data but only a natural language description of the scene as the input. Extensive experiments demonstrate that our Text2NeRF outperforms existing methods in producing photorealistic, multiview consistent, and diverse 3D scenes from a variety of natural language prompts.
Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner ; Large pretrained multimodal models have demonstrated significant success in a range of downstream tasks, including image captioning, image-text retrieval, visual question answering (VQA), etc. However, many of these methods rely on image-text pairs collected from the web as pretraining data and unfortunately overlook the need for fine-grained feature alignment between vision and language modalities, which requires detailed understanding of images and language expressions. While integrating VQA and dense captioning (DC) into pretraining can address this issue, acquiring image-question-answer as well as image-location-caption triplets is challenging and time-consuming. Additionally, publicly available datasets for VQA and dense captioning are typically limited in scale due to manual data collection and labeling efforts. In this paper, we propose a novel method called Joint QA and DC Generation (JADE), which utilizes a pretrained multimodal model and easily-crawled image-text pairs to automatically generate and filter large-scale VQA and dense captioning datasets. We apply this method to the Conceptual Captions (CC3M) dataset to generate a new dataset called CC3M-QA-DC. Experiments show that when used for pretraining in a multi-task manner, CC3M-QA-DC can improve the performance with various backbones on various downstream tasks. Furthermore, our generated CC3M-QA-DC can be combined with larger image-text datasets (e.g., CC15M) and achieve competitive results compared with models using much more data. Code and dataset are available at https://github.com/johncaged/OPT_Questioner.
SGGAN: Fine Stereoscopic-Aware Generation for 3D Brain Point Cloud Upsampling from a Single Image ; In minimally-invasive brain surgeries with indirect and narrow operating environments, 3D brain reconstruction is crucial. However, as the accuracy requirements of some new minimally-invasive surgeries (such as brain-computer interface surgery) become higher and higher, the outputs of conventional 3D reconstruction, such as point clouds (PC), face the challenges that the sample points are too sparse and the precision is insufficient. On the other hand, there is a scarcity of high-density point cloud datasets, which makes it challenging to train models for direct reconstruction of high-density brain point clouds. In this work, a novel model named stereoscopic-aware graph generative adversarial network (SGGAN) with two stages is proposed to generate fine high-density PC conditioned on a single image. The Stage-I GAN sketches the primitive shape and basic structure of the organ based on the given image, yielding Stage-I point clouds. The Stage-II GAN takes the results from Stage-I and generates high-density point clouds with detailed features. The Stage-II GAN is capable of correcting defects and restoring the detailed features of the region of interest (ROI) through the upsampling process. Furthermore, a parameter-free-attention-based free-transforming module is developed to learn efficient features of the input while upholding a promising performance. Compared with existing methods, the SGGAN model shows superior performance in terms of visual quality, objective measurements, and performance in classification, as demonstrated by comprehensive results measured by several evaluation metrics including PC-to-PC error and Chamfer distance.
Flexible Grammar-Based Constrained Decoding for Language Models ; LLMs have shown impressive few-shot performance across many tasks. However, they still struggle when it comes to reliably generating complex output structures, such as those required for information extraction. This limitation stems from the fact that LLMs, without fine-tuning, tend to generate free text rather than structures precisely following a specific grammar. In this work, we propose to enrich the decoding with formal grammar constraints. More concretely, given a context-free grammar (CFG), our framework ensures that the token generated in each decoding step leads to a valid continuation compliant with the grammar production rules. This process guarantees the generation of valid sequences. Importantly, our framework can be readily combined with any CFG or decoding algorithm. We demonstrate that the outputs of many NLP tasks can be represented as formal languages, making them suitable for direct use in our framework. We conducted experiments with two challenging tasks involving large alphabets in their grammar (Wikidata entities and relations): information extraction and entity disambiguation. Our results with LLaMA models indicate that grammar-constrained decoding substantially outperforms unconstrained decoding and even competes with task-specific fine-tuned models. These findings suggest that integrating grammar-based constraints during decoding holds great promise in making LLMs reliably produce structured outputs, especially in settings where training data is scarce and fine-tuning is expensive.
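A toy sketch of grammar-constrained decoding: at every step, tokens that cannot continue a valid derivation of the CFG are excluded before the next token is chosen. `grammar.valid_next_tokens`, `grammar.is_complete`, and `lm_logits` are assumed helpers, not the paper's or any library's API.

```python
import math

def constrained_decode(prompt, grammar, lm_logits, vocab, max_len=128):
    output = []
    while len(output) < max_len:
        logits = lm_logits(prompt, output)              # scores for every vocabulary token
        allowed = grammar.valid_next_tokens(output)     # tokens compatible with the CFG
        best, best_score = None, -math.inf
        for tok in vocab:
            if tok in allowed and logits[tok] > best_score:
                best, best_score = tok, logits[tok]     # greedy pick among valid tokens
        if best is None:                                # no valid continuation exists
            break
        output.append(best)
        if grammar.is_complete(output):                 # a full valid sequence was produced
            break
    return output
```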
Federated Generalized Category Discovery ; Generalized category discovery GCD aims at grouping unlabeled samples from known and unknown classes, given labeled data of known classes. To meet the recent decentralization trend in the community, we introduce a practical yet challenging task, namely Federated GCD FedGCD, where the training data are distributively stored in local clients and cannot be shared among clients. The goal of FedGCD is to train a generic GCD model by client collaboration under the privacyprotected constraint. The FedGCD leads to two challenges 1 representation degradation caused by training each client model with fewer data than centralized GCD learning, and 2 highly heterogeneous label spaces across different clients. To this end, we propose a novel Associated Gaussian Contrastive Learning AGCL framework based on learnable GMMs, which consists of a Client Semantics Association CSA and a globallocal GMM Contrastive Learning GCL. On the server, CSA aggregates the heterogeneous categories of localclient GMMs to generate a global GMM containing more comprehensive category knowledge. On each client, GCL builds classlevel contrastive learning with both local and global GMMs. The local GCL learns robust representation with limited local data. The global GCL encourages the model to produce more discriminative representation with the comprehensive category relationships that may not exist in local data. We build a benchmark based on six visual datasets to facilitate the study of FedGCD. Extensive experiments show that our AGCL outperforms the FedAvgbased baseline on all datasets.
Is Summary Useful or Not An Extrinsic Human Evaluation of Text Summaries on Downstream Tasks ; Research on automated text summarization relies heavily on human and automatic evaluation. While recent work on human evaluation mainly adopted intrinsic evaluation methods, judging the generic quality of text summaries, e.g. informativeness and coherence, our work focuses on evaluating the usefulness of text summaries with extrinsic methods. We carefully design three different downstream tasks for extrinsic human evaluation of summaries, i.e., question answering, text classification and text similarity assessment. We carry out experiments using system rankings and user behavior data to evaluate the performance of different summarization models. We find summaries are particularly useful in tasks that rely on an overall judgment of the text, while being less effective for question answering tasks. The results show that summaries generated by finetuned models lead to higher consistency in usefulness across all three tasks, as rankings of finetuned summarization systems are close across downstream tasks according to the proposed extrinsic metrics. Summaries generated by models in the zeroshot setting, however, are found to be biased towards the text classification and similarity assessment tasks, due to its general and less detailed summary style. We further evaluate the correlation of 14 intrinsic automatic metrics with human criteria and show that intrinsic automatic metrics perform well in evaluating the usefulness of summaries in the questionanswering task, but are less effective in the other two tasks. This highlights the limitations of relying solely on intrinsic automatic metrics in evaluating the performance and usefulness of summaries.
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation ; Score distillation sampling (SDS) has shown great promise in text-to-3D generation by distilling pretrained large-scale text-to-image diffusion models, but suffers from over-saturation, over-smoothing, and low-diversity problems. In this work, we propose to model the 3D parameter as a random variable instead of a constant as in SDS, and present variational score distillation (VSD), a principled particle-based variational framework to explain and address the aforementioned issues in text-to-3D generation. We show that SDS is a special case of VSD and leads to poor samples with both small and large CFG weights. In comparison, VSD works well with various CFG weights, as in ancestral sampling from diffusion models, and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., 7.5). We further present various improvements in the design space for text-to-3D, such as distillation time schedule and density initialization, which are orthogonal to the distillation algorithm yet not well explored. Our overall approach, dubbed ProlificDreamer, can generate NeRFs at high rendering resolution (i.e., 512x512) and high fidelity, with rich structure and complex effects (e.g., smoke and drops). Further, initialized from NeRF, meshes fine-tuned by VSD are meticulously detailed and photorealistic. Project page: https://ml.cs.tsinghua.edu.cn/prolificdreamer
Learning to Imagine: Visually-Augmented Natural Language Generation ; People often imagine relevant scenes to aid in the writing process. In this work, we aim to utilize visual information for composition in the same manner as humans. We propose a method, LIVE, that makes pretrained language models (PLMs) Learn to Imagine for Visually-augmented natural language gEneration. First, we imagine the scene based on the text: we use a diffusion model to synthesize high-quality images conditioned on the input texts. Second, we use CLIP to determine whether the text can evoke the imagination in a posterior way. Finally, our imagination is dynamic, and we conduct synthesis for each sentence rather than generating only one image for an entire paragraph. Technically, we propose a novel plug-and-play fusion layer to obtain visually-augmented representations for each text. Our vision-text fusion layer is compatible with Transformer-based architectures. We have conducted extensive experiments on four generation tasks using BART and T5, and the automatic results and human evaluation demonstrate the effectiveness of our proposed method. We will release the code, model, and data at https://github.com/RUCAIBox/LIVE.
RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths ; Text-to-image generation has recently witnessed remarkable achievements. We introduce a text-conditional image diffusion model, termed RAPHAEL, to generate highly artistic images, which accurately portray the text prompts, encompassing multiple nouns, adjectives, and verbs. This is achieved by stacking tens of mixture-of-experts (MoE) layers, i.e., space-MoE and time-MoE layers, enabling billions of diffusion paths (routes) from the network input to the output. Each path intuitively functions as a painter for depicting a particular textual concept onto a specified image region at a diffusion timestep. Comprehensive experiments reveal that RAPHAEL outperforms recent cutting-edge models, such as Stable Diffusion, ERNIE-ViLG 2.0, DeepFloyd, and DALL-E 2, in terms of both image quality and aesthetic appeal. Firstly, RAPHAEL exhibits superior performance in switching images across diverse styles, such as Japanese comics, realism, cyberpunk, and ink illustration. Secondly, a single model with three billion parameters, trained on 1,000 A100 GPUs for two months, achieves a state-of-the-art zero-shot FID score of 6.61 on the COCO dataset. Furthermore, RAPHAEL significantly surpasses its counterparts in human evaluation on the ViLG-300 benchmark. We believe that RAPHAEL holds the potential to propel the frontiers of image generation research in both academia and industry, paving the way for future breakthroughs in this rapidly evolving field. More details can be found on the project webpage: https://raphael-painter.github.io.
PolyDiffuse: Polygonal Shape Reconstruction via Guided Set Diffusion Models ; This paper presents PolyDiffuse, a novel structured reconstruction algorithm that transforms visual sensor data into polygonal shapes with Diffusion Models (DM), an emerging machinery amid exploding generative AI, while formulating reconstruction as a generation process conditioned on sensor data. The task of structured reconstruction poses two fundamental challenges to DM: (1) a structured geometry is a "set" (e.g., a set of polygons for a floorplan geometry), where a sample of N elements has N! different but equivalent representations, making the denoising highly ambiguous; and (2) a "reconstruction" task has a single solution, where an initial noise needs to be chosen carefully, while any initial noise works for a generation task. Our technical contribution is the introduction of a Guided Set Diffusion Model, where (1) the forward diffusion process learns guidance networks to control noise injection so that one representation of a sample remains distinct from its other permutation variants, thus resolving denoising ambiguity; and (2) the reverse denoising process reconstructs polygonal shapes, initialized and directed by the guidance networks, as a conditional generation process subject to the sensor data. We have evaluated our approach for reconstructing two types of polygonal shapes: floorplans as a set of polygons and HD maps for autonomous cars as a set of polylines. Through extensive experiments on standard benchmarks, we demonstrate that PolyDiffuse significantly advances the current state of the art and enables broader practical applications.
Image Vectorization a Review ; Nowadays, there are many diffusion and autoregressive models that show impressive results for generating images from text and other input domains. However, these methods are not intended for ultrahighresolution image synthesis. Vector graphics are devoid of this disadvantage, so the generation of images in this format looks very promising. Instead of generating vector images directly, you can first synthesize a raster image and then apply vectorization. Vectorization is the process of converting a raster image into a similar vector image using primitive shapes. Besides being similar, generated vector image is also required to contain the minimum number of shapes for rendering. In this paper, we focus specifically on machine learningcompatible vectorization methods. We are considering Mang2Vec, Deep Vectorization of Technical Drawings, DiffVG, and LIVE models. We also provide a brief overview of existing online methods. We also recall other algorithmic methods, Im2Vec and ClipGEN models, but they do not participate in the comparison, since there is no open implementation of these methods or their official implementations do not work correctly. Our research shows that despite the ability to directly specify the number and type of shapes, existing machine learning methods work for a very long time and do not accurately recreate the original image. We believe that there is no fast universal automatic approach and human control is required for every method.
DreamEdit: Subject-driven Image Editing ; Subject-driven image generation aims at generating images containing customized subjects, which has recently drawn enormous attention from the research community. However, previous works cannot precisely control the background and position of the target subject. In this work, we aspire to fill the void and propose two novel subject-driven sub-tasks, i.e., Subject Replacement and Subject Addition. The new tasks are challenging in multiple aspects: replacing a subject with a customized one can change its shape, texture, and color, while adding a target subject to a designated position in a provided scene necessitates a context-aware posture. To conquer these two novel tasks, we first manually curate a new dataset DreamEditBench, containing 22 different types of subjects and 440 source images with different difficulty levels. We plan to host DreamEditBench as a platform and hire trained evaluators for standard human evaluation. We also devise an innovative method DreamEditor to resolve these tasks by performing iterative generation, which enables a smooth adaptation to the customized subject. In this project, we conduct automatic and human evaluations to understand the performance of DreamEditor and baselines on DreamEditBench. For Subject Replacement, we find that the existing models are sensitive to the shape and color of the original subject, and the model failure rate dramatically increases when the source and target subjects are highly different. For Subject Addition, we find that the existing models cannot easily blend the customized subjects into the background smoothly, leading to noticeable artifacts in the generated image. We hope DreamEditBench can become a standard platform to enable future investigations toward building more controllable subject-driven image editing. Our project homepage is https://dreameditbenchteam.github.io.
Generating Synergistic Formulaic Alpha Collections via Reinforcement Learning ; In the field of quantitative trading, it is common practice to transform raw historical stock data into indicative signals for the market trend. Such signals are called alpha factors. Alphas in formula form are more interpretable and thus favored by practitioners concerned with risk. In practice, a set of formulaic alphas is often used together for better modeling precision, so we need to find synergistic formulaic alpha sets that work well together. However, most traditional alpha generators mine alphas one by one separately, overlooking the fact that the alphas will be combined later. In this paper, we propose a new alpha-mining framework that prioritizes mining a synergistic set of alphas, i.e., it directly uses the performance of the downstream combination model to optimize the alpha generator. Our framework also leverages the strong exploratory capabilities of reinforcement learning (RL) to better explore the vast search space of formulaic alphas. The contribution to the combination model's performance is assigned as the return used in the RL process, driving the alpha generator to find better alphas that improve upon the current set. Experimental evaluations on real-world stock market data demonstrate both the effectiveness and the efficiency of our framework for stock trend forecasting. The investment simulation results show that our framework is able to achieve higher returns compared to previous approaches.
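A simplified sketch of the reward signal described above, where a candidate alpha is scored by its marginal contribution to the downstream combination model; all interfaces (`alpha_generator`, `combination_score`) are placeholders, not the paper's implementation.

```python
def mining_step(alpha_set, alpha_generator, combination_score, market_data):
    baseline = combination_score(alpha_set, market_data)           # current set's performance
    new_alpha = alpha_generator.sample()                            # RL policy proposes a formula
    candidate = alpha_set + [new_alpha]
    reward = combination_score(candidate, market_data) - baseline  # marginal contribution
    alpha_generator.update(new_alpha, reward)                       # policy-gradient-style update
    return candidate if reward > 0 else alpha_set
```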
Bulk Reconstruction from Generalized Free Fields ; We propose a generalized protocol for constructing a dual free bulk theory from any boundary model of generalized free fields GFFs. To construct the bulk operators, we employ a linear ansatz similar to the HamiltonKabatLiftschytz and Lowe HKLL construction. However, unlike the HKLL construction, our protocol relies only on boundary data with no presupposed form for the bulk equations of motion, so our reconstructed bulk is fully emergent. For a 11d bulk, imposing the bulk operator algebra as well as a causal structure is sufficient to determine the bulk operators and dynamics uniquely up to an unimportant local basis choice. We study the bulk construction for several twosided SYK models with and without coupling between the two sides, and find good agreement with known results in the lowtemperature conformal limit. In particular, we find bulk features consistent with the presence of a black hole horizon for the TFD state, and characterize the infalling fermion modes. We are also able to extract bulk quantities such as the curvature and bulk state correlators in terms of boundary quantities. In the presence of coupling between the two SYK models, we are able to observe evidence of the shockwave geometry and the traversable wormhole geometry using the twosided mutual information between the reconstructed bulk operators. Our results show evidence that features of the geometric bulk can survive away from the low temperature conformal limit. Furthermore, the generality of the protocol allows it to be applied to other boundary theories with no canonical holographic bulk.
Traceable GroupWise SelfOptimizing Feature Transformation Learning A Dual Optimization Perspective ; Feature transformation aims to reconstruct an effective representation space by mathematically refining the existing features. It serves as a pivotal approach to combat the curse of dimensionality, enhance model generalization, mitigate data sparsity, and extend the applicability of classical models. Existing research predominantly focuses on domain knowledgebased feature engineering or learning latent representations. However, these methods, while insightful, lack full automation and fail to yield a traceable and optimal representation space. An indispensable question arises Can we concurrently address these limitations when reconstructing a feature space for a machinelearning task Our initial work took a pioneering step towards this challenge by introducing a novel selfoptimizing framework. This framework leverages the power of three cascading reinforced agents to automatically select candidate features and operations for generating improved feature transformation combinations. Despite the impressive strides made, there was room for enhancing its effectiveness and generalization capability. In this extended journal version, we advance our initial work from two distinct yet interconnected perspectives 1 We propose a refinement of the original framework, which integrates a graphbased state representation method to capture the feature interactions more effectively and develop different Qlearning strategies to alleviate Qvalue overestimation further. 2 We utilize a new optimization technique actorcritic to train the entire selfoptimizing framework in order to accelerate the model convergence and improve the feature transformation performance. Finally, to validate the improved effectiveness and generalization capability of our framework, we perform extensive experiments and conduct comprehensive analyses.
Unimodular Theory of Gravity in Light of the Latest Cosmological Data ; The unimodular theory of gravity is an alternative perspective to Einstein's traditional general relativity and opens new possibilities for exploring its implications in cosmology. In this paper, we investigate unimodular gravity (UG) with the latest cosmological data from the Pantheon sample of Type Ia supernovae (SN), Baryon Acoustic Oscillations (BAO), and the observational H(z) data from the Differential Age method (DA). We consider a model consisting of a generalized cosmological constant with radiation and dark matter. The considered theory respects only unimodular coordinate transformations. We fit our model with low-redshift data from SN and DA and determine the value of the parameter $\xi$ of the theory. We find the best-fit value $\xi = 6.23 \pm 0.5$, which deviates from 6, the value for which the theory reduces to the standard general theory of relativity. We further study the Hubble constant problem by combining the SN and DA data with BAO data. We observe a deviation in the value of $H_0$ from the standard $\Lambda$CDM model. In unimodular gravity, we obtain $H_0 = 70.7 \pm 4.1~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $H_0 = 69.24 \pm 0.90~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ from supernovae data and BAO data, respectively. Combining the BAO data with the SN+DA data set, we obtain $H_0 = 70.57 \pm 0.56~\mathrm{km\,s^{-1}\,Mpc^{-1}}$.
From ChatGPT to ThreatGPT Impact of Generative AI in Cybersecurity and Privacy ; Undoubtedly, the evolution of Generative AI GenAI models has been the highlight of digital transformation in the year 2022. As the different GenAI models like ChatGPT and Google Bard continue to foster their complexity and capability, it's critical to understand its consequences from a cybersecurity perspective. Several instances recently have demonstrated the use of GenAI tools in both the defensive and offensive side of cybersecurity, and focusing on the social, ethical and privacy implications this technology possesses. This research paper highlights the limitations, challenges, potential risks, and opportunities of GenAI in the domain of cybersecurity and privacy. The work presents the vulnerabilities of ChatGPT, which can be exploited by malicious users to exfiltrate malicious information bypassing the ethical constraints on the model. This paper demonstrates successful example attacks like Jailbreaks, reverse psychology, and prompt injection attacks on the ChatGPT. The paper also investigates how cyber offenders can use the GenAI tools in developing cyber attacks, and explore the scenarios where ChatGPT can be used by adversaries to create social engineering attacks, phishing attacks, automated hacking, attack payload generation, malware creation, and polymorphic malware. This paper then examines defense techniques and uses GenAI tools to improve security measures, including cyber defense automation, reporting, threat intelligence, secure code generation and detection, attack identification, developing ethical guidelines, incidence response plans, and malware detection. We will also discuss the social, legal, and ethical implications of ChatGPT. In conclusion, the paper highlights open challenges and future directions to make this GenAI secure, safe, trustworthy, and ethical as the community understands its cybersecurity impacts.
Iterative ZeroShot LLM Prompting for Knowledge Graph Construction ; In the current digitalization era, capturing and effectively representing knowledge is crucial in most realworld scenarios. In this context, knowledge graphs represent a potent tool for retrieving and organizing a vast amount of information in a properly interconnected and interpretable structure. However, their generation is still challenging and often requires considerable human effort and domain expertise, hampering the scalability and flexibility across different application fields. This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models, such as GPT3.5, that can address all the main critical issues in knowledge graph building. The approach is conveyed in a pipeline that comprises novel iterative zeroshot and external knowledgeagnostic strategies in the main stages of the generation process. Our unique manifold approach may encompass significant benefits to the scientific community. In particular, the main contribution can be summarized by i an innovative strategy for iteratively prompting large language models to extract relevant components of the final graph; ii a zeroshot strategy for each prompt, meaning that there is no need for providing examples for guiding the prompt result; iii a scalable solution, as the adoption of LLMs avoids the need for any external resources or human expertise. To assess the effectiveness of our proposed model, we performed experiments on a dataset that covered a specific domain. We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.
Generative Adversarial Networks for Dental Patient Identity Protection in Orthodontic Educational Imaging ; Objectives This research introduces a novel areapreserving Generative Adversarial Networks GAN inversion technique for effectively deidentifying dental patient images. This innovative method addresses privacy concerns while preserving key dental features, thereby generating valuable resources for dental education and research. Methods We enhanced the existing GAN Inversion methodology to maximize the preservation of dental characteristics within the synthesized images. A comprehensive technical framework incorporating several deep learning models was developed to provide endtoend development guidance and practical application for image deidentification. Results Our approach was assessed with varied facial pictures, extensively used for diagnosing skeletal asymmetry and facial anomalies. Results demonstrated our model's ability to adapt the context from one image to another, maintaining compatibility, while preserving dental features essential for oral diagnosis and dental education. A panel of five clinicians conducted an evaluation on a set of original and GANprocessed images. The generated images achieved effective deidentification, maintaining the realism of important dental features and were deemed useful for dental diagnostics and education. Clinical Significance Our GAN model and the encompassing framework can streamline the deidentification process of dental patient images, enhancing efficiency in dental education. This method improves students' diagnostic capabilities by offering more exposure to orthodontic malocclusions. Furthermore, it facilitates the creation of deidentified datasets for broader 2D image research at major research institutions.
Toward a generative modeling analysis of CLAS exclusive 2π photoproduction ; AI-supported algorithms, particularly generative models, have been successfully used in a variety of different contexts. In this work, we demonstrate for the first time that generative adversarial networks (GANs) can be used in high-energy experimental physics to unfold detector effects from multi-particle final states, while preserving correlations between kinematic variables in multi-dimensional phase space. We perform a full closure test on two-pion photoproduction pseudo-data generated with a realistic model in the kinematics of the Jefferson Lab CLAS g11 experiment. The overlap of different reaction mechanisms leading to the same final state, together with the CLAS detector's non-trivial effects, represents an ideal test case for AI-supported analysis. Uncertainty quantification performed via bootstrap provides an estimate of the systematic uncertainty associated with the procedure. The test demonstrates that GANs can reproduce highly correlated multi-differential cross sections even in the presence of detector-induced distortions in the training datasets, and provides a solid basis for applying the framework to real experimental data.
Large Language Models as General Pattern Machines ; We observe that pretrained large language models LLMs are capable of autoregressively completing complex token sequences from arbitrary ones procedurally generated by probabilistic contextfree grammars PCFG, to more rich spatial patterns found in the Abstract Reasoning Corpus ARC, a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by incontext learning. In this work, we investigate how these zeroshot capabilities may be applied to problems in robotics from extrapolating sequences of numbers that represent states over time to complete simple motions, to leasttomost prompting of rewardconditioned trajectories that can discover and represent closedloop policies e.g., a stabilizing controller for CartPole. While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive lowlevel control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
Generative Contrastive Graph Learning for Recommendation ; By treating users' interactions as a user-item graph, graph learning models have been widely deployed in Collaborative Filtering (CF) based recommendation. Recently, researchers have introduced Graph Contrastive Learning (GCL) techniques into CF to alleviate the sparse supervision issue, which first construct contrastive views by data augmentations and then provide self-supervised signals by maximizing the mutual information between contrastive views. Despite their effectiveness, we argue that current GCL-based recommendation models are still limited by their data augmentation techniques, whether structure augmentation or feature augmentation. First, structure augmentation randomly drops out nodes or edges, which can easily destroy the intrinsic nature of the user-item graph. Second, feature augmentation imposes the same scale of noise augmentation on each node, which neglects the unique characteristics of nodes on the graph. To tackle the above limitations, we propose a novel Variational Graph Generative-Contrastive Learning (VGCL) framework for recommendation. Specifically, we leverage variational graph reconstruction to estimate a Gaussian distribution for each node, then generate multiple contrastive views through multiple samplings from the estimated distributions, which builds a bridge between generative and contrastive learning. Besides, the estimated variances are tailored to each node, which regulates the scale of the contrastive loss for each node during optimization. Considering the similarity of the estimated distributions, we propose a cluster-aware twofold contrastive learning: a node-level objective to encourage consistency of a node's contrastive views and a cluster-level objective to encourage consistency of nodes in a cluster. Finally, extensive experimental results on three public datasets clearly demonstrate the effectiveness of the proposed model.
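A conceptual sketch of the generative step in VGCL as described above, treating each node embedding as a Gaussian and drawing contrastive views by sampling from it (assumed shapes and a standard InfoNCE-style loss, not the authors' code):

```python
import torch
import torch.nn.functional as F

def sample_contrastive_views(mu, logvar, num_views=2):
    """mu, logvar: (num_nodes, dim) estimated per-node Gaussian parameters."""
    std = torch.exp(0.5 * logvar)                     # node-specific noise scale
    return [mu + std * torch.randn_like(std) for _ in range(num_views)]

def node_contrastive_loss(view_a, view_b, temperature=0.2):
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                  # similarity of every node pair
    labels = torch.arange(a.size(0))                  # positives: the same node's two views
    return F.cross_entropy(logits, labels)
```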
PiTL Crossmodal Retrieval with Weaklysupervised Visionlanguage Pretraining via Prompting ; Visionlanguage VL Pretraining VLP has shown to well generalize VL models over a wide range of VL downstream tasks, especially for crossmodal retrieval. However, it hinges on a huge amount of imagetext pairs, which requires tedious and costly curation. On the contrary, weaklysupervised VLP WVLP explores means with object tags generated by a pretrained object detector OD from images. Yet, they still require paired information, i.e. images and objectlevel annotations, as supervision to train an OD. To further reduce the amount of supervision, we propose PromptsinTheLoop PiTL that prompts knowledge from large language models LLMs to describe images. Concretely, given a category label of an image, e.g. refinery, the knowledge, e.g. a refinery could be seen with large storage tanks, pipework, and ..., extracted by LLMs is used as the language counterpart. The knowledge supplements, e.g. the common relations among entities most likely appearing in a scene. We create IN14K, a new VL dataset of 9M images and 1M descriptions of 14K categories from ImageNet21K with PiTL. Empirically, the VL models pretrained with PiTLgenerated pairs are strongly favored over other WVLP works on imagetotext I2T and texttoimage T2I retrieval tasks, with less supervision. The results reveal the effectiveness of PiTLgenerated pairs for VLP.
Improving Multimodal Datasets with Image Captioning ; Massive web datasets play a key role in the success of large vision-language models like CLIP and Flamingo. However, the raw web data is noisy, and existing filtering methods to reduce noise often come at the expense of data diversity. Our work focuses on caption quality as one major source of noise, and studies how generated captions can increase the utility of web-scraped data points with nondescript text. Through exploring different mixing strategies for raw and generated captions, we outperform the best filtering method proposed by the DataComp benchmark by 2% on ImageNet and 4% on average across 38 tasks, given a candidate pool of 128M image-text pairs. Our best approach is also 2x better at Flickr and MS-COCO retrieval. We then analyze what makes synthetic captions an effective source of text supervision. In experimenting with different image captioning models, we also demonstrate that the performance of a model on standard image captioning benchmarks (e.g., NoCaps CIDEr) is not a reliable indicator of the utility of the captions it generates for multimodal training. Finally, our experiments with using generated captions at DataComp's large scale (1.28B image-text pairs) offer insights into the limitations of synthetic text, as well as the importance of image curation with increasing training data quantity.
The fate of Galilean relativity in minimal-length theories ; A number of arguments at the interplay of general relativity and quantum theory suggest an operational limit to spatial resolution, conventionally modelled as a generalized uncertainty principle (GUP). Recently, it has been demonstrated that the dynamics postulated as a part of these models are only loosely related to the existence of the minimal-length scale. In this paper, we intend to make a more informed choice on the Hamiltonian by demanding, among other properties, that the model be invariant under possibly deformed Galilean transformations in one dimension. In this vein, we study a two-particle system with a general interaction potential under the condition that the composition law for wave numbers, as well as the action of Galilean boosts on them, be deformed so as to comply with the cutoff. We find that the customary GUP Hamiltonian does not allow for invariance under any kind of generalized Galilean transformations. Those Hamiltonians which do allow for a deformed relativity principle have to be related to the ordinary Galilean ones by a momentum-space diffeomorphism, i.e. a canonical transformation. Far from being trivial, the resulting dynamics is deformed, as we show for the example of the harmonic interaction.
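As a hedged illustration of the kind of deformation the above refers to (a generic construction in our own notation, not necessarily the specific model analyzed in the paper), let f be an invertible map from the bounded wave-number interval onto the real line. A boost of velocity v and the free Hamiltonian can then be written as
\[
p \;\mapsto\; p' = f^{-1}\!\big(f(p) + m v\big), \qquad H(p) = \frac{f(p)^{2}}{2m},
\]
so that boosts compose consistently, the wave-number cutoff is respected, and H is obtained from the ordinary Galilean Hamiltonian by the momentum-space diffeomorphism (canonical transformation) p → f(p).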
FedMEKT: Distillation-based Embedding Knowledge Transfer for Multimodal Federated Learning ; Federated learning (FL) enables a decentralized machine learning paradigm in which multiple clients collaboratively train a generalized global model without sharing their private data. Most existing works simply propose typical FL systems for single-modal data, thus limiting their potential for exploiting valuable multimodal data in future personalized applications. Furthermore, the majority of FL approaches still rely on labeled data at the client side, which is limited in real-world applications because users cannot be expected to self-annotate. In light of these limitations, we propose a novel multimodal FL framework that employs a semi-supervised learning approach to leverage the representations from different modalities. Bringing this concept into a system, we develop a distillation-based multimodal embedding knowledge transfer mechanism, namely FedMEKT, which allows the server and clients to exchange the joint knowledge of their learning models extracted from a small multimodal proxy dataset. Our FedMEKT iteratively updates the generalized global encoders with the joint embedding knowledge from the participating clients. Thereby, to address the modality discrepancy and labeled data constraints in existing FL systems, our proposed FedMEKT comprises local multimodal autoencoder learning, generalized multimodal autoencoder construction, and generalized classifier learning. Through extensive experiments on three multimodal human activity recognition datasets, we demonstrate that FedMEKT achieves superior global encoder performance in linear evaluation and guarantees user privacy for personal data and model parameters, while demanding less communication cost than other baselines.
Composite Diffusion | whole >= Σ parts ; For an artist or a graphic designer, the spatial layout of a scene is a critical design choice. However, existing text-to-image diffusion models provide limited support for incorporating spatial information. This paper introduces Composite Diffusion as a means for artists to generate high-quality images by composing them from sub-scenes. The artists can specify the arrangement of these sub-scenes through a flexible free-form segment layout. They can describe the content of each sub-scene primarily using natural text and additionally by utilizing reference images or control inputs such as line art, scribbles, human pose, canny edges, and more. We provide a comprehensive and modular method for Composite Diffusion that enables alternative ways of generating, composing, and harmonizing sub-scenes. Further, we wish to evaluate the composite image for effectiveness in both image quality and achieving the artist's intent. We argue that existing image quality metrics lack a holistic evaluation of image composites. To address this, we propose novel quality criteria especially relevant to composite generation. We believe that our approach provides an intuitive method of art creation. Through extensive user surveys and quantitative and qualitative analysis, we show how it achieves greater spatial, semantic, and creative control over image generation. In addition, our methods do not need to retrain or modify the architecture of the base diffusion models and can work in a plug-and-play manner with fine-tuned models.
RPG-Palm: Realistic Pseudo-data Generation for Palmprint Recognition ; Palmprint recently shows great potential in recognition applications as it is a privacy-friendly and stable biometric. However, the lack of large-scale public palmprint datasets limits further research and development of palmprint recognition. In this paper, we propose a novel realistic pseudo-palmprint generation (RPG) model to synthesize palmprints with massive identities. We first introduce a conditional modulation generator to improve the intra-class diversity. Then an identity-aware loss is proposed to ensure identity consistency against unpaired training. We further improve the Bézier palm creases generation strategy to guarantee identity independence. Extensive experimental results demonstrate that synthetic pretraining significantly boosts the recognition model performance. For example, our model improves the state-of-the-art BézierPalm by more than 5% and 14% in terms of TAR@FAR=1e-6 under the 1:1 and 1:3 Open-set protocols. When accessing only 10% of the real training data, our method still outperforms ArcFace trained with 100% of the real training data, indicating that we are closer to real-data-free palmprint recognition.
Image Synthesis under Limited Data: A Survey and Taxonomy ; Deep generative models, which target reproducing a given data distribution to produce novel samples, have made unprecedented advancements in recent years. Their technical breakthroughs have enabled unparalleled quality in the synthesis of visual content. However, one critical prerequisite for their tremendous success is the availability of a sufficient number of training samples, which requires massive computation resources. When trained on limited data, generative models tend to suffer from severe performance deterioration due to overfitting and memorization. Accordingly, researchers have recently devoted considerable attention to developing novel models that are capable of generating plausible and diverse images from limited training data. Despite numerous efforts to enhance training stability and synthesis quality in limited-data scenarios, there is a lack of a systematic survey that provides (1) a clear problem definition, critical challenges, and a taxonomy of the various tasks; (2) an in-depth analysis of the pros, cons, and remaining limitations of the existing literature; as well as (3) a thorough discussion of the potential applications and future directions in the field of image synthesis under limited data. In order to fill this gap and provide an informative introduction to researchers who are new to this topic, this survey offers a comprehensive review and a novel taxonomy of the development of image synthesis under limited data. In particular, it covers the problem definition, requirements, main solutions, popular benchmarks, and remaining challenges in a comprehensive and all-around manner.
Generative Modelling of Lévy Area for High Order SDE Simulation ; It is well known that, when numerically simulating solutions to SDEs, achieving a strong convergence rate better than O(√h) (where h is the step size) requires the use of certain iterated integrals of Brownian motion, commonly referred to as its Lévy areas. However, these stochastic integrals are difficult to simulate due to their non-Gaussian nature, and for a d-dimensional Brownian motion with d > 2, no fast almost-exact sampling algorithm is known. In this paper, we propose LévyGAN, a deep-learning-based model for generating approximate samples of Lévy area conditional on a Brownian increment. Thanks to our bridge-flipping operation, the output samples match all joint and conditional odd moments exactly. Our generator employs a tailored GNN-inspired architecture, which enforces the correct dependency structure between the output distribution and the conditioning variable. Furthermore, we incorporate a mathematically principled characteristic-function-based discriminator. Lastly, we introduce a novel training mechanism termed Chen training, which circumvents the need for expensive-to-generate training datasets. This new training procedure is underpinned by our two main theoretical results. For 4-dimensional Brownian motion, we show that LévyGAN exhibits state-of-the-art performance across several metrics which measure both the joint and marginal distributions. We conclude with a numerical experiment on the log-Heston model, a popular SDE in mathematical finance, demonstrating that high-quality synthetic Lévy area can lead to high order weak convergence and variance reduction when using multilevel Monte Carlo (MLMC).
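As a quick sanity check of the quantity being modelled (an illustrative Monte Carlo experiment, not code from the paper), one can approximate the Lévy area of a 2D Brownian path by a fine Riemann sum and verify numerically that its odd moments are close to zero, the property the bridge-flipping operation is said to enforce exactly.

```python
# Illustrative Monte Carlo check: approximate the Levy area of a 2D Brownian path
# by a fine Riemann sum and inspect its low-order moments.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20_000, 1_000, 1.0 / 1_000

dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps, 2))
W = np.cumsum(dW, axis=1) - dW                       # path values at left endpoints
area = 0.5 * np.sum(W[..., 0] * dW[..., 1] - W[..., 1] * dW[..., 0], axis=1)

print("E[A]   ~", area.mean())        # close to 0
print("E[A^3] ~", (area ** 3).mean()) # close to 0 (odd moments vanish by symmetry)
print("E[A^2] ~", (area ** 2).mean()) # strictly positive
```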
OmniDataComposer: A Unified Data Structure for Multimodal Data Fusion and Infinite Data Generation ; This paper presents OmniDataComposer, an innovative approach for multimodal data fusion and unlimited data generation with an intent to refine and simplify the interplay among diverse data modalities. At its core, it introduces a cohesive data structure proficient in processing and merging multimodal data inputs, which include video, audio, and text. Our crafted algorithm leverages advancements across multiple operations such as video/image caption extraction, dense caption extraction, Automatic Speech Recognition (ASR), Optical Character Recognition (OCR), the Recognize Anything Model (RAM), and object tracking. OmniDataComposer is capable of identifying over 6400 categories of objects, substantially broadening the spectrum of visual information. It amalgamates these diverse modalities, promoting reciprocal enhancement among modalities and facilitating cross-modal data correction. The final output metamorphoses each video input into an elaborate sequential document, virtually transmuting videos into thorough narratives and making them easier to process by large language models. Future prospects include optimizing datasets for each modality to encourage unlimited data generation. This robust base will offer priceless insights to models like ChatGPT, enabling them to create higher quality datasets for video captioning and easing question-answering tasks based on video content. OmniDataComposer inaugurates a new stage in multimodal learning, imparting enormous potential for augmenting AI's understanding and generation of complex, real-world data.
FLORAH: A generative model for halo assembly histories ; The mass assembly history (MAH) of dark matter halos plays a crucial role in shaping the formation and evolution of galaxies. MAHs are used extensively in semi-analytic and empirical models of galaxy formation, yet current analytic methods to generate them are inaccurate and unable to capture their relationship with the halo internal structure and large-scale environment. This paper introduces FLORAH, a machine-learning framework for generating assembly histories of ensembles of dark matter halos. We train FLORAH on the assembly histories from the GUREFT and VSMDPL N-body simulations and demonstrate its ability to recover key properties such as the time evolution of mass and concentration. We obtain similar results for the galaxy stellar mass versus halo mass relation and its residuals when we run the Santa Cruz semi-analytic model on FLORAH-generated assembly histories and on halo formation histories extracted from an N-body simulation. We further show that FLORAH also reproduces the dependence of clustering on properties other than mass (assembly bias), which is not captured by other analytic methods. By combining multiple networks trained on a suite of simulations with different redshift ranges and mass resolutions, we are able to construct accurate main progenitor branches (MPBs) with a wide dynamic mass range from z = 0 up to an ultra-high redshift z ≈ 20, currently far beyond the range of a single N-body simulation. FLORAH is the first step towards a machine-learning-based framework for planting full merger trees; this will enable the exploration of different galaxy formation scenarios with great computational efficiency at unprecedented accuracy.
Generating Transferable and Stealthy Adversarial Patches via Attention-guided Adversarial Inpainting ; Adversarial patch attacks can fool face recognition (FR) models via small patches. However, previous adversarial patch attacks often result in unnatural patterns that are easily noticeable. Generating transferable and stealthy adversarial patches that can efficiently deceive black-box FR models while having good camouflage is challenging because of the huge stylistic difference between the source and target images. To generate transferable, natural-looking, and stealthy adversarial patches, we propose an innovative two-stage attack called AdvInpainting, which extracts style features and identity features from the attacker and target faces, respectively, and then fills the patches with misleading and inconspicuous content guided by attention maps. In the first stage, we extract multi-scale style embeddings with a pyramid-like network and identity embeddings with a pretrained FR model, and we propose a novel Attention-guided Adaptive Instance Normalization (AAIN) layer to merge them via background-patch cross-attention maps. The proposed layer can adaptively fuse identity and style embeddings by fully exploiting priority contextual information. In the second stage, we design an Adversarial Patch Refinement Network (APRNet) with a novel boundary variance loss, a spatial discounted reconstruction loss, and a perceptual loss to further boost stealthiness. Experiments demonstrate that our attack can generate adversarial patches with improved visual quality, better stealthiness, and stronger transferability than state-of-the-art adversarial patch attacks and semantic attacks.
Extension of the Bayesian searches for anisotropic stochastic gravitational-wave background with non-tensorial polarizations ; The recent announcement of strong evidence for a stochastic gravitational-wave background (SGWB) by various pulsar timing array collaborations has highlighted this signal as a promising candidate for future observations. Despite its non-detection by ground-based detectors such as Advanced LIGO and Advanced Virgo, Callister et al. developed a Bayesian formalism to search for an isotropic SGWB with non-tensorial polarizations, imposing constraints on the signal amplitude in those components that violate general relativity using LIGO's data. Since our ultimate aim is to estimate the spatial distribution of gravitational-wave sources, we have extended this existing method to allow for anisotropic components in signal models. We then examined the potential benefits of including these additional components. Using injection campaigns, we found that introducing anisotropic components into a signal model led to more significant identification of the signal itself and of violations of general relativity. Moreover, the results of our Bayesian parameter estimation suggested that anisotropic components aid in breaking degeneracies between different polarization components, allowing us to infer model parameters more precisely than through an isotropic analysis. In contrast, constraints on the signal amplitude remained comparable in the absence of such a signal. Although these results might depend on the assumed source distribution on the sky, such as the Galactic plane, the formalism presented in this work lays a foundation for establishing a generalized Bayesian analysis for an SGWB, including its anisotropies and non-tensorial polarizations.
Hyperbolic Face Anti-Spoofing ; Learning generalized face anti-spoofing (FAS) models against presentation attacks is essential for the security of face recognition systems. Previous FAS methods usually encourage models to extract discriminative features, for which the distances within the same class (bonafide or attack) are pushed close while those between bonafide and attack are pulled apart. However, these methods are designed based on Euclidean distance, which lacks generalization ability for unseen attack detection due to poor hierarchy embedding ability. Based on the evidence that different spoofing attacks are intrinsically hierarchical, we propose to learn richer hierarchical and discriminative spoofing cues in hyperbolic space. Specifically, for unimodal FAS learning, the feature embeddings are projected into the Poincaré ball, and then a hyperbolic binary logistic regression layer is cascaded for classification. To further improve generalization, we conduct hyperbolic contrastive learning for the bonafide class only, while relaxing the constraints on diverse spoofing attacks. To alleviate the vanishing gradient problem in hyperbolic space, a new feature clipping method is proposed to enhance the training stability of hyperbolic models. Besides, we further design a multimodal FAS framework with Euclidean multimodal feature decomposition and hyperbolic multimodal feature fusion and classification. Extensive experiments on three benchmark datasets (i.e., WMCA, PADISI-Face, and SiW-M) with diverse attack types demonstrate that the proposed method brings significant improvement over Euclidean baselines on unseen attack detection. In addition, the proposed framework also generalizes well on four benchmark datasets (i.e., MSU-MFSD, IDIAP REPLAY-ATTACK, CASIA-FASD, and OULU-NPU) with a limited number of attack types.
Random Word Data Augmentation with CLIP for Zero-Shot Anomaly Detection ; This paper presents a novel method that leverages a vision-language model, CLIP, as a data source for zero-shot anomaly detection. Tremendous effort has been put into developing anomaly detectors due to their potential industrial applications. Considering the difficulty of acquiring various anomalous samples for training, most existing methods train models with only normal samples and measure discrepancies from the distribution of normal samples during inference, which requires training a model for each object category. The problem of this inefficient training requirement has been tackled by designing a CLIP-based anomaly detector that applies prompt-guided classification to each part of an image in a sliding window manner. However, the method still suffers from the labor of careful prompt ensembling with known object categories. To overcome the issues above, we propose leveraging CLIP as a data source for training. Our method generates text embeddings with the text encoder in CLIP using typical prompts that include the words "normal" and "anomaly". In addition to these words, we insert several randomly generated words into the prompts, which enables the encoder to generate a diverse set of normal and anomalous samples. Using the generated embeddings as training data, a feed-forward neural network learns to extract features of "normal" and "anomaly" from CLIP's embeddings, and as a result, a category-agnostic anomaly detector can be obtained without any training images. Experimental results demonstrate that our method achieves state-of-the-art performance without laborious prompt ensembling in zero-shot setups.
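A minimal, hypothetical sketch of the data-generation idea follows (it assumes the OpenAI `clip` package is installed, and the prompt template and word sampling are illustrative rather than the paper's exact recipe): random words are inserted into "normal"/"anomaly" prompts, and CLIP's text encoder turns them into synthetic training embeddings for a small feed-forward detector.

```python
# Hypothetical sketch: build prompts containing random words and encode them with CLIP's
# text encoder to obtain synthetic "normal" / "anomaly" training embeddings.
import random
import string
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def random_word(length=6):
    return "".join(random.choices(string.ascii_lowercase, k=length))

def make_prompts(label, n=64):
    # label is "normal" or "anomaly"; random words diversify the generated samples
    return [f"a photo of a {random_word()} {random_word()} object, {label}" for _ in range(n)]

with torch.no_grad():
    normal_emb = model.encode_text(clip.tokenize(make_prompts("normal")).to(device))
    anomaly_emb = model.encode_text(clip.tokenize(make_prompts("anomaly")).to(device))

# normal_emb / anomaly_emb can now serve as training data for a small feed-forward
# anomaly classifier, as described above.
print(normal_emb.shape, anomaly_emb.shape)
```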
DiffusionVMR: Diffusion Model for Video Moment Retrieval ; Video moment retrieval is a fundamental vision-language task that aims to retrieve target moments from an untrimmed video based on a language query. Existing methods typically generate numerous proposals in advance, either manually or via generative networks, as the support set for retrieval, which is not only inflexible but also time-consuming. Inspired by the success of diffusion models on object detection, this work reformulates video moment retrieval as a denoising generation process to get rid of inflexible and time-consuming proposal generation. To this end, we propose a novel proposal-free framework, namely DiffusionVMR, which directly samples random spans from noise as candidates and introduces denoising learning to ground target moments. During training, Gaussian noise is added to the real moments, and the model is trained to learn how to reverse this process. At inference, a set of time spans is progressively refined from the initial noise to the final output. Notably, the training and inference of DiffusionVMR are decoupled, and an arbitrary number of random spans can be used at inference without having to match the training phase. Extensive experiments conducted on three widely used benchmarks (i.e., QVHighlights, Charades-STA, and TACoS) demonstrate the effectiveness of the proposed DiffusionVMR in comparison with state-of-the-art methods.
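For concreteness, a hypothetical PyTorch sketch of the denoising formulation is given below (the span parameterization, timestep embedding, and network are illustrative placeholders, not the DiffusionVMR architecture): ground-truth moments, encoded as normalized (center, width) pairs, are perturbed with Gaussian noise at a random diffusion step, and a conditional network is trained to recover them.

```python
# Illustrative denoising-training step for moment spans (not the paper's architecture).
import torch
import torch.nn as nn

class SpanDenoiser(nn.Module):
    def __init__(self, ctx_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + 1 + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),              # predicts the clean (center, width)
        )

    def forward(self, noisy_spans, t, ctx):
        t = t.float().unsqueeze(-1) / 1000.0   # crude timestep embedding
        return self.net(torch.cat([noisy_spans, t, ctx], dim=-1))

def q_sample(spans, t, betas):
    """Forward diffusion: add Gaussian noise to ground-truth spans at step t."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].unsqueeze(-1)
    noise = torch.randn_like(spans)
    return alpha_bar.sqrt() * spans + (1 - alpha_bar).sqrt() * noise

betas = torch.linspace(1e-4, 2e-2, 1000)
model = SpanDenoiser()
spans = torch.rand(8, 2)                        # normalized (center, width) targets
ctx = torch.randn(8, 256)                       # placeholder video + query features
t = torch.randint(0, 1000, (8,))
loss = nn.functional.mse_loss(model(q_sample(spans, t, betas), t, ctx), spans)
loss.backward()
```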
Resolvent Estimates for Viscoelastic Systems of Extended Maxwell Type and their Applications ; In the theory of viscoelasticity, an important class of models admits a representation in terms of springs and dashpots. Widely used members of this class are the Maxwell model and its extended version. This paper concerns resolvent estimates for the system of equations for the anisotropic, extended Maxwell model, abbreviated as the EMM, and its marginal realization which includes an inertia term; special attention is paid to the introduction of augmented variables. This leads to the augmented system, which will also be referred to as the original system. A reduced system is then formed which essentially encodes the EMM; it is a closed system with respect to the particle velocity and the difference between the elastic and viscous strains. Based on resolvent estimates, it is shown that the original and reduced systems generate C0-groups and that the reduced system generates a C0-semigroup of contractions. Naturally, the EMM can be written in integro-differential form, leading explicitly to relaxation and a viscoelastic integro-differential system. However, there is a difference between the original and integro-differential systems, in general, with consequences for whether their solutions generate semigroups or not. Finally, an energy estimate is obtained for the reduced system, and it is proven that its solutions decay exponentially as time tends to infinity. The limiting amplitude principle follows readily from these two results.
What can we learn from quantum convolutional neural networks? ; We can learn from analyzing quantum convolutional neural networks (QCNNs) that: (1) working with quantum data can be perceived as embedding physical system parameters through a hidden feature map; (2) their high performance for quantum phase recognition can be attributed to the generation of a very suitable basis set during the ground-state embedding, where quantum criticality of spin models leads to basis functions with rapidly changing features; (3) the pooling layers of QCNNs are responsible for picking those basis functions that can contribute to forming a high-performing decision boundary, and the learning process corresponds to adapting the measurement such that few-qubit operators are mapped to full-register observables; (4) the generalization of QCNN models strongly depends on the embedding type, and rotation-based feature maps with the Fourier basis require careful feature engineering; (5) the accuracy and generalization of QCNNs with readout based on a limited number of shots favor ground-state embeddings and the associated physics-informed models. We demonstrate these points in simulation, where our results shed light on classification for physical processes, relevant for applications in sensing. Finally, we show that QCNNs with properly chosen ground-state embeddings can be used for fluid dynamics problems, expressing shock-wave solutions with good generalization and proven trainability.
Metric Learning for Projections Bias of Generalized Zero-shot Learning ; Generalized zero-shot learning (GZSL) models aim to recognize samples from seen or unseen classes using only samples from seen classes as training data. During inference, GZSL methods are often biased towards seen classes due to the visibility of seen class samples during training. Most current GZSL methods try to learn an accurate projection function from visual space to semantic space to avoid bias and ensure the effectiveness of GZSL methods. However, during inference, the computation of distance becomes important when we classify the projection of a sample into its nearest class, since we may have learned a biased projection function. In our work, we attempt to learn a parameterized Mahalanobis distance within the framework of VAEGAN (Variational Autoencoder Generative Adversarial Networks), where the weight matrix depends on the network's output. In particular, we improve the network structure of VAEGAN to leverage the discriminative models of two branches to separately predict the seen samples and the unseen samples generated from the seen ones. We propose a new loss function with two branches to help us learn the optimized Mahalanobis distance representation. Comprehensive evaluation on four benchmark datasets demonstrates the superiority of our method over the state-of-the-art counterparts. Our code is available at https://anonymous.4open.science/r/111hxr.
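The following is an illustrative PyTorch sketch of a parameterized Mahalanobis distance (how the weight matrix is produced and how classes are scored are assumptions for illustration, not the authors' architecture): a network output parameterizes a matrix A, and samples are assigned to the nearest class embedding under d(x, c) = (x − c)ᵀAᵀA(x − c).

```python
# Illustrative parameterized Mahalanobis metric: the matrix A depends on the network's output,
# and nearest-class assignment uses the resulting learned distance.
import torch
import torch.nn as nn

class MahalanobisHead(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.to_A = nn.Linear(dim, dim * dim)   # weight matrix depends on the input features

    def forward(self, x, class_embs):
        A = self.to_A(x).view(-1, x.size(-1), x.size(-1))        # (B, d, d)
        diff = x.unsqueeze(1) - class_embs.unsqueeze(0)          # (B, C, d)
        Adiff = torch.einsum("bij,bcj->bci", A, diff)            # A (x - c)
        return (Adiff ** 2).sum(-1)                              # squared Mahalanobis distances

head = MahalanobisHead()
x = torch.randn(8, 64)                  # projected visual features
classes = torch.randn(10, 64)           # semantic class embeddings
pred = head(x, classes).argmin(dim=1)   # nearest class under the learned metric
print(pred)
```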
Adapting Self-Supervised Representations to Multi-Domain Setups ; Current state-of-the-art self-supervised approaches are effective when trained on individual domains but show limited generalization on unseen domains. We observe that these models generalize poorly even when trained on a mixture of domains, making them unsuitable for deployment under diverse real-world setups. We therefore propose a general-purpose, lightweight Domain Disentanglement Module (DDM) that can be plugged into any self-supervised encoder to effectively perform representation learning on multiple, diverse domains with or without shared classes. During pretraining according to a self-supervised loss, DDM enforces a disentanglement in the representation space by splitting it into a domain-variant and a domain-invariant portion. When domain labels are not available, DDM uses a robust clustering approach to discover pseudo-domains. We show that pretraining with DDM can yield up to a 3.5% improvement in linear probing accuracy on state-of-the-art self-supervised models including SimCLR, MoCo, BYOL, DINO, SimSiam and Barlow Twins on multi-domain benchmarks including PACS, DomainNet and WILDS. Models trained with DDM show significantly improved generalization (7.4%) to unseen domains compared to baselines. Therefore, DDM can efficiently adapt self-supervised encoders to provide high-quality, generalizable representations for diverse multi-domain data.
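A minimal, hypothetical sketch of the disentanglement idea (the actual DDM is more involved): the encoder output is split into a domain-invariant half that feeds the usual self-supervised objective and a domain-variant half trained to predict the (pseudo-)domain label.

```python
# Illustrative split of an encoder representation into domain-invariant / domain-variant parts.
import torch
import torch.nn as nn

class DomainDisentangleModule(nn.Module):
    def __init__(self, feat_dim=512, n_domains=4):
        super().__init__()
        self.domain_head = nn.Linear(feat_dim // 2, n_domains)

    def forward(self, feats):
        inv, var = feats.chunk(2, dim=-1)      # domain-invariant / domain-variant portions
        return inv, self.domain_head(var)

ddm = DomainDisentangleModule()
feats = torch.randn(16, 512)                   # output of any self-supervised encoder
domain_labels = torch.randint(0, 4, (16,))     # true or clustered pseudo-domain labels
inv, dom_logits = ddm(feats)
domain_loss = nn.functional.cross_entropy(dom_logits, domain_labels)
# `inv` would feed the unchanged self-supervised objective (SimCLR, BYOL, ...).
```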
Multicontinuum homogenization. General theory and applications ; In this paper, we discuss a general framework for multicontinuum homogenization. Multicontinuum models are widely used in many applications, and some derivations of these models have been established. In these models, several macroscopic variables are defined at each macroscale point, and the resulting multicontinuum equations are formulated. In this paper, we propose a general formulation and the associated ingredients that allow performing multicontinuum homogenization. Our derivation consists of several main parts. In the first part, we propose a general expansion, where the solution is expressed via the product of multiple macro variables and associated cell problems. The second part consists of formulating the cell problems. The cell problems are formulated as saddle point problems with constraints for each continuum. Defining the continua via test functions, we set the constraints as an integral representation. Finally, substituting the expansion into the original system, we obtain multicontinuum systems. We present an application to the mixed formulation of elliptic equations. This is a challenging system as it lacks symmetry. We discuss the local problems and various macroscale representations for the solution and its gradient. Using approximations of various orders, one can obtain different systems of equations. We discuss the applicability of multicontinuum homogenization and relate it to the presence of high contrast in the cell problem. Numerical results are presented.
Scalable Label-efficient Footpath Network Generation Using Remote Sensing Data and Self-supervised Learning ; Footpath mapping, modeling, and analysis can provide important geospatial insights to many fields of study, including transport, health, environment and urban planning. The availability of robust Geographic Information System (GIS) layers can benefit the management of infrastructure inventories, especially at the local government level, where urban planners are responsible for the deployment and maintenance of such infrastructure. However, many cities still lack real-time information on the location, connectivity, and width of footpaths, and/or employ costly and manual survey means to gather this information. This work designs and implements an automatic pipeline for generating footpath networks based on remote sensing images using machine learning models. The annotation of segmentation tasks, especially labeling remote sensing images with specialized requirements, is very expensive, so we aim to introduce a pipeline requiring less labeled data. Since supervised methods require large amounts of training data, we use a self-supervised method for feature representation learning to reduce annotation requirements. The pretrained model is then used as the encoder of a U-Net for footpath segmentation. Based on the generated masks, the footpath polygons are extracted and converted into footpath networks, which can be conveniently loaded and visualized by geographic information systems. Validation results indicate considerable consistency when compared to manually collected GIS layers. The footpath network generation pipeline proposed in this work is low-cost and extensible, and it can be applied wherever remote sensing images are available. GitHub: https://github.com/WennyXY/FootpathSeg.
Augmenting Tactile Simulators with Real-like and Zero-Shot Capabilities ; Simulating tactile perception could potentially leverage the learning capabilities of robotic systems in manipulation tasks. However, the reality gap of simulators for high-resolution tactile sensors remains large. Models trained on simulated data often fail in zero-shot inference and require fine-tuning with real data. In addition, work on high-resolution sensors commonly focuses on ones with flat surfaces, while 3D round sensors are essential for dexterous manipulation. In this paper, we propose a bidirectional Generative Adversarial Network (GAN) termed SightGAN. SightGAN relies on the early CycleGAN while including two additional loss components aimed at accurately reconstructing the background and contact patterns, including small contact traces. The proposed SightGAN learns real-to-sim and sim-to-real processes over difference images. It is shown to generate real-like synthetic images while maintaining accurate contact positioning. The generated images can be used to train zero-shot models for newly fabricated sensors. Consequently, the resulting sim-to-real generator could be built on top of the tactile simulator to provide a real-world framework. Potentially, the framework can be used to train, for instance, reinforcement learning policies for manipulation tasks. The proposed model is verified in extensive experiments with test data collected from real sensors and is also shown to maintain embedded force information within the tactile images.
MelodyGLM: Multi-task Pretraining for Symbolic Melody Generation ; Pretrained language models have achieved impressive results in various music understanding and generation tasks. However, existing pretraining methods for symbolic melody generation struggle to capture multi-scale, multi-dimensional structural information in note sequences, due to the domain knowledge discrepancy between text and music. Moreover, the lack of available large-scale symbolic melody datasets limits the pretraining improvement. In this paper, we propose MelodyGLM, a multi-task pretraining framework for generating melodies with long-term structure. We design melodic n-gram and long span sampling strategies to create local and global blank infilling tasks for modeling the local and global structures in melodies. Specifically, we incorporate pitch n-grams, rhythm n-grams, and their combined n-grams into the melodic n-gram blank infilling tasks for modeling the multi-dimensional structures in melodies. To this end, we have constructed a large-scale symbolic melody dataset, MelodyNet, containing more than 0.4 million melody pieces. MelodyNet is utilized for large-scale pretraining and domain-specific n-gram lexicon construction. Both subjective and objective evaluations demonstrate that MelodyGLM surpasses the standard and previous pretraining methods. In particular, subjective evaluations show that, on the melody continuation task, MelodyGLM gains average improvements of 0.82, 0.87, 0.78, and 0.94 in consistency, rhythmicity, structure, and overall quality, respectively. Notably, MelodyGLM nearly matches the quality of human-composed melodies on the melody inpainting task.
STANCE-C3: Domain-adaptive Cross-target Stance Detection via Contrastive Learning and Counterfactual Generation ; Stance detection is the process of inferring a person's position or standpoint on a specific issue to deduce prevailing perceptions toward topics of general or controversial interest, such as health policies during the COVID-19 pandemic. Existing models for stance detection are trained to perform well for a single domain (e.g., COVID-19) and a specific target topic (e.g., masking protocols), but are generally ineffectual in other domains or targets due to distributional shifts in the data. However, constructing high-performing, domain-specific stance detection models requires an extensive corpus of labeled data relevant to the targeted domain, yet such datasets are not readily available. This poses a challenge as the process of annotating data is costly and time-consuming. To address these challenges, we introduce a novel stance detection model coined domain-adaptive Cross-target STANCE detection via Contrastive learning and Counterfactual generation (STANCE-C3) that uses counterfactual data augmentation to enhance domain-adaptive training by enriching the target domain dataset during the training process and requiring significantly less information from the new domain. We also propose a modified self-supervised contrastive learning scheme as a component of STANCE-C3 to prevent overfitting on the existing domain and target and to enable cross-target stance detection. Through experiments on various datasets, we show that STANCE-C3 achieves performance improvements over existing state-of-the-art methods.
Observing gravitational redshift with X-ray emission in galaxy clusters with Athena X-IFU ; Context. The Doppler shift predicted by general relativity for light escaping a gravitational potential has been observed on Earth as well as in the direction of various stars and galaxy clusters at optical wavelengths. Aims. Observing the gravitational redshift in the X-ray band within galaxy clusters could provide information on their properties and, in particular, their gravitational potential. We present a feasibility study of such a measurement, using the capabilities of the next-generation European X-ray observatory Athena. Methods. We used a simple generalized Navarro-Frenk-White potential model along with a beta-model for the density of baryonic matter, which sets the emission, to provide an estimation of the observed redshift in the simplest of cases. We generated mock observations with the Athena X-ray Integral Field Unit (X-IFU) for a nearby massive cluster, while seeking to recover the gravitational redshift along with other properties of the toy model cluster. Results. We investigated the observability of the gravitational redshift in an idealized test case of a nearby massive cluster with the Athena X-IFU instrument, as well as its use in probing the properties of the potential well. We were also able to constrain the mass to a 20% level of precision and the cosmological redshift to better than 1%, within a simplified and idealized observational framework. More refined simulations accounting for further effects, such as internal gas motions and the actual shape of the potential well, are required to fully investigate the feasibility of measuring the gravitational redshift for a single target or statistically over a sample of galaxy clusters.
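For orientation, the size of the effect follows from the weak-field formula for light escaping the cluster potential φ (a textbook relation, not the paper's specific modelling):
\[
z_{\rm grav}(r) \;\simeq\; \frac{\phi(r_{\rm obs}) - \phi(r)}{c^{2}} \;\approx\; \frac{|\phi(r)|}{c^{2}},
\]
so for a potential depth of order (1000 km s^-1)^2 the shift is of order 10^-5, i.e. a few km s^-1 in velocity units.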
Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints ; The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as the potential misuse of AI systems, necessitating effective AI alignment. Reinforcement Learning from Human Feedback (RLHF) has emerged as a promising pathway towards AI alignment but brings forth challenges due to its complexity and dependence on a separate reward model. Direct Preference Optimization (DPO) has been proposed as an alternative, and it remains equivalent to RLHF under the reverse KL regularization constraint. This paper presents f-DPO, a generalized approach to DPO that incorporates diverse divergence constraints. We show that under certain f-divergences, including Jensen-Shannon divergence, forward KL divergence and alpha-divergences, the complex relationship between the reward and the optimal policy can also be simplified by addressing the Karush-Kuhn-Tucker conditions. This eliminates the need for estimating the normalizing constant in the Bradley-Terry model and enables a tractable mapping between the reward function and the optimal policy. Our approach optimizes LLMs to align with human preferences in a more efficient and supervised manner under a broad set of divergence constraints. Empirically, adopting these divergences ensures a balance between alignment performance and generation diversity. Importantly, f-DPO outperforms PPO-based methods in divergence efficiency, and divergence constraints directly influence expected calibration error (ECE).
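For reference, the reverse-KL special case mentioned above corresponds to the standard DPO objective, sketched below in PyTorch; under f-DPO the implicit-reward term would instead involve the derivative of the chosen f-divergence generator applied to the policy/reference ratio (the toy numbers below are made up).

```python
# Sketch of the standard (reverse-KL) DPO loss from preference pairs.
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """logp_*: sequence log-probs of chosen (w) / rejected (l) responses under the policy;
    ref_logp_*: the same quantities under the frozen reference model."""
    reward_w = beta * (logp_w - ref_logp_w)   # implicit reward of the chosen response
    reward_l = beta * (logp_l - ref_logp_l)   # implicit reward of the rejected response
    return -F.logsigmoid(reward_w - reward_l).mean()

# toy usage with made-up log-probabilities
loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-7.0]),
                torch.tensor([-6.0]), torch.tensor([-6.5]))
print(loss.item())
```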
A Hierarchical Graph-based Approach for Recognition and Description Generation of Bimanual Actions in Videos ; Nuanced understanding and the generation of detailed descriptive content for bimanual manipulation actions in videos are important for disciplines such as robotics, human-computer interaction, and video content analysis. This study describes a novel method that integrates graph-based modeling with layered hierarchical attention mechanisms, resulting in higher precision and better comprehensiveness of video descriptions. To achieve this, we first encode the spatio-temporal interdependencies between objects and actions with scene graphs, and we combine this, in a second step, with a novel three-level architecture that creates a hierarchical attention mechanism using Graph Attention Networks (GATs). The three-level GAT architecture allows recognizing local as well as global contextual elements. In this way, several descriptions with different semantic complexity can be generated in parallel for the same video clip, enhancing the discriminative accuracy of action recognition and action description. The performance of our approach is empirically tested using several 2D and 3D datasets. By comparing our method to the state of the art, we consistently obtain better performance concerning accuracy, precision, and contextual relevance when evaluating action recognition as well as description generation. In a large set of ablation experiments we also assess the role of the different components of our model. With our multi-level approach the system obtains different semantic description depths, as is also often observed in descriptions made by different people. Furthermore, better insight into bimanual hand-object interactions, as achieved by our model, may portend advancements in the field of robotics, enabling the emulation of intricate human actions with heightened precision.
GPT-Driver: Learning to Drive with GPT ; We present a simple yet effective approach that can transform the OpenAI GPT-3.5 model into a reliable motion planner for autonomous vehicles. Motion planning is a core challenge in autonomous driving, aiming to plan a driving trajectory that is safe and comfortable. Existing motion planners predominantly leverage heuristic methods to forecast driving trajectories, yet these approaches demonstrate insufficient generalization capabilities in the face of novel and unseen driving scenarios. In this paper, we propose a novel approach to motion planning that capitalizes on the strong reasoning capabilities and generalization potential inherent to Large Language Models (LLMs). The fundamental insight of our approach is the reformulation of motion planning as a language modeling problem, a perspective not previously explored. Specifically, we represent the planner inputs and outputs as language tokens, and leverage the LLM to generate driving trajectories through a language description of coordinate positions. Furthermore, we propose a novel prompting-reasoning-finetuning strategy to stimulate the numerical reasoning potential of the LLM. With this strategy, the LLM can describe highly precise trajectory coordinates and also its internal decision-making process in natural language. We evaluate our approach on the large-scale nuScenes dataset, and extensive experiments substantiate the effectiveness, generalization ability, and interpretability of our GPT-based motion planner. Code will be released upon acceptance.
T3Bench: Benchmarking Current Progress in Text-to-3D Generation ; Recent methods in text-to-3D leverage powerful pretrained diffusion models to optimize NeRF. Notably, these methods are able to produce high-quality 3D scenes without training on 3D data. Due to the open-ended nature of the task, most studies evaluate their results with subjective case studies and user experiments, thereby presenting a challenge in quantitatively addressing the question: how far has current progress in text-to-3D gone? In this paper, we introduce T3Bench, the first comprehensive text-to-3D benchmark containing diverse text prompts of three increasing complexity levels that are specially designed for 3D generation. To assess both the subjective quality and the text alignment, we propose two automatic metrics based on multi-view images produced by the 3D contents. The quality metric combines multi-view text-image scores and regional convolution to detect quality and view inconsistency. The alignment metric uses multi-view captioning and Large Language Model (LLM) evaluation to measure text-3D consistency. Both metrics closely correlate with different dimensions of human judgments, providing a paradigm for efficiently evaluating text-to-3D models. The benchmarking results, shown in Fig. 1, reveal performance differences among six prevalent text-to-3D methods. Our analysis further highlights the common struggles of current methods with generating surroundings and multi-object scenes, as well as the bottleneck of leveraging 2D guidance for 3D generation. Our project page is available at https://t3bench.com.
Analysis and modeling of scale-invariance in plankton abundance ; The power spectrum, S, of horizontal transects of plankton abundance is often observed to have a power-law dependence on wavenumber, k, with exponent close to -2, i.e. S(k) ∝ k^-2, over a wide range of scales. I present power spectral analyses of aircraft lidar measurements of phytoplankton abundance on scales of 1 to 100 km. A power spectrum S(k) ∝ k^-2 is obtained. As a model for this observation, I consider a stochastic growth equation where the rate of change of plankton abundance is determined by turbulent mixing, modeled as a diffusion process in two dimensions, and by exponential growth with a stochastically variable net growth rate representing a fluctuating environment. The model predicts a lognormal distribution of abundance and a power spectrum of horizontal transects S(k) ∝ k^-1.8, close to the observed spectrum. The model equation predicts that the power spectrum of variations in abundance in time at a point in space is S(f) ∝ f^-1.5, where f is the frequency. Time series analysis of local variations of phytoplankton and zooplankton yields power-law power spectra with exponents of -1.3 and -1.2, respectively, on time scales from one hour to one year. These values are roughly consistent with the model prediction of -1.5. The distribution of abundances is nearly lognormal, as predicted. The model may be more generally applicable than to the spatial distribution of plankton. I relate the model predictions to observations of spatial patchiness in vegetation.
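Written out, the growth model described verbally above (our transcription, with P the abundance, D an effective turbulent diffusivity and r a stochastically fluctuating net growth rate) is a stochastic reaction-diffusion equation,
\[
\frac{\partial P(\mathbf{x},t)}{\partial t} \;=\; D\,\nabla^{2} P(\mathbf{x},t) \;+\; r(\mathbf{x},t)\,P(\mathbf{x},t), \qquad \mathbf{x}\in\mathbb{R}^{2},
\]
for which the abstract reports transect spectra S(k) ∝ k^-1.8, local time spectra S(f) ∝ f^-1.5, and lognormal abundance statistics.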
Three-integral oblate galaxy models ; A simple numerical scheme is presented for the construction of three-integral phase-space distribution functions for oblate galaxy models with a gravitational potential of Stäckel form, and an arbitrary axisymmetric luminous density distribution. The intrinsic velocity moments can be obtained simultaneously with little extra effort. The distribution of the inner and outer turning points of the short-axis tube orbits that are populated can be specified freely, and is chosen in advance. The entire distribution function is then derived from the density by an iterative scheme that starts from the explicitly known distribution function of the thin-orbit maximum streaming model, in which only the tubes with equal inner and outer turning points are populated. The versatility and limitations of this scheme are illustrated by the construction of a number of self-consistent three-integral flattened isochrone models of Kuzmin-Kutuzov type, and by investigation of special cases where the scheme is tractable analytically. This includes the behaviour of the distribution functions in the outer regions of the models. The scheme converges rapidly for models containing orbits with ratios of the outer to inner turning point as large as ten, and is particularly suited for the construction of tangentially anisotropic flattened models, self-consistent as well as non-consistent. The algorithm simplifies in the disk and spherical limit, and can be generalized to triaxial models.
Large Scale Power Spectrum from Peculiar Velocities Via Likelihood Analysis ; The power spectrum (PS) of mass density fluctuations, independent of 'biasing', is estimated from the Mark III catalog of peculiar velocities using Bayesian statistics. A parametric model is assumed for the PS, and the free parameters are determined by maximizing the probability of the model given the data. The method has been tested using detailed mock catalogs. It has been applied to generalized CDM models with and without COBE normalization. The robust result for all the models is a relatively high PS, with P(k) Ω^1.2 = (4.8 ± 1.5) × 10^3 (h^-1 Mpc)^3 at k = 0.1 h Mpc^-1. An extrapolation to smaller scales using the different CDM models yields σ8 Ω^0.6 = 0.88 ± 0.15. The peak is weakly constrained to the range 0.02 ≤ k ≤ 0.06 h Mpc^-1. These results are consistent with a direct computation of the PS (Kolatt & Dekel 1996). When compared to galaxy-density surveys, the implied values for β ≡ Ω^0.6/b are of order unity to within 25%. The parameters of the COBE-normalized, flat CDM model are confined by a 90% likelihood contour of the sort Ω h_50^μ n^ν = 0.8 ± 0.2, where μ = 1.3 and ν = 3.4 or 2.0 for models with and without tensor fluctuations, respectively. For open CDM the powers are μ = 0.95 and ν = 1.4 (no tensor fluctuations). A Γ-shape model free of COBE normalization yields only a weak constraint, Γ = 0.4 ± 0.2.
The Hubble Diagram of Type Ia Supernovae in Non-Uniform Pressure Universes ; We use the redshift-magnitude relation, as derived by Dąbrowski (1995), for the two exact non-uniform pressure spherically symmetric Stephani universes with the observer positioned at the center of symmetry, to test the agreement of these models with recent observations of high-redshift type Ia supernovae (SNIa), as reported in Perlmutter et al. (1997). By a particular choice of model parameters, we show that these models give an excellent fit to the observed redshifts and corrected B-band apparent magnitudes of the SNIa data, but for an age of the Universe which is typically about two Gyr greater than in the corresponding Friedmann model. Based on a value of H0 ~ 65 and assuming Λ ≥ 0, the P97 data imply a Friedmann age of at most 13 Gyr and, in fact, a best-fit (for q0 = 0.5) age of only 10 Gyr. Our Stephani models, on the other hand, can give a good fit to the P97 data with an age of up to 15 Gyr and could, therefore, significantly alleviate the conflict between recent cosmological and astrophysical age predictions. The choice of model parameters is quite robust: one requires only that the non-uniform pressure parameter, a, in one of the models is negative and satisfies a ≲ -3 km^2 s^-2 Mpc^-1. By allowing slightly larger, negative, values of a one may 'fine tune' the model to give an even better fit to the P97 data.
Line Emission from Stellar Winds in Active Galactic Nuclei ; This dissertation presents synthetic spectra and response functions of the red giant stellar line emission model of active galactic nuclei. Our results agree with the fundamental line emission characteristics of active galactic nuclei, within the model uncertainties, if the following additional assumptions are made: (1) the mean stellar mass loss rates decrease with distance from the black hole, and (2) the mean ionization parameters are lower than those postulated in Kazanas (1989). For models with enhanced mass loss, the zero-intensity full widths of the line profiles are proportional to the black hole mass to the power of 1/3. This scaling relation suggests that the black hole masses of NLS1s (narrow-line Seyfert 1s) are relatively low. Models with enhanced mass loss also predict minimum line-continuum delays that are proportional to the zero-intensity full widths of the profiles. Because of their high column densities, these models yield triangle-shaped response functions, which are not generally observed. On the other hand, models without enhanced mass loss yield line-continuum delays that are proportional to the square root of the continuum luminosity, which is consistent with empirical results. Models with high enough intercloud interstellar medium densities and a BLR-occulting accretion disk show line shifts. The broadest line emission and absorption profile components of lines similar to C IV, N V, and O VI are redshifted. Conversely, the narrowest emission and absorption profile components are blueshifted in such models. These results appear to agree with observations (e.g., Done & Krolik 1996).
The Role of Electron Captures in Chandrasekhar Mass Models for Type Ia Supernovae ; The Chandrasekhar mass model for Type Ia Supernovae (SNe Ia) has received increasing support from recent comparisons of observations with light curve predictions and modeling of synthetic spectra. It explains SN Ia events via thermonuclear explosions of accreting white dwarfs in binary stellar systems, caused by central carbon ignition when the white dwarf approaches the Chandrasekhar mass. As the electron gas in white dwarfs is degenerate, characterized by high Fermi energies in the high-density central regions, electron capture on intermediate mass and Fe-group nuclei plays an important role in explosive burning. Electron capture affects the central electron fraction Y_e, which determines the composition of the ejecta from such explosions. Up to the present, astrophysical tabulations based on shell model matrix elements were only available for light nuclei in the sd-shell. Recently, new Shell Model Monte Carlo (SMMC) and large-scale shell model diagonalization calculations have also been performed for pf-shell nuclei. These lead in general to a reduction of electron capture rates in comparison with previous, more phenomenological, approaches. Making use of these new shell-model-based rates, we present the first results for the composition of Fe-group nuclei produced in the central regions of SNe Ia and possible changes in the constraints on model parameters like ignition densities and burning front speeds.
Analytic model for galaxy and dark matter clustering ; We investigate an analytic model to compute the nonlinear power spectrum of dark matter, galaxies and their cross-correlation. The model is based on Press-Schechter halos, which cluster and have realistic dark matter profiles. The total power spectrum is a sum of two contributions, one from correlations between the halos and one from correlations within the same halo. We show that such a model can give dark matter power spectra which match well the results of N-body simulations, provided that the concentration parameter decreases with halo mass. The galaxy power spectrum differs from the dark matter power spectrum because the pair-weighted number of galaxies increases less rapidly than the halo mass, as predicted by theoretical models and observed in clusters. In this case the resulting power spectrum becomes a power law with a slope close to the observed one. Such a model also predicts a later onset of nonlinear clustering compared to the dark matter, which is needed to reconcile CDM models with the data. A generic prediction of this model is that the bias is scale dependent and non-monotonic. For red or elliptical galaxies the bias in the power spectrum may be scale dependent even on very large scales. Our predictions for galaxy-dark matter correlations, which can be observed through galaxy-galaxy lensing, show that these cannot be interpreted simply as an average halo profile of a typical galaxy, because different halo masses dominate at different scales and because larger halos host more than one galaxy. We discuss the prospects of using cross-correlations in combination with galaxy clustering to determine the dark matter power spectrum. (ABRIDGED)
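In the notation that is standard for halo models of this kind (a generic sketch of the two-contribution split described above, not necessarily the paper's exact conventions), the power spectrum decomposes as
\[
P(k) = P_{1h}(k) + P_{2h}(k), \qquad
P_{1h}(k) = \int dm\,\frac{dn}{dm}\,\Big(\frac{m}{\bar\rho}\Big)^{2}\,|u(k|m)|^{2}, \qquad
P_{2h}(k) \simeq \Big[\int dm\,\frac{dn}{dm}\,\frac{m}{\bar\rho}\,b(m)\,u(k|m)\Big]^{2} P_{\rm lin}(k),
\]
where dn/dm is the Press-Schechter mass function, u(k|m) the normalized Fourier transform of the halo profile and b(m) the halo bias; for galaxy spectra, m/ρ̄ is replaced by the pair-weighted mean number of galaxies per halo.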
What Damped Lyα Systems Tell Us About the Radial Distribution of Cold Gas at High Redshift ; We investigate the properties of damped Lyman-alpha systems (DLAS) in semi-analytic models, focusing on whether the models can reproduce the kinematic properties of low-ionization metal lines described by Prochaska & Wolfe (1997b, 1998). We explore a variety of approaches for modelling the radial distribution of the cold neutral gas associated with the galaxies in our models, and find that our results are very sensitive to this ingredient. If we use an approach based on Fall & Efstathiou (1980), in which the sizes of the discs are determined by conservation of angular momentum, we find that the majority of the DLAS correspond to a single galactic disc. These models generically fail to reproduce the observed distribution of velocity widths. In alternative models in which the gas discs are considerably more extended, a significant fraction of DLAS arise from lines of sight intersecting multiple gas discs in a common halo. These models produce kinematics that fit the observational data, and also seem to agree well with the results of recent hydrodynamical simulations. Thus we conclude that Cold Dark Matter based models of galaxy formation can be reconciled with the kinematic data, but only at the expense of the standard assumption that DLAS are produced by rotationally supported gas discs whose sizes are determined by conservation of angular momentum. We suggest that the distribution of cold gas at high redshift may be dominated by another process, such as tidal streaming due to mergers.
Are Narrow-Line Seyfert 1s Really Strange? ; Narrow-Line Seyfert 1s (NLS1s) are generally considered to be strange Active Galactic Nuclei (AGNs). Surprisingly, this makes them very useful for constraining models. I discuss what happens when one attempts to qualitatively fit the NLS1 phenomenon using the stellar wind model for AGN line emission (e.g., Kazanas 1989). The simplest way of narrowing the profile bases of this model to the widths observed in NLS1s is probably to lower the mass of the supermassive black hole. In a flux-limited and redshift-limited data set, this is indeed similar to increasing L/L_Edd. Because the broad line region (BLR) of the stellar line emission model scales with the tidal radius of the stars, this model predicts maximal BLR velocities of FWZI ∝ (L/L_Edd)^-1/3. This implies that the black holes of NLS1s are approximately 3^3 = 27 times less massive than those in other Seyfert 1s, if the stellar line emission model is correct. Another consequence of increasing L/L_Edd in this model is that it results in an increase in the wind edge densities. NLS1 spectra appear to support this result as well. Even the collateral features of NLS1s, such as the line asymmetries and continuum properties, appear to be easily explained within the context of this model. For better or worse, if the stellar wind line emission model is correct, NLS1s are not much stranger than other AGNs.
The Nature of High-Redshift Galaxies ; Using semi-analytic models of galaxy formation set within the Cold Dark Matter (CDM) merging hierarchy, we investigate several scenarios for the nature of the high-redshift ($z \gtrsim 2$) Lyman-break galaxies (LBGs). We consider a "collisional starburst" model in which bursts of star formation are triggered by galaxy-galaxy mergers, and find that a significant fraction of LBGs are predicted to be starbursts. This model reproduces the observed comoving number density of bright LBGs as a function of redshift and the observed luminosity function at $z \sim 3$ and $z \sim 4$, with a reasonable amount of dust extinction. Model galaxies at $z \sim 3$ have star formation rates, half-light radii, I-K colours, and internal velocity dispersions that are in good agreement with the data. Global quantities such as the star formation rate density and the cold gas and metal content of the Universe as a function of redshift also agree well. Two "quiescent" models without starbursts are also investigated. In one, the star formation efficiency in galaxies remains constant with redshift, while in the other, it scales inversely with the disc dynamical time and thus increases rapidly with redshift. The first quiescent model is strongly ruled out, as it does not produce enough high-redshift galaxies once realistic dust extinction is accounted for. The second quiescent model fits marginally, but underproduces cold gas and very bright galaxies at high redshift. A general conclusion is that star formation at high redshift must be more efficient than locally. The collisional starburst model appears to accomplish this naturally without violating other observational constraints.
Theoretical Modeling of Starburst Galaxies ; We have modeled a large sample of infrared starburst galaxies using both the PEGASE v2.0 and STARBURST99 codes to generate the spectral energy distribution of the young star clusters. PEGASE utilizes the Padova group tracks while STARBURST99 uses the Geneva group tracks, allowing comparison between the two. We used our MAPPINGS III code to compute photoionization models which include a self-consistent treatment of dust physics and chemical depletion. We use the standard optical diagnostic diagrams as indicators of the hardness of the EUV radiation field in these galaxies. These diagnostic diagrams are most sensitive to the spectral index of the ionizing radiation field in the 1-4 Rydberg region. We find that warm infrared starburst galaxies contain a relatively hard EUV field in this region. The PEGASE ionizing stellar continuum is harder in the 1-4 Rydberg range than that of STARBURST99. As the spectrum in this regime is dominated by emission from Wolf-Rayet (WR) stars, this difference is most likely due to the differences in the stellar atmosphere models used for the WR stars. We believe that the stellar atmospheres in STARBURST99 are more applicable to the starburst galaxies in our sample; however, they do not produce the hard EUV field in the 1-4 Rydberg region required by our observations. The inclusion of continuum metal blanketing in the models may be one solution. Supernova remnant (SNR) shock modeling shows that the contribution by mechanical energy from SNRs to the photoionization models is less than 20 per cent. The models presented here are used to derive a new theoretical classification scheme for starbursts and AGN galaxies based on the optical diagnostic diagrams.
Formation of the Black Hole in Nova Scorpii ; Israelian et al. (1999) showed that the stellar companion of the black-hole binary Nova Sco is polluted with material ejected in the supernova that accompanied the formation of the black-hole primary. Here we systematically investigate the implications of these observations for the black-hole formation process. Using a variety of supernova models, including both standard as well as hypernova models (for different helium-star masses, explosion energies, and explosion geometries) and a simple model for the evolution of the binary and the pollution of the secondary, we show that most of the observed abundance anomalies can be explained for a large range of model parameters, apart from the abundance of Ti. The best models are obtained for He star masses of 10 to 16 $M_\odot$, where spherical hypernova models are generally favoured over standard supernova ones. Aspherical hypernova models also produce acceptable fits, provided there is extensive lateral mixing. All models require substantial fallback and that the fallback material either reached the orbit of the secondary or was mixed efficiently with material that escaped. The black hole therefore formed in a two-step process, where the initial mass of the collapsed remnant was increased substantially by matter that fell back after the initial collapse. This may help to explain the high observed space velocity of Nova Sco, either because of a neutrino-induced kick (if a neutron star was formed first) or by asymmetric mass ejection in an asymmetric supernova explosion.
Future supernovae data and quintessence models ; The possibility to unambiguously determine the equation of state of the cosmic dark energy with existing and future supernovae data is investigated. We consider four evolution laws for this equation of state, corresponding to four quintessential models: (i) a cosmological constant, (ii) a general barotropic fluid, (iii) a perfect fluid with a linear equation of state and (iv) a more physical model based on a pseudo-Nambu-Goldstone boson field. We explicitly show the degeneracies present not only within each model but also between the different models; they are caused by the multi-integral relation between the equation of state of dark energy and the luminosity distance. Present supernova observations are analysed using a standard $\chi^2$ method and the minimal $\chi^2$ values obtained for each model are compared. We confirm the difficulty of discriminating between these models using present SNe Ia data only. By means of simulations, we then show that future SNAP observations will not remove all the degeneracies. For example, wrong estimations of $\Omega_m$ with a good value of $\chi^2_{\rm min}$ could be found if the right cosmological model is not used to fit the data. We finally give some probabilities to obtain unambiguous results, free from degeneracies. In particular, the probability to confuse a cosmological constant with a true barotropic fluid with an equation of state different from $-1$ is shown to be 95% at a 2 sigma level.
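The "multi-integral" degeneracy can be made explicit with the standard relations, written here for a spatially flat universe (textbook expressions rather than the paper's notation): the equation of state $w(z)$ enters the observable luminosity distance only through two nested integrals,
\[ d_L(z) = (1+z)\int_0^z \frac{c\,dz'}{H(z')}, \qquad H^2(z) = H_0^2\left[\Omega_m(1+z)^3 + (1-\Omega_m)\exp\!\left(3\int_0^z \frac{1+w(z'')}{1+z''}\,dz''\right)\right], \]
so rather different histories $w(z)$ can produce nearly identical $d_L(z)$ over the observed redshift range.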
Damped Lyman alpha systems and galaxy formation models - II. High ions and Lyman limit systems ; We investigate a model for the high-ionization state gas associated with observed damped Lyman-alpha systems, based on a semi-analytic model of galaxy formation set within the paradigm of hierarchical structure formation. In our model, the hot gas in halos and subhalos gives rise to CIV absorption, while the low-ionization state gas is associated with the cold gas in galaxies. The model matches the distribution of CIV column densities and leads naturally to kinematic properties that are in good agreement with the data. We examine the contribution of both hot and cold gas to sub-damped systems and suggest that the properties of these systems can be used as an important test of the model. We expect that sub-DLA systems will generally be composed of a single gas disk and thus predict that they should have markedly different kinematics than the damped systems. Finally, we find that hot halo gas produces less than one third of Lyman limit systems at redshift three. We model the contribution of minihalos (halos with virial velocities $\lesssim 35$ km/s) to Lyman limit systems and find that they may contain as much gas as is observed in these systems. However, if we adopt realistic models of the gas density distribution we find that these systems are not a significant source of Lyman limit absorption. Instead we suggest that uncollapsed gas outside of virialized halos is responsible for most of the Lyman limit systems at high redshift.
Evolution of the Cosmological Density Distribution Function from the Local Collapse Model ; We present a general framework to treat the evolution of the one-point probability distribution function (PDF) for the cosmic density $\delta$ and velocity-divergence $\theta$ fields. In particular, we derive an evolution equation for the one-point PDFs and consider the stochastic nature associated with these quantities. Under the local approximation that the evolution of the cosmic fluid fields can be characterized by Lagrangian local dynamics with finite degrees of freedom, the evolution equation for the PDFs becomes a closed form and consistent formal solutions are constructed. Adopting this local approximation, we explicitly evaluate the one-point PDFs $P(\delta)$ and $P(\theta)$ from the spherical and the ellipsoidal collapse models as representative Lagrangian local dynamics. For a Gaussian initial condition, while the local density PDF from the ellipsoidal model almost coincides with that of the spherical model, differences between the spherical and ellipsoidal collapse models are found in the velocity-divergence PDF. Importantly, the joint PDF of local density, $P(\delta,t;\delta',t')$, evaluated at the same Lagrangian position but at the different times $t$ and $t'$ from the ellipsoidal collapse model exhibits a large amount of scatter. The mean relation between $\delta$ and $\delta'$ fails to match the one-to-one mapping obtained from the spherical collapse model. Moreover, the joint PDF $P(\delta;\theta)$ from the ellipsoidal collapse model shows a similar stochastic feature, both of which are indeed consistent with recent results from N-body simulations.
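As a concrete example of a Lagrangian local mapping, the spherical collapse dynamics is often summarized by a closed-form approximation relating the nonlinear density to the linear density contrast $\delta_l$ (a commonly used fitting form, quoted here for illustration rather than taken from the paper):
\[ 1+\delta \simeq \left(1-\tfrac{2}{3}\,\delta_l\right)^{-3/2}, \]
which reduces to $\delta \simeq \delta_l$ for small $|\delta_l|$ and diverges as $\delta_l$ approaches the collapse threshold; the ellipsoidal model replaces this one-to-one mapping by dynamics that also depend on the local tidal shear, which is the origin of the scatter discussed above.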
Bayesian Analysis of the Chaplygin Gas and Cosmological Constant Models using the SNe Ia Data ; The type Ia supernovae observational data are used to estimate the parameters of a cosmological model with cold dark matter and the Chaplygin gas. The Chaplygin gas model depends essentially on four parameters: the Hubble constant, the velocity of sound of the Chaplygin gas, the curvature of the Universe and the fractional densities of the Chaplygin gas and the cold dark matter. The Bayesian parameter estimation yields $H_0 = 62.1^{+3.3}_{-3.4}$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{k0} = -0.84^{+1.51}_{-1.23}$, $\Omega_{m0} = 0.0^{+0.82}_{-0.0}$, $\Omega_{c0} = 1.40^{+1.15}_{-1.16}$, $\bar{A} = c_s^2 = 0.93^{+0.07}_{-0.21}\,c^2$, $t_0 = 14.2^{+2.8}_{-1.3}$ Gyr and $q_0 = -0.98^{+1.02}_{-0.62}$. These and other results indicate that a Universe completely dominated by the Chaplygin gas is favoured, at least as far as the type Ia supernovae data are concerned. A closed and accelerating Universe is also favoured. The Bayesian statistics indicates that the Chaplygin gas model is more likely than the standard cosmological constant ($\Lambda$CDM) model at the 55.3% confidence level when an integration over all free parameters is performed. Assuming a spatially flat curvature, this percentage rises to 65.3%. On the other hand, if the density of dark matter is fixed at zero, the Chaplygin gas model becomes preferred over the $\Lambda$CDM model at the 91.8% confidence level. Finally, the hypothesis of a flat Universe and baryonic matter ($\Omega_{b0} = 0.04$) implies a Chaplygin gas model preferred over the $\Lambda$CDM at a confidence level of 99.4%.
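A minimal sketch of the kind of likelihood evaluation underlying such a fit is given below, assuming a flat CDM plus Chaplygin-gas expansion history and made-up data points; the actual analysis uses the real SNe Ia sample and marginalizes over parameters with Bayesian priors rather than just evaluating a chi-squared. The Chaplygin-gas term uses $\rho/\rho_0 = \sqrt{\bar A + (1-\bar A)(1+z)^6}$, which follows from $p = -A/\rho$ and energy conservation, with $\bar A = A/\rho_0^2$.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, Om, Oc, Abar):
    """Dimensionless expansion rate H(z)/H0 for CDM plus a Chaplygin gas (flat case assumed)."""
    chap = Oc * np.sqrt(Abar + (1.0 - Abar) * (1.0 + z) ** 6)
    return np.sqrt(Om * (1.0 + z) ** 3 + chap)

def mu_model(z, H0, Om, Oc, Abar):
    """Distance modulus mu = 5 log10(d_L / Mpc) + 25."""
    dc, _ = quad(lambda zp: 1.0 / E(zp, Om, Oc, Abar), 0.0, z)
    dl = (1.0 + z) * (C_KM_S / H0) * dc  # luminosity distance in Mpc
    return 5.0 * np.log10(dl) + 25.0

def chi2(params, z_obs, mu_obs, sigma_mu):
    H0, Om, Oc, Abar = params
    mu = np.array([mu_model(z, H0, Om, Oc, Abar) for z in z_obs])
    return float(np.sum(((mu_obs - mu) / sigma_mu) ** 2))

# Toy data points standing in for the SNe Ia sample (illustrative only).
z_obs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
mu_obs = np.array([38.3, 41.0, 42.3, 43.2, 43.9])
sigma_mu = np.full_like(mu_obs, 0.2)

print(chi2((62.1, 0.0, 1.0, 0.93), z_obs, mu_obs, sigma_mu))
```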
Early Structure Formation and Reionization in a Cosmological Model with a Running Primordial Power Spectrum ; (Abridged) We study high-redshift structure formation and reionization in a LCDM universe under the assumption that the spectral power index of primordial density fluctuations is a function of length scale. We adopt a particular formulation of the running spectral index (RSI) model as suggested by the recent WMAP data. While early structure forms hierarchically in the RSI model, the reduced power on small scales causes a considerable delay in the formation epoch of low-mass ($10^6\,M_\odot$) "minihalos" compared to the LCDM model. The extremely small number of gas clouds in the RSI model indicates that reionization is initiated later than $z \sim 15$, generally resulting in a smaller total Thomson optical depth than in the LCDM model. By carrying out radiative transfer calculations, we also study reionization by stellar populations formed in galaxies. Even with a top-heavy initial mass function representing an early population of massive stars and/or an extraordinarily high photon emission rate from galaxies, the total optical depth can only be as large as $\tau \simeq 0.1$ for reasonable models of early star formation. The RSI model is thus in conflict with the large Thomson optical depth inferred by the WMAP satellite.
LensClean revisited ; We discuss the LensClean algorithm which, for a given gravitational lens model, fits a source brightness distribution to interferometric radio data in a similar way as standard Clean does in the unlensed case. The lens model parameters can then be varied in order to minimize the residuals and determine the best model for the lens mass distribution. Our variant of this method is improved in order to be useful and stable even for high dynamic range systems with nearly degenerate lens model parameters. Our test case B0218+357 is dominated by two bright images, but the information needed to constrain the unknown parameters is provided only by the relatively smooth and weak Einstein ring. The new variant of LensClean is able to fit lens models even in this difficult case. In order to allow the use of general mass models with LensClean, we develop the new method LenTil, which inverts the lens equation much more reliably than any other method. This high reliability is essential for its use as part of LensClean. Finally, a new method is developed to produce source plane maps of the unlensed source from the best LensClean brightness models. This method is based on the new concept of dirty beams in the source plane. The application to the lens B0218+357 leads to the first useful constraints for the lens position and thus to a result for the Hubble constant. These results are presented in an accompanying Paper II, together with a discussion of classical lens modelling for this system.
A Measurement of the Electromagnetic Luminosity of a Kerr Black Hole ; Some active galactic nuclei, microquasars, and gamma ray bursts may be powered by the electromagnetic braking of a rapidly rotating black hole. We investigate this possibility via axisymmetric numerical simulations of a black hole surrounded by a magnetized plasma. The plasma is described by the equations of general relativistic magnetohydrodynamics, and the effects of radiation are neglected. The evolution is followed for $2000\,GM/c^3$, and the computational domain extends from inside the event horizon to typically $40\,GM/c^2$. We compare our results to two analytic steady state models, including the force-free magnetosphere of Blandford & Znajek. Along the way we present a self-contained rederivation of the Blandford-Znajek model in Kerr-Schild (horizon-penetrating) coordinates. We find that (1) the low density polar regions of the numerical models agree well with the Blandford-Znajek model; (2) many of our models have an outward Poynting flux on the horizon in the Kerr-Schild frame; (3) none of our models have a net outward energy flux on the horizon; and (4) one of our models, in which the initial disk has net magnetic flux, shows a net outward angular momentum flux on the horizon. We conclude with a discussion of the limitations of our model, astrophysical implications, and problems to be addressed by future numerical experiments.
The spectral evolution of impulsive solar X-ray flares. II. Comparison of observations with models ; We study the evolution of the spectral index and the normalization (flux) of the nonthermal component of the electron spectra observed by RHESSI during 24 solar hard X-ray flares. The quantitative evolution is confronted with the predictions of simple electron acceleration models featuring the soft-hard-soft behaviour. The comparison is general in scope and can be applied to different acceleration models, provided that they make predictions for the behavior of the spectral index as a function of the normalization. A simple stochastic acceleration model yields plausible best-fit model parameters for about 77% of the 141 events consisting of rise and decay phases of individual hard X-ray peaks. However, it implies unphysically high electron acceleration rates and total energies for the others. Other simple acceleration models, such as a constant rate of accelerated electrons or a constant input power, have a similar failure rate. The peaks inconsistent with the simple acceleration models have smaller variations in the spectral index. The cases compatible with a simple stochastic model require typically a few times $10^{36}$ electrons accelerated per second at a threshold energy of 18 keV in the rise phases and 24 keV in the decay phases of the flare peaks.
Phantom Dark Energy Models with Negative Kinetic Term ; We examine phantom dark energy models derived from a scalar field with a negative kinetic term, for which $V(\phi)$ approaches infinity asymptotically. All such models can be divided into three classes, corresponding to an equation of state parameter $w$ with asymptotic behavior $w \to -1$, $w \to w_0 < -1$, and $w \to -\infty$. We derive the conditions on the potential $V(\phi)$ which lead to each of these three types of behavior. For models with $w \to -1$, we derive the conditions on $V(\phi)$ which determine whether or not such models produce a future big rip. Observational constraints are derived on two classes of these models: power-law potentials with $V(\phi) = \lambda\phi^\alpha$ (with $\alpha$ positive or negative) and exponential potentials of the form $V(\phi) = \beta e^{\lambda\phi^\alpha}$. It is shown that these models spend more time in a state with $\Omega_m \approx \Omega_\phi$ than do corresponding models with a constant value of $w$, thus providing a more satisfactory solution to the coincidence problem.
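For reference, the standard relations behind these classes (textbook phantom-field expressions, not specific to this paper): for a homogeneous scalar field with a reversed-sign kinetic term,
\[ \rho_\phi = -\tfrac{1}{2}\dot\phi^2 + V(\phi), \qquad p_\phi = -\tfrac{1}{2}\dot\phi^2 - V(\phi), \qquad w = \frac{p_\phi}{\rho_\phi} = \frac{-\dot\phi^2/2 - V}{-\dot\phi^2/2 + V} \le -1 \quad (\text{for } V > \dot\phi^2/2), \]
and the field equation becomes $\ddot\phi + 3H\dot\phi - V'(\phi) = 0$, so the field rolls up its potential; whether $w$ tends to $-1$, a constant $w_0 < -1$, or $-\infty$ is then controlled by how fast $V(\phi)$ grows.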
f(R) gravity theories in Palatini formalism: cosmological dynamics and observational constraints ; We make a systematic study of the cosmological dynamics for a number of f(R) gravity theories in the Palatini formalism. We find a number of interesting results: (i) models based on theories of the type (a) $f(R) = R - \beta/R^n$ and (b) $f(R) = R + \alpha\ln R - \beta$, unlike in the metric formalism, are capable of producing the sequence of radiation-dominated, matter-dominated and de Sitter periods, and (ii) models based on theories of the type (c) $f(R) = R + \alpha R^m - \beta/R^n$ can produce early as well as late accelerating phases. However, for the classes of models considered here, we have been unable to find the sequence of all four dynamical epochs required to account for the complete cosmological dynamics, even though three out of four phases are possible. We also place observational constraints on these models using the recently released supernovae data (SNLS) as well as the baryon acoustic oscillation peak and the CMB shift parameter. The best-fit values are found to be $n = 0.027$, $\alpha = 4.63$ for the models based on (a) and $\alpha = 0.11$, $\beta = 4.62$ for the models based on (b), neither of which are significantly preferred over the LCDM model. The models based on (c) are also consistent with the data with suitable choices of their parameters.
Models for Massive Stellar Populations with Rotation ; We present and discuss evolutionary synthesis models for massive stellar populations generated with the Starburst99 code in combination with a new set of stellar evolution models accounting for rotation. The new stellar evolution models were compiled from several data releases of the Geneva group and cover heavy-element abundances ranging from twice solar to one fifth solar. The evolution models were computed for rotation velocities on the zero-age main sequence of 0 and 300 km/s and with the latest revision of stellar mass-loss rates. Since the mass coverage is incomplete, in particular at non-solar chemical composition, our parameter study is still preliminary and must be viewed as exploratory. Stellar population properties computed with Starburst99 and the new evolution models show some marked differences in comparison with models obtained using earlier tracks. Since individual stars now tend to be more luminous and bluer when on the blue side of the Hertzsprung-Russell diagram, the populations mirror this trend. For instance, increases by factors of two or more are found for the light-to-mass ratios at ultraviolet to near-infrared wavelengths, as well as for the output of hydrogen ionizing photons. If these results are confirmed once the evolution models have matured, recalibrations of certain star-formation and initial mass function indicators will be required.
Ion Pair Potentials-of-Mean-Force in Water ; Recent molecular simulation and integral equation results for alkali-halide ion pair potentials-of-mean-force in water are discussed. Dielectric model calculations are implemented to check that these models produce the characteristic structure of contact and solvent-separated minima for oppositely charged ions in water under physiological thermodynamic conditions. Comparison of the dielectric model results with the most current molecular level information indicates that the dielectric model does not, however, provide an accurate description of these potentials-of-mean-force. We note that linear dielectric models correspond to modelistic implementations of second-order thermodynamic perturbation theory for the excess chemical potential of a distinguished solute molecule. Therefore, the molecular theory corresponding to the dielectric models is second-order thermodynamic perturbation theory for that excess chemical potential. The second-order, or fluctuation, term raises a technical computational issue of the treatment of long-ranged interactions, similar to the one which arises in calculation of the dielectric constant of the solvent. It is contended that the most important step for further development of dielectric models would be a separate assessment of the first-order perturbative term (equivalently, the potential at zero charge), which vanishes in the dielectric models but is generally nonzero. Parameterization of radii and molecular volumes should then be based on the second-order perturbative term alone. Illustrative initial calculations are presented and discussed.
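To make the terms being discussed concrete, second-order thermodynamic perturbation theory for the excess chemical potential of a distinguished solute can be written in the standard cumulant form (a generic statement of the expansion, not the paper's specific notation):
\[ \mu^{\rm ex} \approx \langle \Delta U\rangle_0 \;-\; \frac{\beta}{2}\left(\langle \Delta U^2\rangle_0 - \langle \Delta U\rangle_0^2\right), \]
where $\Delta U$ is the solute-solvent coupling being switched on (here, the ionic charging energy), the averages are taken in the uncoupled reference ensemble, and $\beta = 1/k_BT$. The first-order term $\langle\Delta U\rangle_0$ is the "potential at zero charge" referred to above; linear dielectric models keep only the second-order (fluctuation) term.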
Learning Unification-Based Natural Language Grammars ; When parsing unrestricted language, wide-coverage grammars often undergenerate. Undergeneration can be tackled either by sentence correction or by grammar correction. This thesis concentrates upon automatic grammar correction (or machine learning of grammar) as a solution to the problem of undergeneration. Broadly speaking, grammar correction approaches can be classified as being either data-driven or model-based. Data-driven learners use data-intensive methods to acquire grammar. They typically use grammar formalisms unsuited to the needs of practical text processing and cannot guarantee that the resulting grammar is adequate for subsequent semantic interpretation. That is, data-driven learners acquire grammars that generate strings that humans would judge to be grammatically ill-formed (they overgenerate) and fail to assign linguistically plausible parses. Model-based learners are knowledge-intensive and are reliant for success upon the completeness of a model of grammaticality. But in practice, the model will be incomplete. Given that in this thesis we deal with undergeneration by learning, we hypothesise that the combined use of data-driven and model-based learning would allow data-driven learning to compensate for model-based learning's incompleteness, whilst model-based learning would compensate for data-driven learning's unsoundness. We describe a system that we have used to test the hypothesis empirically. The system combines data-driven and model-based learning to acquire unification-based grammars that are more suitable for practical text parsing. Using the Spoken English Corpus as data, and by quantitatively measuring undergeneration, overgeneration and parse plausibility, we show that this hypothesis is correct.
Integrability and Applications of the Exactly-Solvable Haldane-Shastry One-Dimensional Quantum Spin Chain ; Recently, the one-dimensional model of N spins with $S=\frac{1}{2}$ on a circle, interacting with an exchange that falls off with the inverse square of the separation, $H_{\rm ISE} = \sum_{i\neq j} \left[\frac{N}{\pi}\sin\frac{(i-j)\pi}{N}\right]^{-2}\left(\vec{s}_i\cdot\vec{s}_j - \frac{1}{4}\right)$, or ISE model, has received ample attention. Its special features include relatively simple eigenfunctions, noninteracting elementary excitations that obey semionic statistics (spinons), and a large "quantum group" symmetry algebra called the Yangian. This model is fully integrable, albeit in a slightly different sense than the more traditional nearest-neighbor exchange (NNE) Heisenberg chain. This thesis comes in 4 chapters. Chapter 1 introduces the model and presents the construction of a subset of the eigenfunctions. The other eigenfunctions are shown to be generated by the action of the Yangian symmetry algebra of $H_{\rm ISE}$. Chapter 2 presents a method to construct the set of constants of the motion of the ISE model. The ISE model is tractable enough to obtain its zero-magnetic-field dynamical structure factors. Chapter 3 attempts to extend this to a nonzero magnetic field, where, due to the presence of spinons in the ground state, more complicated excitations contribute: small numbers of magnons and spinons. We discuss the relation to the more complicated NNE model. Finally, chapter 4 illustrates how a recently conjectured new form of Off-Diagonal Long-Range Order in antiferromagnetic spin chains can be reinterpreted as a spinon propagator in the ISE model, and verified numerically. We briefly comment on its relevance to stabilizing superconductivity in the layered cuprates.
Contrasting Dynamic Spin Susceptibility Models and their Relation to High Temperature Superconductivity ; We compare the normal-state resistivities $\rho$ and the critical temperatures $T_c$ for superconducting $d_{x^2-y^2}$ pairing due to antiferromagnetic (AF) spin fluctuation exchange in the context of the two phenomenological dynamical spin susceptibility models recently proposed by Millis, Monien, and Pines (MMP) and Monthoux and Pines (MP) and, respectively, by Radtke, Ullah, Levin, and Norman (RULN) for the cuprate high-$T_c$ materials. Assuming comparable electronic bandwidths and resistivities in both models, we show that the RULN model gives a much lower d-wave $T_c$ ($\lesssim 20$ K) than the MMP model (with $T_c \sim 100$ K). We demonstrate that these profound differences in the $T_c$'s arise from fundamental differences in the spectral weight distributions of the two model susceptibilities and are not primarily caused by differences in the calculational techniques employed by MP and RULN. The MMP model, claimed to fit NMR data in YBCO, exhibits substantial amounts of spin fluctuation spectral weight up to an imposed cutoff of 400 meV, whereas, in the RULN model, claimed to fit YBCO neutron scattering data, the weight is narrowly peaked and effectively cut off by 100 meV. Further neutron scattering experiments, to explore the spectral weight distribution at all wavevectors over a sufficiently large excitation energy range, will thus be of crucial importance to resolve the question whether AF spin fluctuation exchange provides a viable mechanism to account for high-$T_c$ superconductivity. The large high-frequency boson spectral weight, needed to generate both a high d-wave $T_c$ and a low normal-state resistivity, also implies large values, of order unity, for the Migdal smallness parameter, thus casting serious doubt on the validity of the very
Model Selection for Support Vector Machine Classification ; We address the problem of model selection for Support Vector Machine (SVM) classification. For a fixed functional form of the kernel, model selection amounts to tuning kernel parameters and the slack penalty coefficient C. We begin by reviewing a recently developed probabilistic framework for SVM classification. An extension to the case of SVMs with quadratic slack penalties is given and a simple approximation for the evidence is derived, which can be used as a criterion for model selection. We also derive the exact gradients of the evidence in terms of posterior averages and describe how they can be estimated numerically using Hybrid Monte Carlo techniques. Though computationally demanding, the resulting gradient ascent algorithm is a useful baseline tool for probabilistic SVM model selection, since it can locate maxima of the exact (unapproximated) evidence. We then perform extensive experiments on several benchmark data sets. The aim of these experiments is to compare the performance of probabilistic model selection criteria with alternatives based on estimates of the test error, namely the so-called "span estimate" and Wahba's Generalized Approximate Cross-Validation (GACV) error. We find that all the "simple" model criteria (Laplace evidence approximations, and the Span and GACV error estimates) exhibit multiple local optima with respect to the hyperparameters. While some of these give performance that is competitive with results from other approaches in the literature, a significant fraction lead to rather higher test errors. The results for the evidence gradient ascent method show that the exact evidence also exhibits local optima, but these give test errors which are much less variable and also consistently lower than for the simpler model selection criteria.
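As a point of comparison for the hyperparameter-tuning problem described above, here is a minimal empirical sketch that selects the RBF kernel width and the slack penalty C by cross-validated grid search; this is a simple stand-in for illustration, not the evidence-based or gradient-ascent procedures studied in the paper, and the data set is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy data standing in for a benchmark set.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Grid over the slack penalty C and the RBF kernel width gamma.
param_grid = {"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-3, 1, 5)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("selected hyperparameters:", search.best_params_)
print("cross-validated accuracy:", search.best_score_)
```

Like the simple criteria discussed in the abstract, such test-error-based selection can land in different local optima depending on the grid and the folds, which is precisely the variability the evidence-based approach is meant to diagnose.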
Delays, Inaccuracies and Anticipation in Microscopic Traffic Models ; We generalize a wide class of time-continuous microscopic traffic models to include essential aspects of driver behaviour not captured by these models. Specifically, we consider (i) finite reaction times, (ii) estimation errors, (iii) looking several vehicles ahead (spatial anticipation), and (iv) temporal anticipation. The estimation errors are modelled as stochastic Wiener processes and lead to time-correlated fluctuations of the acceleration. We show that the destabilizing effects of reaction times and estimation errors can essentially be compensated for by spatial and temporal anticipation, that is, the combination of stabilizing and destabilizing effects results in the same qualitative macroscopic dynamics as that of the respective underlying simple car-following model. In many cases, this justifies the use of simplified, physics-oriented models with a few parameters only. Although the qualitative dynamics is unchanged, multi-anticipation increases both the spatial and temporal scales of stop-and-go waves and other complex patterns of congested traffic, in agreement with real traffic data. Remarkably, the anticipation allows accident-free smooth driving in complex traffic situations even if reaction times exceed typical time headways.
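A minimal numerical sketch of the ingredients listed above follows; the acceleration function (an IDM-like form), the way multi-anticipation and the noisy gap perception enter, and all parameter values are illustrative assumptions rather than the calibrated models of the paper.

```python
import numpy as np

DT = 0.1          # integration time step [s]
T_REACT = 1.0     # finite reaction time [s]
TAU_ERR = 20.0    # correlation time of the gap-estimation error [s]
SIGMA_ERR = 0.05  # relative amplitude of the estimation error
N_ANTIC = 3       # number of leaders considered (spatial anticipation)

def idm_accel(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=1.5, s0=2.0):
    """IDM-like acceleration toward desired speed v0 with a dynamic safe gap."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

def step(x, v, hist, w, rng):
    """One update of positions x and speeds v (vehicle 0 is the free leader)."""
    n = len(x)
    delay = int(round(T_REACT / DT))
    x_d, v_d = hist[-delay] if len(hist) >= delay else hist[0]  # delayed (reacted-to) state
    # Time-correlated (Ornstein-Uhlenbeck / Wiener-type) estimation error.
    w += (-w / TAU_ERR) * DT + SIGMA_ERR * np.sqrt(2 * DT / TAU_ERR) * rng.standard_normal(n)
    acc = np.zeros(n)
    for i in range(1, n):
        # Spatial anticipation: average the interaction with up to N_ANTIC leaders.
        terms = []
        for j in range(i - 1, max(i - 1 - N_ANTIC, -1), -1):
            gap = (x_d[j] - x_d[i]) * np.exp(w[i])        # perceived (noisy) gap
            terms.append(idm_accel(v_d[i], gap / (i - j), v_d[i] - v_d[j]))
        acc[i] = np.mean(terms)
    v_new = np.maximum(v + acc * DT, 0.0)
    x_new = x + v_new * DT
    return x_new, v_new, w

rng = np.random.default_rng(0)
n_veh = 10
x = np.arange(n_veh)[::-1] * 25.0   # initial positions, leader in front
v = np.full(n_veh, 20.0)
w = np.zeros(n_veh)
hist = [(x.copy(), v.copy())]
for _ in range(1000):
    x, v, w = step(x, v, hist, w, rng)
    hist.append((x.copy(), v.copy()))
print("final speeds:", np.round(v, 2))
```

Varying T_REACT, SIGMA_ERR and N_ANTIC in such a sketch shows the qualitative trade-off discussed above: delays and noise destabilise the platoon, while anticipating several leaders restores smooth flow.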
Cluster growth in far-from-equilibrium particle models with diffusion, detachment, reattachment and deposition ; Monolayer cluster growth in far-from-equilibrium systems is investigated by applying simulation and analytic techniques to minimal hard core particle (exclusion) models. The first model (I), for post-deposition coarsening dynamics, contains mechanisms of diffusion, attachment, and slow activated detachment (at rate $\epsilon \ll 1$) of particles on a line. Simulation shows three successive regimes of cluster growth: fast attachment of isolated particles; detachment allowing further $(\epsilon t)^{1/3}$ coarsening of the average cluster size; and a $t^{-1/2}$ approach to a saturation size going like $\epsilon^{-1/2}$. Model II generalizes the first one in having an additional mechanism of particle deposition into cluster gaps, suppressed for the smallest gaps. This model exhibits early rapid filling, leading to slowing deposition due to the increasing scarcity of deposition sites, and then continued power-law $(\epsilon t)^{1/2}$ cluster size coarsening through the redistribution allowed by slow detachment. The basic $(\epsilon t)^{1/3}$ domain growth laws and $\epsilon^{-1/2}$ saturation in model I are explained by a simple scaling picture. A second, fuller approach is presented which employs a mapping of cluster configurations to a column picture and an approximate factorization of the cluster configuration probability within the resulting master equation. This allows quantitative results for the saturation of model I in excellent agreement with the simulation results. For model II, it provides a one-variable scaling function solution for the coarsening probability distribution, and in particular quantitative agreement with the cluster length scaling and its amplitude.
Physics of cuprates with the two-band Hubbard model: The validity of the one-band Hubbard model ; We calculate the properties of the two-band Hubbard model using the Dynamical Cluster Approximation. The phase diagram resembles the generic phase diagram of the cuprates, showing a strong asymmetry with respect to the electron- and hole-doped regimes, in agreement with experiment. Asymmetric features are also seen in one-particle spectral functions and in the charge, spin and d-wave pairing susceptibility functions. We address the possible reduction of the two-band model to a low-energy single-band one, as was suggested by Zhang and Rice. Comparing the two-band Hubbard model properties with the single-band Hubbard model ones, we have found similar low-energy physics provided that the next-nearest-neighbor hopping term $t'$ has a significant value ($t'/t \approx 0.3$). The parameter $t'$ is the main culprit for the electron-hole asymmetry. However, a significant value of $t'$ cannot be provided in a strict Zhang and Rice picture, where the extra holes added into the system bind to the existing Cu holes forming local singlets. We notice that by considering approximate singlet states, such as plaquette ones, reasonable values of $t'$, which capture qualitatively the physics of the two-band model, can be obtained. We conclude that a single-band $t$-$t'$-$U$ Hubbard model captures the basic physics of the cuprates concerning superconductivity, antiferromagnetism, pseudogap and electron-hole asymmetry, but is not suitable for a quantitative analysis or to describe physical properties involving energy scales larger than about 0.5 eV.
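For readers less familiar with the shorthand, the single-band $t$-$t'$-$U$ Hubbard model referred to here has the standard form (sign conventions for $t'$ differ between electron- and hole-doped parametrizations):
\[ H = -t\sum_{\langle i,j\rangle,\sigma}\left(c^\dagger_{i\sigma}c_{j\sigma} + \mathrm{h.c.}\right) - t'\sum_{\langle\langle i,j\rangle\rangle,\sigma}\left(c^\dagger_{i\sigma}c_{j\sigma} + \mathrm{h.c.}\right) + U\sum_i n_{i\uparrow}n_{i\downarrow}, \]
with $\langle i,j\rangle$ and $\langle\langle i,j\rangle\rangle$ denoting nearest- and next-nearest-neighbour pairs, and $t'/t \approx 0.3$ the value quoted above.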
Classdesc and Graphcode: support for scientific programming in C++ ; Object-oriented programming languages such as Java and Objective C have become popular for implementing agent-based and other object-based simulations, since objects in those languages can reflect, i.e. make run-time queries of an object's structure. This allows, for example, a fairly trivial serialisation routine (conversion of an object into a binary representation that can be stored or passed over a network) to be written. However, C++ does not offer this ability, as type information is thrown away at compile time. Yet C++ is often a preferred development environment, whether for performance reasons or for its expressive features such as operator overloading. In scientific coding, changes to a model's code take place constantly, as the model is refined and different phenomena are studied. Yet traditionally, facilities such as checkpointing, routines for initialising model parameters and analysis of model output depend on the underlying model remaining static; otherwise, each time a model is modified, a whole slew of supporting routines needs to be changed to reflect the new data structures. Reflection offers the advantage of the simulation framework adapting to the underlying model without programmer intervention, reducing the effort of modifying the model. In this paper, we present the Classdesc system, which brings many of the benefits of object reflection to C++, ClassdescMP, which dramatically simplifies coding of MPI based parallel programs, and Graphcode, a general purpose data-parallel programming environment.
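To illustrate what run-time reflection buys in a language that has it natively, the snippet below shows the kind of generic serialisation that Java or Objective C make easy and that Classdesc has to synthesise for C++ by other means; this is a language-neutral illustration of the concept, not the Classdesc API.

```python
import json

class Agent:
    """Toy simulation object; its fields may change as the model evolves."""
    def __init__(self, wealth=0.0, position=(0, 0), neighbours=None):
        self.wealth = wealth
        self.position = position
        self.neighbours = neighbours if neighbours is not None else []

def serialise(obj):
    # Run-time inspection of the object's attributes: nothing here has to be
    # rewritten when fields are added to or removed from the class.
    return json.dumps(vars(obj))

print(serialise(Agent(wealth=3.5, position=(2, 7), neighbours=[1, 4])))
# -> {"wealth": 3.5, "position": [2, 7], "neighbours": [1, 4]}
```

Without reflection, a C++ code would need an equivalent hand-written routine for every class, which is exactly the maintenance burden the Classdesc approach is designed to remove.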
Evolution of the density contrast in inhomogeneous dust models ; With the help of families of density contrast indicators, we study the tendency of gravitational systems to become increasingly lumpy with time. Depending upon their domain of definition, these indicators can be local or global. We make a comparative study of these indicators in the context of the inhomogeneous cosmological models of Lemaitre-Tolman and Szekeres. In particular, we look at the temporal asymptotic behaviour of these indicators and ask under what conditions, and for which classes of models, they evolve monotonically in time. We find that for the case of ever-expanding models, there is a larger class of indicators that grow monotonically with time, whereas the corresponding class for the recollapsing models is more restricted. Nevertheless, in the absence of decaying modes, indicators exist which grow monotonically with time for both ever-expanding and recollapsing models simultaneously. On the other hand, no such indicators may be found which grow monotonically if the decaying modes are allowed to exist. We also find the conditions for these indicators to be non-divergent at the initial singularity in both models. Our results can be of potential relevance for understanding structure formation in inhomogeneous settings and in debates regarding gravitational entropy and the arrow of time. In particular, the spatial dependence of turning points in inhomogeneous cosmologies may result in multiple density contrast arrows in recollapsing models over certain epochs. We also find that different notions of asymptotic homogenisation may be deduced, depending upon the density contrast indicators used.
Models of Universe with a Delayed Big-Bang singularity. III. Solving the horizon problem for an off-center observer ; This paper is the third of a series dedicated to the study of the Delayed Big-Bang (DBB) class of inhomogeneous cosmological models of Lemaitre-Tolman-Bondi type. In the first work, it was shown that the geometrical properties of the DBB model are such that the horizon problem can be solved, without need for any inflationary phase, for an observer situated sufficiently near the symmetry center of the model to justify the "centered earth" approximation. In the second work, we studied, in a peculiar subclass of the DBB models, the extent to which the values of the dipole and quadrupole moments measured in the cosmic microwave background radiation (CMBR) temperature anisotropies can support a cosmological origin. This implies a relation between the location of the observer in the universe and the model parameter value: the farther the observer from the symmetry center, the closer our current universe is to a local homogeneous pattern. However, in this case, the centered earth approximation is no longer valid and the results of the first work do not apply. We show here that the horizon problem can be solved, in the DBB model, also for an off-center observer, which improves the consistency of this model regarding the assumption of a cosmological origin for the CMBR large-scale anisotropy.
Anisotropic Cosmological Models with Energy Density Dependent Bulk Viscosity ; An analysis is presented of the Bianchi type I cosmological models with a bulk viscosity when the universe is filled with the stiff fluid $p = \epsilon$, while the viscosity is a power function of the energy density, such as $\eta = \alpha\epsilon^n$. Although the exact solutions are obtainable only when $2n$ is an integer, the characteristics of the evolution can be clarified for models with arbitrary value of $n$. It is shown that, except for the $n = 0$ model, which has solutions with infinite energy density at the initial state, the anisotropic solutions that evolve to positive Hubble functions in the later stage will begin with a Kasner-type curvature singularity and zero energy density at finite past for the $n \geq 1$ models, and with finite Hubble functions and finite negative energy density at infinite past for the $n < 1$ models. In the course of evolution, matter is created and the anisotropies of the universe are smoothed out. At the final stage, cosmologies are driven to an infinite expansion state, de Sitter spacetime, or a Friedmann universe asymptotically. However, the de Sitter spacetime is the only attractor state for the $n < 1/2$ models. The solutions that are free of cosmological singularity for any finite proper time are singled out. The extension to the higher-dimensional models is also discussed.
Natural Inflation: Particle Physics Models, Power Law Spectra for Large Scale Structure, and Constraints from COBE ; A pseudo-Nambu-Goldstone boson, with a potential of the form $V(\phi) = \Lambda^4[1 \pm \cos(\phi/f)]$, naturally gives rise to inflation if $f \sim M_{\rm Pl}$ and $\Lambda \sim M_{\rm GUT}$. We show how this can arise in technicolor-like and superstring models, and work out an explicit string example in the context of multiple gaugino condensation models. We study the cosmology of this model in detail, and find that sufficient reheating to ensure that baryogenesis can take place requires $f \gtrsim 0.3\,M_{\rm Pl}$. The primordial density fluctuation spectrum generated is a non-scale-invariant power law, $P(k) \propto k^{n_s}$, with $n_s \simeq 1 - M_{\rm Pl}^2/(8\pi f^2)$, leading to more power on large length scales than the $n_s = 1$ Harrison-Zeldovich spectrum. The standard CDM model with $0 \lesssim n_s \lesssim 0.6$-$0.7$ could in principle explain the large-scale clustering observed in the APM and IRAS galaxy surveys as well as large-scale flows, but the COBE microwave anisotropy implies such low amplitudes (or high bias factors, $b \gtrsim 2$) for these CDM models that galaxy formation occurs too late to be viable; combining COBE with sufficiently early galaxy formation or the large-scale flows leads to $n_s \gtrsim 0.6$, or $f \gtrsim 0.3\,M_{\rm Pl}$ as well. For extended and power law inflation models, this constraint is even tighter, $n_s \gtrsim 0.7$; combined with other bounds on large bubbles in extended inflation, this leaves little room for most extended models.
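As a quick check of the quoted correspondence between $f$ and the spectral tilt, using only the relation stated in the abstract:
\[ n_s \simeq 1 - \frac{M_{\rm Pl}^2}{8\pi f^2} \quad\Longrightarrow\quad f = 0.3\,M_{\rm Pl}: \; n_s \simeq 1 - \frac{1}{8\pi(0.3)^2} \approx 0.56, \qquad f = 0.5\,M_{\rm Pl}: \; n_s \approx 0.84, \]
so the bound $f \gtrsim 0.3\,M_{\rm Pl}$ indeed corresponds to $n_s \gtrsim 0.6$ at the level of rounding used here.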