Carbon-oxygen ultramassive white dwarfs in general relativity ; We employ the La Plata stellar evolution code, LPCODE, to compute the first set of constant rest-mass carbon-oxygen ultramassive white dwarf evolutionary sequences for masses higher than 1.29 Msun that fully take into account the effects of general relativity on their structural and evolutionary properties. In addition, we employ the LP-PUL pulsation code to compute adiabatic g-mode Newtonian pulsations on our fully relativistic equilibrium white dwarf models. We find that carbon-oxygen white dwarfs more massive than 1.382 Msun become gravitationally unstable with respect to general relativity effects, this limit being higher than the 1.369 Msun we found for oxygen-neon white dwarfs. As the stellar mass approaches the limiting mass value, the stellar radius becomes substantially smaller than in the Newtonian models. The thermo-mechanical and evolutionary properties of the most massive white dwarfs are also strongly affected by general relativity effects. We also provide magnitudes for our cooling sequences in different passbands. Finally, we explore for the first time the pulsational properties of relativistic ultramassive white dwarfs and find that the period spacings and oscillation kinetic energies are strongly affected in the most massive white dwarfs. We conclude that general relativity effects should be taken into account for an accurate assessment of the structural, evolutionary, and pulsational properties of white dwarfs with masses above 1.30 Msun.
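The structural deviation described in this abstract comes from replacing Newtonian hydrostatic equilibrium with its general-relativistic counterpart. The abstract does not write out the equations, but the textbook forms that such fully relativistic sequences integrate are:

```latex
% Newtonian hydrostatic equilibrium:
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^{2}}
% General-relativistic (TOV) equation; the bracketed factors drive the
% smaller radii and the instability near the limiting mass:
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^{2}}
  \left[1+\frac{P}{\rho c^{2}}\right]
  \left[1+\frac{4\pi r^{3}P}{m(r)c^{2}}\right]
  \left[1-\frac{2Gm(r)}{rc^{2}}\right]^{-1}
```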
Check Me If You Can: Detecting ChatGPT-Generated Academic Writing using CheckGPT ; With ChatGPT under the spotlight, utilizing large language models (LLMs) for academic writing has drawn significant discussion and concern in the community. While substantial research efforts have been devoted to detecting LLM-Generated Content (LLM-content), most of the attempts are still at an early stage of exploration. In this paper, we present a holistic investigation of detecting LLM-generated academic writing, providing a dataset, evidence, and algorithms, in order to inspire more community effort to address the concern of LLM academic misuse. We first present GPABenchmark, a benchmarking dataset of 600,000 samples of human-written, GPT-written, GPT-completed, and GPT-polished abstracts of research papers in CS, physics, and humanities and social sciences (HSS). We show that existing open-source and commercial GPT detectors perform unsatisfactorily on GPABenchmark, especially on GPT-polished text. Moreover, through a user study of 150 participants, we show that it is highly challenging for human users, including experienced faculty members and researchers, to identify GPT-generated abstracts. We then present CheckGPT, a novel LLM-content detector consisting of a general representation module and an attentive-BiLSTM classification module, which is accurate, transferable, and interpretable. Experimental results show that CheckGPT achieves an average classification accuracy of 98% to 99% for the task-specific and discipline-specific detectors and the unified detectors. CheckGPT is also highly transferable: without tuning, it achieves 90% accuracy in new domains, such as news articles, while a model tuned with approximately 2,000 samples in the target domain achieves 98% accuracy. Finally, we demonstrate the explainability insights obtained from CheckGPT to reveal the key behaviors of how LLMs generate texts.
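The abstract specifies CheckGPT's architecture only at the block level. As a rough, self-contained sketch of what an attentive-BiLSTM classification head over a frozen representation module can look like (all layer sizes, names, and the pooling scheme below are assumptions, not CheckGPT's actual configuration):

```python
import torch
import torch.nn as nn

class AttentiveBiLSTMClassifier(nn.Module):
    """Sketch of an attentive-BiLSTM head over frozen token embeddings."""
    def __init__(self, emb_dim=768, hidden=256, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scalar relevance score per token
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_embs):                  # (batch, seq_len, emb_dim)
        h, _ = self.lstm(token_embs)                # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over tokens
        pooled = (w * h).sum(dim=1)                 # attention-weighted pooling
        return self.head(pooled)                    # logits: human vs. GPT

# Usage: token embeddings from any frozen LM play the "general representation" role.
logits = AttentiveBiLSTMClassifier()(torch.randn(4, 128, 768))
```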
Generalized surface multifractality in 2D disordered systems ; Recently, a concept of generalized multifractality, which characterizes fluctuations and correlations of critical eigenstates, was introduced and explored for all ten symmetry classes of disordered systems. Here, by using the nonlinear sigma-model field theory, we extend the theory of generalized multifractality to boundaries of systems at criticality. Our numerical simulations on two-dimensional (2D) systems of symmetry classes A, C, and AII fully confirm the analytical predictions of pure-scaling observables and Weyl symmetry relations between critical exponents of surface generalized multifractality. This demonstrates the validity of the nonlinear sigma-model field theory for the description of Anderson-localization critical phenomena not only in the bulk but also on the boundary. The critical exponents strongly violate generalized parabolicity, in analogy with earlier results for the bulk, corroborating the conclusion that the considered Anderson-localization critical points are not described by conformal field theories. We further derive relations between generalized surface multifractal spectra and linear combinations of Lyapunov exponents of a strip in quasi-one-dimensional geometry, which hold under the assumption of invariance with respect to a logarithmic conformal map. Our numerics demonstrate that these relations hold with excellent accuracy. Taken together, our results indicate an intriguing situation: conformal invariance is broken but holds partially at critical points of Anderson localization.
UniG3D: A Unified 3D Object Generation Dataset ; The field of generative AI has a transformative impact on various areas, including virtual reality, autonomous driving, the metaverse, gaming, and robotics. Among these applications, 3D object generation techniques are of utmost importance. This technique has unlocked fresh avenues in the realm of creating, customizing, and exploring 3D objects. However, the quality and diversity of existing 3D object generation methods are constrained by the inadequacies of existing 3D object datasets, including issues related to text quality, the incompleteness of multimodal data representation encompassing 2D rendered images and 3D assets, as well as the size of the dataset. In order to resolve these issues, we present UniG3D, a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on the Objaverse and ShapeNet datasets. This pipeline converts each raw 3D model into a comprehensive multimodal data representation (text, image, point cloud, mesh) by employing rendering engines and multimodal models. These modules ensure the richness of textual information and the comprehensiveness of data representation. Remarkably, the universality of our pipeline refers to its ability to be applied to any 3D dataset, as it only requires raw 3D data. The selection of data sources for our dataset is based on their scale and quality. Subsequently, we assess the effectiveness of our dataset by employing Point-E and SDFusion, two widely recognized methods for object generation, tailored to the prevalent 3D representations of point clouds and signed distance functions. Our dataset is available at https://unig3d.github.io.
Feasibility of Renewable Energy for Power Generation at the South Pole ; Transitioning from fossil-fuel power generation to renewable energy generation and energy storage in remote locations has the potential to reduce both carbon emissions and cost. We present a techno-economic analysis for implementation of a hybrid renewable energy system at the South Pole in Antarctica, which currently hosts several high-energy physics experiments with non-trivial power needs. A tailored model for the use of solar photovoltaics, wind turbine generators, lithium-ion energy storage, and long-duration energy storage at this site is explored in different combinations with and without traditional diesel energy generation. We find that the least-cost system includes all three energy generation sources and lithium-ion energy storage. For an example steady-state load of 170 kW, this hybrid system reduces diesel consumption by 95% compared to an all-diesel configuration. Over the course of a 15-year analysis period, the reduced diesel usage leads to a net savings of $57M, with a time to payback of approximately two years. All the scenarios modeled show that the transition to renewables is highly cost-effective under the unique economics and constraints of this extremely remote site.
DisCo: Disentangled Control for Referring Human Dance Generation in Real World ; Generative AI has made significant strides in computer vision, particularly in image/video synthesis conditioned on text descriptions. Despite the advancements, it remains challenging, especially in the generation of human-centric content such as dance synthesis. Existing dance synthesis methods struggle with the gap between synthesized content and real-world dance scenarios. In this paper, we define a new problem setting: Referring Human Dance Generation, which focuses on real-world dance scenarios with three important properties: (i) Faithfulness: the synthesis should retain the appearance of both the human subject (foreground) and the background from the reference image, and precisely follow the target pose; (ii) Generalizability: the model should generalize to unseen human subjects, backgrounds, and poses; (iii) Compositionality: it should allow for composition of seen/unseen subjects, backgrounds, and poses from different sources. To address these challenges, we introduce a novel approach, DisCo, which includes a novel model architecture with disentangled control to improve the faithfulness and compositionality of dance synthesis, and an effective human attribute pre-training for better generalizability to unseen humans. Extensive qualitative and quantitative results demonstrate that DisCo can generate high-quality human dance images and videos with diverse appearances and flexible motions. Code, demo, video, and visualization are available at https://disco-dance.github.io.
Enhancing Job Recommendation through LLM-based Generative Adversarial Networks ; Recommending suitable jobs to users is a critical task in online recruitment platforms, as it can enhance users' satisfaction and the platforms' profitability. However, existing job recommendation methods encounter challenges such as the low quality of users' resumes, which hampers their accuracy and practical effectiveness. With the rapid development of large language models (LLMs), utilizing the rich external knowledge encapsulated within them, as well as their powerful capabilities of text processing and reasoning, is a promising way to complete users' resumes for more accurate recommendations. However, directly leveraging LLMs to enhance recommendation results is not a one-size-fits-all solution, as LLMs may suffer from fabricated generation and few-shot problems, which degrade the quality of resume completion. In this paper, we propose a novel LLM-based approach for job recommendation. To alleviate the limitation of fabricated generation for LLMs, we extract accurate and valuable information beyond users' self-description, which helps the LLMs better profile users for resume completion. Specifically, we not only extract users' explicit properties (e.g., skills, interests) from their self-description but also infer users' implicit characteristics from their behaviors for more accurate and meaningful resume completion. Nevertheless, some users still suffer from few-shot problems, which arise due to scarce interaction records and lead to limited guidance for the models in generating high-quality resumes. To address this issue, we propose aligning unpaired low-quality resumes with high-quality generated resumes using Generative Adversarial Networks (GANs), which can refine the resume representations for better recommendation results. Extensive experiments on three large real-world recruitment datasets demonstrate the effectiveness of our proposed method.
NormAUG: Normalization-guided Augmentation for Domain Generalization ; Deep learning has made significant advancements in supervised learning. However, models trained in this setting often face challenges due to domain shift between training and test sets, resulting in a significant drop in performance during testing. To address this issue, several domain generalization methods have been developed to learn robust and domain-invariant features from multiple training domains that can generalize well to unseen test domains. Data augmentation plays a crucial role in achieving this goal by enhancing the diversity of the training data. In this paper, inspired by the observation that normalizing an image with different statistics, generated by batches drawn from various domains, can perturb its features, we propose a simple yet effective method called NormAUG (Normalization-guided Augmentation). Our method includes two paths: the main path and the auxiliary (augmented) path. During training, the auxiliary path includes multiple sub-paths, each corresponding to batch normalization for a single domain or a random combination of multiple domains. This introduces diverse information at the feature level and improves the generalization of the main path. Moreover, from a theoretical perspective, our NormAUG method effectively reduces the existing upper bound for generalization. During the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance. Extensive experiments are conducted on multiple benchmark datasets to validate the effectiveness of our proposed method.
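As a loose illustration of the normalization-guided idea, the auxiliary path can be realized as a bank of batch-normalization layers, one per domain (or per random domain combination), that re-normalizes features with different statistics than the main path; module and parameter names here are illustrative, not the paper's code:

```python
import torch.nn as nn

class NormAugBlock(nn.Module):
    """Sketch: normalize features with main-path or per-domain BN statistics."""
    def __init__(self, channels, num_domains):
        super().__init__()
        self.main_bn = nn.BatchNorm2d(channels)              # main path
        self.domain_bns = nn.ModuleList(                     # auxiliary sub-paths
            [nn.BatchNorm2d(channels) for _ in range(num_domains)])

    def forward(self, x, domain_idx=None):
        if domain_idx is None:                               # main path
            return self.main_bn(x)
        return self.domain_bns[domain_idx](x)                # augmented sub-path
```

At test time the paper ensembles predictions obtained through the auxiliary path, which a per-domain switch like the one above makes straightforward.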
Controllable Generation of Dialogue Acts for Dialogue Systems via Few-Shot Response Generation and Ranking ; Dialogue systems need to produce responses that realize multiple types of dialogue acts (DAs) with high semantic fidelity. In the past, natural language generators (NLGs) for dialogue were trained on large parallel corpora that map from a domain-specific DA and its semantic attributes to an output utterance. Recent work shows that pretrained large language models (LLMs) offer new possibilities for controllable NLG using prompt-based learning. Here we develop a novel few-shot overgenerate-and-rank approach that achieves the controlled generation of DAs. We compare eight few-shot prompt styles that include a novel method of generating from textual pseudo-references using a textual style transfer approach. We develop six automatic ranking functions that identify outputs with both the correct DA and high semantic accuracy at generation time. We test our approach on three domains and four LLMs. To our knowledge, this is the first work on NLG for dialogue that automatically ranks outputs using both DA and attribute accuracy. For completeness, we compare our results to fine-tuned few-shot models trained with 5 to 100 instances per DA. Our results show that several prompt settings achieve perfect DA accuracy and near-perfect semantic accuracy (99.81%), and perform better than few-shot fine-tuning.
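A minimal sketch of the overgenerate-and-rank pattern described here, assuming hypothetical stand-ins llm_sample for the prompted generator and rankers for the DA/attribute-accuracy scoring functions:

```python
def overgenerate_and_rank(prompt, llm_sample, rankers, n=10):
    """Sample n candidate responses, return the one the rankers score highest."""
    candidates = [llm_sample(prompt) for _ in range(n)]   # overgenerate
    def score(c):
        return sum(r(c) for r in rankers)                 # e.g., DA classifier + attribute match
    return max(candidates, key=score)                     # rank and select
```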
Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions ; Recent advancements in Multimodal Large Language Models (MLLMs) have been utilizing Visual Prompt Generators (VPGs) to convert visual features into tokens that LLMs can recognize. This is achieved by training the VPGs on millions of image-caption pairs, where the VPG-generated tokens of images are fed into a frozen LLM to generate the corresponding captions. However, this image-captioning-based training objective inherently biases the VPG to concentrate solely on the primary visual contents sufficient for caption generation, often neglecting other visual details. This shortcoming results in MLLMs' underperformance in comprehending demonstrative instructions, consisting of multiple, interleaved, and multimodal instructions that demonstrate the required context to complete a task. To address this issue, we introduce a generic and lightweight Visual Prompt Generator Complete module (VPG-C), which can infer and complete the missing details essential for comprehending demonstrative instructions. Further, we propose a synthetic discriminative training strategy to fine-tune VPG-C, eliminating the need for supervised demonstrative instructions. As for evaluation, we build DEMON, a comprehensive benchmark for demonstrative instruction understanding. Synthetically trained with the proposed strategy, VPG-C achieves significantly stronger zero-shot performance across all tasks of DEMON. Further evaluation on the MME and OwlEval benchmarks also demonstrates the superiority of VPG-C. Our benchmark, code, and pre-trained models are available at https://github.com/DCDmllm/Cheetah.
Pareto Invariant Representation Learning for Multimedia Recommendation ; Multimedia recommendation involves personalized ranking tasks, where multimedia content is usually represented using a generic encoder. However, these generic representations introduce spurious correlations that fail to reveal users' true preferences. Existing works attempt to alleviate this problem by learning invariant representations, but overlook the balance between independent and identically distributed (IID) and out-of-distribution (OOD) generalization. In this paper, we propose a framework called Pareto Invariant Representation Learning (PaInvRL) to mitigate the impact of spurious correlations from an IID-OOD multi-objective optimization perspective, by simultaneously learning invariant representations (intrinsic factors that attract user attention) and variant representations (other factors). Specifically, PaInvRL includes three iteratively executed modules: (i) the heterogeneous identification module, which identifies the heterogeneous environments to reflect distributional shifts for user-item interactions; (ii) the invariant mask generation module, which learns invariant masks based on the Pareto-optimal solutions that minimize the adaptively weighted Invariant Risk Minimization (IRM) and Empirical Risk Minimization (ERM) losses; and (iii) the convert module, which generates both variant representations and item-invariant representations for training a multimodal recommendation model that mitigates spurious correlations and balances the generalization performance within and across the environmental distributions. We compare the proposed PaInvRL with state-of-the-art recommendation models on three public multimedia recommendation datasets (Movielens, TikTok, and Kwai), and the experimental results validate the effectiveness of PaInvRL for both within- and cross-environmental learning.
A Survey on Deep Multimodal Learning for Body Language Recognition and Generation ; Body language (BL) refers to the nonverbal communication expressed through physical movements, gestures, facial expressions, and postures. It is a form of communication that conveys information, emotions, attitudes, and intentions without the use of spoken or written words. It plays a crucial role in interpersonal interactions and can complement or even override verbal communication. Deep multimodal learning techniques have shown promise in understanding and analyzing these diverse aspects of BL. This survey emphasizes their applications to BL generation and recognition. Several common BLs are considered, i.e., Sign Language (SL), Cued Speech (CS), Co-speech (CoS), and Talking Head (TH), and we conduct an analysis and establish connections among these four BLs for the first time. Their generation and recognition often involve multimodal approaches. Benchmark datasets for BL research are collected and organized, along with the evaluation of state-of-the-art methods on these datasets. The survey highlights challenges such as limited labeled data, multimodal learning, and the need for domain adaptation to generalize models to unseen speakers or languages. Future research directions are presented, including exploring self-supervised learning techniques, integrating contextual information from other modalities, and exploiting large-scale pre-trained multimodal models. In summary, this survey paper provides a comprehensive understanding of deep multimodal learning for BL generation and recognition for the first time. By analyzing advancements, challenges, and future directions, it serves as a valuable resource for researchers and practitioners in advancing this field. In addition, we maintain a continuously updated paper list for deep multimodal learning for BL recognition and generation at https://github.com/wentaoL86/awesome-body-language.
HoloFusion: Towards Photorealistic 3D Generative Modeling ; Diffusion-based image generators can now produce high-quality and diverse samples, but their success has yet to fully translate to 3D generation: existing diffusion methods can either generate low-resolution but 3D-consistent outputs, or detailed 2D views of 3D objects with potential structural defects and lacking view consistency or realism. We present HoloFusion, a method that combines the best of these approaches to produce high-fidelity, plausible, and diverse 3D samples while learning from a collection of multi-view 2D images only. The method first generates coarse 3D samples using a variant of the recently proposed HoloDiffusion generator. Then, it independently renders and upsamples a large number of views of the coarse 3D model, super-resolves them to add detail, and distills those into a single, high-fidelity implicit 3D representation, which also ensures view consistency of the final renders. The super-resolution network is trained as an integral part of HoloFusion, end-to-end, and the final distillation uses a new sampling scheme to capture the space of super-resolved signals. We compare our method against existing baselines, including DreamFusion, Get3D, EG3D, and HoloDiffusion, and achieve, to the best of our knowledge, the most realistic results on the challenging CO3Dv2 dataset.
SayNav: Grounding Large Language Models for Dynamic Planning to Navigation in New Environments ; Semantic reasoning and dynamic planning capabilities are crucial for an autonomous agent to perform complex navigation tasks in unknown environments. Succeeding in these tasks requires a large amount of common-sense knowledge that humans possess. We present SayNav, a new approach that leverages human knowledge from Large Language Models (LLMs) for efficient generalization to complex navigation tasks in unknown large-scale environments. SayNav uses a novel grounding mechanism that incrementally builds a 3D scene graph of the explored environment as input to LLMs, for generating feasible and contextually appropriate high-level plans for navigation. The LLM-generated plan is then executed by a pre-trained low-level planner that treats each planned step as a short-distance point-goal navigation sub-task. SayNav dynamically generates step-by-step instructions during navigation and continuously refines future steps based on newly perceived information. We evaluate SayNav on a new multi-object navigation task, which requires the agent to utilize a massive amount of human knowledge to efficiently search for multiple different objects in an unknown environment. SayNav outperforms an oracle-based PointNav baseline, achieving a success rate of 95.35% (vs. 56.06% for the baseline) under ideal settings on this task, highlighting its ability to generate dynamic plans for successfully locating objects in large-scale new environments. In addition, SayNav also enables efficient generalization of learning to navigate from simulation to real novel environments.
Adaptive Input-image Normalization for Solving the Mode Collapse Problem in GAN-based X-ray Images ; Biomedical image datasets can be imbalanced due to the rarity of targeted diseases. Generative Adversarial Networks play a key role in addressing this imbalance by enabling the generation of synthetic images to augment datasets. It is important to generate synthetic images that incorporate a diverse range of features so as to accurately represent the distribution of features present in the training imagery. Furthermore, the absence of diverse features in synthetic images can degrade the performance of machine learning classifiers. The mode collapse problem impacts Generative Adversarial Networks' capacity to generate diversified images. Mode collapse comes in two varieties: intra-class and inter-class. In this paper, both varieties of the mode collapse problem are investigated, and their subsequent impact on the diversity of synthetic X-ray images is evaluated. This work contributes an empirical demonstration of the benefits of integrating adaptive input-image normalization with the Deep Convolutional GAN and the Auxiliary Classifier GAN to alleviate the mode collapse problems. Synthetically generated images are utilized for data augmentation and for training a Vision Transformer model. The classification performance of the model is evaluated using accuracy, recall, and precision scores. Results demonstrate that the DCGAN and the ACGAN with adaptive input-image normalization outperform the DCGAN and ACGAN with unnormalized X-ray images, as evidenced by superior diversity scores and classification scores.
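The abstract does not spell out the adaptive input-image normalization itself; as a placeholder, a per-image rescaling applied before feeding X-rays to the GAN might look like the following (the actual method in the paper may differ substantially):

```python
import numpy as np

def adaptive_normalize(image):
    """Placeholder sketch: rescale each X-ray by its own intensity statistics
    so every training image occupies the same dynamic range before GAN training."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)   # per-image min-max to [0, 1]
```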
Constraints First: A New MDD-based Model to Generate Sentences Under Constraints ; This paper introduces a new approach to generating strongly constrained texts. We consider standardized sentence generation for the typical application of vision screening. To solve this problem, we formalize it as a discrete combinatorial optimization problem and utilize multivalued decision diagrams (MDD), a well-known data structure for dealing with constraints. In our context, one key strength of MDD is the ability to compute an exhaustive set of solutions without performing any search. Once the sentences are obtained, we apply a language model (GPT-2) to keep the best ones. We detail this for English and also for French, where the agreement and conjugation rules are known to be more complex. Finally, with the help of GPT-2, we get hundreds of bona fide candidate sentences. When compared with the few dozen sentences usually available in the well-known vision screening test (MNREAD), this brings a major breakthrough in the field of standardized sentence generation. Also, as it can be easily adapted to other languages, it has the potential to make the MNREAD test even more valuable and usable. More generally, this paper highlights MDD as a convincing alternative for constrained text generation, especially when the constraints are hard to satisfy, but also for many other prospects.
Balance Laws as Test of Gravitational Waveforms ; Gravitational waveforms play a crucial role in comparing observed signals to theoretical predictions. However, obtaining accurate analytical waveforms directly from general relativity remains challenging. Existing methods involve a complex blend of post-Newtonian theory, effective-one-body formalism, numerical relativity, and interpolation, introducing systematic errors. As gravitational wave astronomy advances with new detectors, these errors gain significance, particularly when testing general relativity in the nonlinear regime. A recent development proposes a novel approach to address this issue. By deriving precise constraints, or balance laws, directly from full nonlinear general relativity, this method offers a means to evaluate waveform quality, detect template weaknesses, and ensure internal consistency. Before delving into the intricacies of balance laws in full nonlinear general relativity, we illustrate the concept using a detailed mechanical analogy. We examine a dissipative mechanical system as an example, demonstrating how mechanical balance laws can gauge the accuracy of approximate solutions in capturing the complete physical scenario. While mechanical balance laws are straightforward, deriving balance laws in electromagnetism and general relativity demands a rigorous foundation rooted in mathematically precise concepts of radiation. Following the analogy with electromagnetism, we derive balance laws in general relativity. As a proof of concept, we employ an analytical approximate waveform model, showcasing how these balance laws serve as a litmus test for the model's validity.
Generalized Schrödinger Bridge Matching ; Modern distribution matching algorithms for training diffusion or flow models directly prescribe the time evolution of the marginal distributions between two boundary distributions. In this work, we consider a generalized distribution matching setup, where these marginals are only implicitly described as a solution to some task-specific objective function. The problem setup, known as the Generalized Schrödinger Bridge (GSB), appears prevalently in many scientific areas both within and without machine learning. We propose Generalized Schrödinger Bridge Matching (GSBM), a new matching algorithm inspired by recent advances, generalizing them beyond kinetic energy minimization and to account for task-specific state costs. We show that such a generalization can be cast as solving conditional stochastic optimal control, for which efficient variational approximations can be used, and further debiased with the aid of path integral theory. Compared to prior methods for solving GSB problems, our GSBM algorithm always preserves a feasible transport map between the boundary distributions throughout training, thereby enabling stable convergence and significantly improved scalability. We empirically validate our claims on an extensive suite of experimental setups, including crowd navigation, opinion depolarization, LiDAR manifolds, and image domain transfer. Our work brings new algorithmic opportunities for training diffusion models enhanced with task-specific optimality structures.
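In the standard formulation that this abstract builds on, the GSB problem augments kinetic-energy-minimal bridging with a task-specific state cost V; one common way to write it (notation assumed) is:

```latex
% Generalized Schrodinger Bridge: steer an SDE between fixed marginals
% while minimizing kinetic energy plus a task-specific state cost V:
\min_{u}\; \mathbb{E}\!\left[\int_{0}^{1}
  \Big(\tfrac{1}{2}\,\|u_t(X_t)\|^{2} + V(X_t, t)\Big)\,dt\right]
\quad \text{s.t.}\quad
dX_t = u_t(X_t)\,dt + \sigma\, dW_t,
\qquad X_0 \sim \mu_0,\;\; X_1 \sim \mu_1 .
```

Setting V = 0 recovers the classical Schrödinger bridge; the task-specific costs (e.g., obstacles in crowd navigation) enter through V.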
Synthetic Spectra of Hydrodynamic Models of Type Ia Supernovae ; We present detailed NLTE synthetic spectra of hydrodynamic SNe Ia models. We make no assumptions about the form of the spectrum at the inner boundary. We calculate both Chandrasekhar-mass deflagration models and sub-Chandrasekhar "helium detonators." Gamma-ray deposition is handled in a simple, accurate manner. We have parameterized the storage of energy that arises from the time-dependent deposition of radioactive decay energy in a reasonable manner that spans the expected range. We find that the Chandrasekhar-mass deflagration model W7 of Nomoto et al. shows good agreement with the observed spectra of SN 1992A and SN 1994D, particularly in the UV, where our models are expected to be most accurate. The sub-Chandrasekhar models do not reproduce the UV deficit observed in normal SNe Ia. They do bear some resemblance to subluminous SNe Ia, but the shape of the spectra (i.e., the colors) is opposite to that of the observed ones, and the intermediate-mass element lines such as Si II and Ca II are extremely weak, which seems to be a generic difficulty of the models. Although the sub-Chandrasekhar models have a significant helium abundance, unlike Chandrasekhar-mass models, helium lines are not prominent in the spectra near maximum light and thus do not act as a spectral signature for the progenitor.
Two-component galaxy models: phase-space constraints ; The properties of the analytical phase-space distribution function of two-component spherical self-consistent galaxy models, where one density distribution follows the Hernquist profile and the other a gamma=0 model, with different total masses and core radii (the H0 models presented in Ciotti 1998), are here summarized. A variable amount of radial Osipkov-Merritt orbital anisotropy is allowed in both components. The necessary and sufficient conditions that the model parameters must satisfy in order to correspond to a model where each of the two distinct components has a positive distribution function (DF), the so-called model consistency, are analytically derived, together with some results on the more general problem of the consistency of two-component (gamma1, gamma2) models. The possibility of adding, in a consistent way, a black hole at the center of radially anisotropic gamma models is also discussed. In particular, it is proved that a globally isotropic Hernquist component is consistent for any mass and core radius of the superimposed gamma=0 halo; on the contrary, only a maximum value of the core radius is allowed for the gamma=0 component when a Hernquist halo is added. The combined effect of halo concentration and orbital anisotropy is successively investigated.
Disks with Jet, ADAF, or EDAF for Sgr A* ; We investigate various models of accretion disks for Sgr A*, one of the most puzzling sources in the Galaxy. The generic picture we consider consists of a black hole, an accretion disk, and a jet. Various accretion models are able to explain the low NIR flux of Sgr A*: a standard accretion disk with a jet, an ADAF, or an EDAF (Ejection Dominated Accretion Flow) model. We find that all of these models are conceptually similar. The accretion model which allows the formation of the jet at the innermost edge of the disk requires sub-Keplerian gas motion and a very large base of the jet. The large base of the jet may be unrealistic for Sgr A*, since the jet model and the observations suggest that the jet is collimated and anchored in a very narrow region of the disk close to the black hole. Alternatively, one can think of a jet plus wind model (EDAF), where most of the energy goes out without being dissipated in the disk. The model resembles the ADAF model at small radii. At large radii the energy is ejected by a wind.
Numerical Models of Binary Neutron Star System Mergers. I. Numerical Methods and Equilibrium Data for Newtonian Models ; The numerical modeling of binary neutron star mergers has become a subject of much interest in recent years. While a full and accurate model of this phenomenon would require the evolution of the equations of relativistic hydrodynamics along with the Einstein field equations, a qualitative study of the early stages of inspiral can be accomplished by either Newtonian or post-Newtonian models, which are more tractable. In this paper we offer a comparison of results from both rotating-frame and nonrotating (inertial-frame) Newtonian calculations. We find that the rotating-frame calculations offer significantly improved accuracy as compared with the inertial-frame models. Furthermore, we show that inertial-frame models exhibit significant and erroneous angular momentum loss during the simulations, which leads to an unphysical inspiral of the two neutron stars. We also examine the dependence of the models on initial conditions by considering initial configurations that consist of spherical neutron stars as well as stars that are in equilibrium and tidally distorted. We compare our models with those of Rasio & Shapiro (1992, 1994a) and New & Tohline (1997). Finally, we investigate the use of the isolated star approximation for the construction of initial data.
Axisymmetric, 3-Integral Models of Galaxies: A Massive Black Hole in NGC 3379 ; We fit axisymmetric 3-integral dynamical models to NGC 3379 using the line-of-sight velocity distribution obtained from HST/FOS spectra of the galaxy center and ground-based long-slit spectroscopy along four position angles, with the light distribution constrained by WFPC2 and ground-based images. We have fitted models with inclinations from 29 degrees (intrinsic galaxy type E5) to 90 degrees (intrinsic E1) and black hole masses from 0 to 1e9 Msolar. The best-fit black hole masses range from 6e7 to 2e8 Msolar, depending on inclination. The velocity ellipsoid of the best model is consistent with neither isotropy nor a two-integral distribution function. Along the major axis, the velocity ellipsoid becomes tangential at the innermost bin, radial at mid-range radii, and tangential again at the outermost bins. For the acceptable models, the ratio of radial to tangential dispersion at mid-range radii lies in the range 1.1 < sigma_r/sigma_t < 1.7. Compared with these 3-integral models, 2-integral isotropic models overestimate the black hole mass since they cannot provide adequate radial motion. However, the models presented in this paper still contain restrictive assumptions, namely constant M/L and spheroidal symmetry, requiring yet more models to study black hole properties in complete generality.
Rotational modes of nonisentropic stars and the gravitational radiation driven instability ; We investigate the properties of r-modes and inertial modes of slowly rotating, nonisentropic, Newtonian stars, taking account of the effects of the Coriolis force and the centrifugal force. For the nonisentropic models we consider only two cases, namely models with stable fluid stratification throughout the interior and models that are fully convective. For simplicity, we call these two kinds of models radiative and convective models in this paper. In both cases, we assume the deviation of the models from isentropic structure is small. Examining the dissipation timescales due to gravitational radiation and several viscous processes for polytropic neutron star models, we find that the gravitational radiation driven instability of the r-modes remains strong even in the nonisentropic models. Calculating the rotational modes of the radiative models as functions of the angular rotation frequency Omega, we find that the inertial modes are strongly modified by the buoyant force at small Omega, where the buoyant force, as a dominant restoring force, becomes comparable with or stronger than the Coriolis force. Because of this property, we obtain mode sequences in which the inertial modes at large Omega are identified as g-modes or r-modes with l=m at small Omega. We also note that as Omega increases from Omega=0, the retrograde g-modes become retrograde inertial modes, which are unstable against the gravitational radiation reaction.
Constraints on Galaxy Density Profiles from Strong Gravitational Lensing: The Case of B1933+503 ; We consider a wide range of parametric mass models for B1933+503, a ten-image radio lens, and identify shared properties of the models with the best fits. The approximate rotation curve varies by less than 8.5% from the average value between the innermost and the outermost image (1.5 h^-1 kpc to 4.1 h^-1 kpc) for models within 1 sigma of the best fit, and the radial dependence of the shear strength and angle also shows common behavior for the best models. The time delay between images 1 and 6, the longest delay between the radio cores, is Delta t = 10.6 (+2.4, -1.1) h^-1 days (Omega_0 = 0.3, lambda_0 = 0.7), including all the modeling uncertainties. Deeper infrared observations, to more precisely register the lens galaxy with the radio images and to measure the properties of the Einstein ring image of the radio source's host galaxy, would significantly improve the model constraints and further reduce the uncertainties in the mass distribution and time delay.
Testing Comptonisation models using BeppoSAX observations of Seyfert 1 galaxies ; We have used realistic Comptonisation models to fit high-quality BeppoSAX data of 6 Seyfert galaxies. Our main effort was to adopt a Comptonisation model taking into account the anisotropy of the soft photon field. The most important consequence is a reduction of the first scattering order, which produces a break (the so-called anisotropy break) in the outgoing spectra. The physical parameters of the hot corona (i.e., the temperature and optical depth) obtained by fitting this class of models to broad-band X-ray spectra are substantially different from those derived by fitting the same data with the cutoff power-law model commonly used in the literature. In particular, our best fits with Comptonisation models in slab geometry give a temperature generally much larger, and an optical depth much smaller, than derived from the cutoff power-law fits using standard Comptonisation formulae. The estimate of the reflection normalization is also larger with the slab corona model. For most objects of our sample, both models give Compton parameter values larger than expected in a slab corona geometry, suggesting a more "photon-starved" X-ray source configuration. Finally, the two models provide different trends and correlations between the physical parameters, which has major consequences for the physical interpretation of the data.
Gamma Ray Bursts and Cosmic Ray Origin ; This paper presents the theoretical basis of the fireball/blast-wave model, and some implications of recent results on GRB source models and cosmic-ray production from GRBs. BATSE observations of the prompt gamma-ray luminous phase, and BeppoSAX and long-wavelength afterglow observations of GRBs are briefly summarized. Derivation of the spectral and temporal indices of an adiabatic blast wave decelerating in a uniform surrounding medium in the limiting case of a nonrelativistic reverse shock, both for spherical and collimated outflows, is presented as an example of the general theory. External shock model fits for the afterglow lead to the conclusion that GRB outflows are jetted. The external shock model also explains the temporal duration distribution and clustering of peak energies in prompt spectra of long-duration GRBs, from which the redshift dependence of the GRB source rate density can be derived. Source models are reviewed in light of the constant energy reservoir result of Frail et al., which implies a total GRB energy of a few times 10^51 ergs and an average beaming fraction of 1/500 of the full sky. Paczynski's isotropic hypernova model is ruled out. The Vietri-Stella model (a two-step collapse process) is preferred over a hypernova/collapsar model in view of the X-ray observations of GRBs and the constant energy reservoir result. Second-order processes in GRB blast waves can accelerate particles to ultra-high energies. GRBs may be the sources of UHECRs and cosmic rays with energies above the knee of the cosmic ray spectrum. High-energy neutrino and gamma-ray observations with GLAST and ground-based gamma-ray telescopes will be crucial to test GRB source models.
Kinematics of Diffuse Ionized Gas Halos: A Ballistic Model of Halo Rotation ; To better understand diffuse ionized gas (DIG) kinematics and halo rotation in spiral galaxies, we have developed a model in which clouds are ejected from the disk and follow ballistic trajectories through the halo. The behavior of clouds in this model has been investigated thoroughly through a parameter space search and a study of individual cloud orbits. Synthetic velocity profiles in z (height above the plane) have been generated from the models for the purpose of comparison with velocity centroid data from previously obtained long-slit spectra of the edge-on spirals NGC 891 (one slit) and NGC 5775 (two slits). In each case, a purely ballistic model is insufficient to explain the observed DIG kinematics. In the case of NGC 891, the observed vertical velocity gradient is not as steep as predicted by the model, possibly suggesting a source of coupling between disk and halo rotation or an outwardly directed pressure gradient. The ballistic model more successfully explains the DIG kinematics observed in NGC 5775; however, it cannot explain the observed trend of high-z gas velocities nearly reaching the systemic velocity. Such behavior can be attributed to either an inwardly directed pressure gradient or a possible tidal interaction with its companion, NGC 5774. In addition, the ballistic model predicts that clouds move radially outward as they cycle through the halo. The mass and energy fluxes estimated from the model suggest that this radially outward gas migration leads to a redistribution of material that may significantly affect the evolution of the ISM.
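A toy version of such a ballistic model is easy to integrate: launch a cloud vertically from the disk in a flattened axisymmetric potential, conserve its angular momentum, and the outward radial drift and rotation lag (v_phi = L/r) described above fall out directly. The potential, launch state, and parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def ballistic_orbit(r0=8.0, vz0=0.1, v0=0.2, q=0.7, dt=0.1, steps=20000):
    """Cloud in a flattened logarithmic potential Phi = (v0^2/2) ln(r^2 + (z/q)^2).
    Angular momentum L is conserved, so vphi = L/r: as the cloud drifts outward
    it rotates more slowly than the disk, a halo 'lag'. Units are arbitrary."""
    L = r0 * v0                                   # launched at the circular speed
    r, z, vr, vz = r0, 1e-4, 0.0, vz0
    path = []
    for _ in range(steps):
        s = r * r + (z / q) ** 2
        ar = L**2 / r**3 - v0**2 * r / s          # centrifugal minus radial gravity
        az = -v0**2 * z / (q**2 * s)              # vertical gravity
        vr += ar * dt; vz += az * dt
        r += vr * dt; z += vz * dt
        path.append((r, z, L / r))
        if z <= 0.0: break                        # cloud has fallen back to the disk
    return np.array(path)                         # columns: r, z, vphi
```

Because radial gravity weakens as the cloud climbs above the plane, the centrifugal term wins and the cloud migrates outward while aloft, exactly the qualitative behavior the abstract attributes to the model.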
An Assessment of Dynamical Mass Constraints on Pre-Main-Sequence Evolutionary Tracks ; (abridged) We have assembled a database of stars having both masses determined from measured orbital dynamics and sufficient spectral and photometric information for their placement on a theoretical HR diagram. Our sample consists of 115 low-mass (M < 2.0 Msun) stars, 27 pre-main-sequence and 88 main-sequence. We use a variety of available pre-main-sequence evolutionary calculations to test the consistency of predicted stellar masses with dynamically determined masses. Despite substantial improvements in model physics over the past decade, large systematic discrepancies still exist between empirical and theoretically derived masses. For main-sequence stars, all models considered predict masses consistent with dynamical values above 1.2 Msun, some models predict consistent masses at solar or slightly lower masses, and no models predict consistent masses below 0.5 Msun; rather, all models systematically underpredict such low masses by 5-20%. The failure at low masses stems from the poor match of most models to the empirical main sequence below temperatures of 3800 K, where molecules become the dominant source of opacity and convection is the dominant mode of energy transport. For the pre-main-sequence sample we find similar trends. There is generally good agreement between predicted and dynamical masses above 1.2 Msun for all models. Below 1.2 Msun, and down to 0.3 Msun (the lowest mass testable), most evolutionary models systematically underpredict the dynamically determined masses by 10-30% on average, with the Lyon group models (e.g., Baraffe et al. 1998) predicting marginally consistent masses in the mean, though with large scatter.
A closure model with plumes. I. The solar convection ; Oscillations of stellar p-modes, excited by turbulent convection, are investigated. We take into account the asymmetry of the up- and downflows created by turbulent plumes through an adapted closure model. In a companion paper, we apply it to the formalism of excitation of solar p-modes developed by Samadi & Goupil (2001). Using results from 3D numerical simulations of the uppermost part of the solar convection zone, we show that the two-scale mass-flux model (TFM) is valid only for quasi-laminar or highly skewed flows (Gryanik & Hartmann 2002). We build a generalized two-scale mass-flux model (GTFM) which takes into account both the skew introduced by the presence of two flows and the effects of turbulence in each flow. In order to apply the GTFM to the solar case, we introduce the plume dynamics as modelled by Rieutord & Zahn (1995) and construct a closure model with plumes (CMP). When compared with 3D simulation results, the CMP improves the agreement for the fourth-order moments by approximately a factor of two, compared with the use of the quasi-normal approximation or a skewness computed with the classical TFM. The asymmetry of turbulent convection in the solar case has an important impact on the vertical-velocity fourth-order moment, which has to be accounted for by models. The CMP is a significant improvement and is expected to improve the modelling of solar p-mode excitation.
An Empirical Comparison of Probability Models for Dependency Grammar ; This technical report is an appendix to Eisner (1996): it gives superior experimental results that were reported only in the talk version of that paper. Eisner (1996) trained three probability models on a small set of about 4,000 conjunction-free, dependency-grammar parses derived from the Wall Street Journal section of the Penn Treebank, and then evaluated the models on a held-out test set, using a novel O(n^3) parsing algorithm. The present paper describes some details of the experiments and repeats them with a larger training set of 25,000 sentences. As reported at the talk, the more extensive training yields greatly improved performance. Nearly half the sentences are parsed with no misattachments; two-thirds are parsed with at most one misattachment. Of the models described in the original written paper, the best score is still obtained with the generative top-down model C. However, slightly better models are also explored, in particular two variants of the comprehension bottom-up model B. The better of these has an attachment accuracy of 90%, and (unlike model C) tags words more accurately than the comparable trigram tagger. The differences are statistically significant. If tags are roughly known in advance, search error is all but eliminated and the new model attains an attachment accuracy of 93%. We find that the parser of Collins (1996), when combined with a highly trained tagger, also achieves 93% when trained and tested on the same sentences. Similarities and differences are discussed.
Phase transitions in a frustrated XY model with zigzag couplings ; We study a new generalized version of the square-lattice frustrated XY model, where unequal ferromagnetic and antiferromagnetic couplings are arranged in a zigzag pattern. The ratio between the couplings, rho, can be used to tune the system continuously from the isotropic square-lattice to the triangular-lattice frustrated XY model. The model can be physically realized as a Josephson-junction array with two different couplings, in a magnetic field corresponding to half a flux quantum per plaquette. Mean-field approximation, Ginzburg-Landau expansion, and finite-size scaling of Monte Carlo simulations are used to study the phase diagram and critical behavior. Depending on the value of rho, either two separate transitions or a transition line in the universality class of the XY-Ising model, with combined Z_2 and U(1) symmetries, takes place. In particular, the phase transitions of the standard square-lattice and triangular-lattice frustrated XY models correspond to two different cuts through the same transition line. Estimates of the chiral (Z_2) critical exponents on this transition line deviate significantly from the pure Ising values, consistent with those along the critical line of the XY-Ising model. This suggests that a frustrated XY model or Josephson-junction array with a zigzag coupling modulation can provide a physical realization of the XY-Ising model critical line.
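For readers who want to experiment, a bare-bones Metropolis update for a square-lattice XY model with a zigzag modulation of the horizontal couplings might look like the following; the exact bond layout and the placement of the ferromagnetic and antiferromagnetic couplings are assumptions here, not necessarily the paper's convention:

```python
import numpy as np

def metropolis_xy(L=16, rho=0.5, T=0.5, sweeps=200, seed=0):
    """Toy Metropolis for E = -sum_<ij> J_ij cos(theta_i - theta_j) on an LxL
    torus: vertical bonds are ferromagnetic (J = 1); horizontal bonds alternate
    between 1 and -rho so that every plaquette carries one negative bond."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, (L, L))
    for _ in range(sweeps * L * L):
        x, y = rng.integers(L, size=2)
        new = theta[x, y] + rng.uniform(-1.0, 1.0)   # proposed angle update
        dE = 0.0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L
            if dy == 0:                              # horizontal bond: zigzag sign
                left = x if dx == 1 else nx          # bond labelled by its left site
                J = -rho if (left + y) % 2 else 1.0
            else:
                J = 1.0                              # vertical bonds ferromagnetic
            dE += J * (np.cos(theta[x, y] - theta[nx, ny])
                       - np.cos(new - theta[nx, ny]))
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            theta[x, y] = new                        # Metropolis acceptance
    return theta
```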
Attractive forces between circular polyions of the same charge ; We study two models of ring-like polyions which are two-dimensional versions of simple models for colloidal particles (model A) and for rod-like segments of DNA (model B), both in solution with counterions. The counterions may condense onto Z sites of the polyions, and we suppose the number of condensed counterions on each polyion, n, to be fixed. The exact free energy of a pair of polyions is calculated for not too large values of Z, and for both models we find that attractive forces appear between the rings even when the condensed counterions do not neutralize the total charge of the polyions. This force is due to correlations between the condensed counterions and in general becomes smaller as the temperature is increased. For model A, a divergent force may appear as the separation between the rings vanishes, and this makes an analytical study possible for this model at vanishing separation, showing a universal behavior in this limit. Attractive forces are found for model A if the valence of the counterions is larger than one. For model B, no such divergences are present, and attractive forces are found for a finite range of values of the counterion valence, which depends on Z, n, and the temperature.
A Stochastic Evolutionary Model Exhibiting Power-Law Behaviour with an Exponential Cutoff ; Recently several authors have proposed stochastic evolutionary models for the growth of complex networks that give rise to power-law distributions. These models are based on the notion of preferential attachment leading to the "rich get richer" phenomenon. Despite the generality of the proposed stochastic models, there are still some unexplained phenomena, which may arise due to the limited size of networks such as protein and email networks. Such networks may in fact exhibit an exponential cutoff in the power-law scaling, although this cutoff may only be observable in the tail of the distribution for extremely large networks. We propose a modification of the basic stochastic evolutionary model, so that after a node is chosen preferentially, say according to the number of its in-links, there is a small probability that this node will be discarded. We show that as a result of this modification, by viewing the stochastic process in terms of an urn transfer model, we obtain a power-law distribution with an exponential cutoff. Unlike many other models, the current model can capture instances where the exponent of the distribution is less than or equal to two. As a proof of concept, we demonstrate the consistency of our model by analysing a yeast protein interaction network, the distribution of which is known to follow a power law with an exponential cutoff.
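A toy simulation of the modified preferential-attachment process is easy to write; with a small discard probability, the degree distribution develops the expected cutoff in its tail. Names and parameters below are illustrative, not the paper's urn-model notation:

```python
import random
from collections import Counter

def evolve(steps=100_000, p_discard=0.05, rng=random.Random(0)):
    """Toy process: a node is picked with probability proportional to its
    degree; with small probability it is discarded instead of reinforced.
    Discarding truncates the power-law tail with an exponential cutoff."""
    degree = Counter({0: 1, 1: 1})
    ball_owner = [0, 1]                  # one entry per unit of degree: O(1) sampling
    alive = {0, 1}
    for t in range(steps):
        i = rng.choice(ball_owner)       # preferential choice
        if i not in alive:
            continue                     # stale ball of an already-discarded node
        if rng.random() < p_discard:
            alive.discard(i)             # node discarded from the process
        else:
            degree[i] += 1
            ball_owner.append(i)
            new = t + 2                  # fresh node attaches to i
            degree[new] = 1
            ball_owner.append(new)
            alive.add(new)
    return Counter(degree[i] for i in alive)   # degree histogram of live nodes
```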
Glassy dynamics of kinetically constrained models ; We review the use of kinetically constrained models (KCMs) for the study of dynamics in glassy systems. The characteristic feature of KCMs is that they have trivial, often non-interacting, equilibrium behaviour but interesting slow dynamics due to restrictions on the allowed transitions between configurations. The basic question which KCMs ask is therefore how much glassy physics can be understood without an underlying "equilibrium glass transition". After a brief review of glassy phenomenology, we describe the main model classes, which include spin-facilitated Ising models, constrained lattice gases, models inspired by cellular structures such as soap froths, models obtained via mappings from interacting systems without constraints, and finally related models such as urn, oscillator, tiling, and needle models. We then describe the broad range of techniques that have been applied to KCMs, including exact solutions, adiabatic approximations, projection and mode-coupling techniques, diagrammatic approaches, and mappings to quantum systems or effective models. Finally, we give a survey of the known results for the dynamics of KCMs both in and out of equilibrium, including topics such as relaxation time divergences and dynamical transitions, nonlinear relaxation, aging and effective temperatures, cooperativity and dynamical heterogeneities, and finally non-equilibrium stationary states generated by external driving. We conclude with a discussion of open questions and possibilities for future work.
On "Sexual contacts and epidemic thresholds," models and inference for sexual partnership distributions ; Recent work has focused attention on statistical inference for the population distribution of the number of sexual partners based on survey data. The characteristics of these distributions are of interest as components of mathematical models for the transmission dynamics of sexually transmitted diseases (STDs). Such information can be used to calibrate theoretical models, to make predictions for real populations, and as a tool for guiding public health policy. Our previous work on this subject has developed likelihood-based statistical methods for inference that allow for low-dimensional, semi-parametric models. Inference has been based on several proposed stochastic process models for the formation of sexual partnership networks. We have also developed model selection criteria to choose between competing models, and assessed the fit of different models to three populations: Uganda, Sweden, and the USA. Throughout this work, we have emphasized the correct assessment of the uncertainty of the estimates based on the data analyzed. We have also widened the question of interest to the limitations of inferences from such data, and the utility of degree-based epidemiological models more generally. In this paper we address further statistical issues that are important in this area, and a number of confusions that have arisen in interpreting our work. In particular, we consider the use of cumulative lifetime partner distributions, heaping, and other issues raised by Liljeros et al. in a recent working paper.
Fitting Effective Diffusion Models to Data Associated with a Glassy Potential: Estimation, Classical Inference Procedures and Some Heuristics ; A variety of researchers have successfully obtained the parameters of low-dimensional diffusion models using the data that comes out of atomistic simulations. This naturally raises a variety of questions about efficient estimation, goodness-of-fit tests, and confidence interval estimation. The first part of this article uses maximum likelihood estimation to obtain the parameters of a diffusion model from a scalar time series. I address numerical issues associated with attempting to realize asymptotic statistics results with moderate sample sizes, in the presence of exact and approximated transition densities. Approximate transition densities are used because the analytic solution of a transition density associated with a parametric diffusion model is often unknown. I am primarily interested in how well the deterministic transition density expansions of Ait-Sahalia capture the curvature of the transition density in idealized situations that occur when one carries out simulations in the presence of a glassy interaction potential. Accurate approximation of the curvature of the transition density is desirable because it can be used to quantify the goodness-of-fit of the model and to calculate asymptotic confidence intervals of the estimated parameters. The second part of this paper contributes a heuristic estimation technique for approximating a nonlinear diffusion model. A global nonlinear model is obtained by taking a batch of time series and applying simple local models to portions of the data. I demonstrate the technique on a diffusion model with a known transition density and on data generated by the Stochastic Simulation Algorithm.
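As a concrete instance of the maximum-likelihood setup in the first part, the Ornstein-Uhlenbeck process has an exactly Gaussian transition density, so its likelihood can be written and optimized in a few lines (a stand-in example; the paper's effective models and glassy potentials are more involved):

```python
import numpy as np
from scipy.optimize import minimize

def ou_neg_loglik(params, x, dt):
    """Exact negative log-likelihood of an Ornstein-Uhlenbeck process:
    X_{t+dt} | X_t ~ N(mu + (X_t - mu) e^{-theta dt},
                       sigma^2 (1 - e^{-2 theta dt}) / (2 theta))."""
    theta, mu, sigma = params
    if theta <= 0 or sigma <= 0:
        return np.inf                                   # keep the optimizer in the valid region
    m = mu + (x[:-1] - mu) * np.exp(-theta * dt)        # conditional mean
    v = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)  # conditional variance
    return 0.5 * np.sum(np.log(2 * np.pi * v) + (x[1:] - m) ** 2 / v)

# Simulate an OU path (theta=2, mu=0, sigma=0.5) and recover the parameters.
rng, dt = np.random.default_rng(0), 0.01
x = np.zeros(5001)
for t in range(5000):
    x[t + 1] = x[t] - 2.0 * x[t] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
fit = minimize(ou_neg_loglik, x0=[1.0, 0.0, 1.0], args=(x, dt), method="Nelder-Mead")
print(fit.x)  # approaches (2.0, 0.0, 0.5) as the series grows
```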
Minimum Model Semantics for Logic Programs with Negation-as-Failure ; We give a purely model-theoretic characterization of the semantics of logic programs with negation-as-failure allowed in clause bodies. In our semantics the meaning of a program is, as in the classical case, the unique minimum model in a program-independent ordering. We use an expanded truth domain that has an uncountable linearly ordered set of truth values between False (the minimum element) and True (the maximum), with a Zero element in the middle. The truth values below Zero are ordered like the countable ordinals. The values above Zero have exactly the reverse order. Negation is interpreted as reflection about Zero followed by a step towards Zero; the only truth value that remains unaffected by negation is Zero. We show that every program has a unique minimum model M_P, and that this model can be constructed with a T_P iteration which proceeds through the countable ordinals. Furthermore, we demonstrate that M_P can also be obtained through a model intersection construction which generalizes the well-known model intersection theorem for classical logic programming. Finally, we show that by collapsing the true and false values of the infinite-valued model M_P to the classical True and False, we obtain a three-valued model identical to the well-founded one.
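For readers unfamiliar with the T_P iteration being generalized here, the following is a minimal sketch of the classical two-valued immediate consequence operator for a negation-free propositional program; the paper's infinite-valued operator, which handles negation over the expanded truth domain, is substantially more involved, and the program encoding below is hypothetical.

```python
# Minimal sketch of the classical immediate-consequence operator T_P
# for a *negation-free* propositional program; the paper's M_P is
# built by an infinite-valued generalization of this iteration.
# Program clauses are (head, [body atoms]); facts have empty bodies.

def tp(program, interpretation):
    """One application of T_P: heads of clauses whose bodies hold."""
    return {head for head, body in program
            if all(b in interpretation for b in body)}

def minimum_model(program):
    """Iterate T_P from the empty set up to its least fixed point."""
    model = set()
    while True:
        new = tp(program, model)
        if new == model:
            return model
        model = new

program = [("p", []), ("q", ["p"]), ("r", ["q", "p"]), ("s", ["t"])]
print(minimum_model(program))  # {'p', 'q', 'r'}; 's' stays false
```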
A dynamical systems approach to the tilted Bianchi models of solvable type ; We use a dynamical systems approach to analyse the tilting spatially homogeneous Bianchi models of solvable type (e.g., types VI_h and VII_h) with a perfect fluid and a linear barotropic gamma-law equation of state. In particular, we study the late-time behaviour of tilted Bianchi models, with an emphasis on the existence of equilibrium points and their stability properties. We briefly discuss the tilting Bianchi type V models and the late-time asymptotic behaviour of irrotational Bianchi type VII_0 models. We prove the important result that for non-inflationary Bianchi type VII_h models vacuum plane-wave solutions are the only future attracting equilibrium points in the Bianchi type VII_h invariant set. We then investigate the dynamics close to the plane-wave solutions in more detail, and discover some new features that arise in the dynamical behaviour of Bianchi cosmologies with the inclusion of tilt. We point out that in a tiny open set of parameter space in the type IV model (the loophole) there exist closed curves which act as attracting limit cycles. More interestingly, in the Bianchi type VII_h models there is a bifurcation in which a set of equilibrium points turns into closed orbits. There is a region in which both sets of closed curves coexist, and it appears that for the type VII_h models in this region the solution curves approach a compact surface which is topologically a torus.
Bottom-Up Approach to Unified Supergravity Models ; A new approach is proposed to the phenomenological study of a generic unified supergravity model which reduces to the minimal supersymmetric standard model. The model is effectively parametrized in terms of five low energy observables. In consequence, it is easy to investigate systematically the parameter space of the model allowed by the requirement of radiative electroweak symmetry breaking and by the present experimental limits. Radiative corrections due to large Yukawa couplings and particle-sparticle mass splitting are included in the analysis and found to have important effects, in particular on the degree of fine tuning in the model. In this framework we present the predictions of the model for various low energy physical observables and discuss their dependence on the values of the top quark mass and tan beta. Results are also given for the large tan beta scenario, tan beta ≈ m_t/m_b. Our approach can be easily extended to non-minimal supergravity models which do not assume the universality of the soft breaking parameters at the unification scale M_X. Such an extension will be particularly useful once the masses of some sparticles are known, allowing for a model independent study of the parameter space at M_X.
Degenerate BESS Model: The possibility of a low energy strong electroweak sector ; We discuss possible symmetries of effective theories describing spinless and spin-1 bosons, mainly to concentrate on an intriguing phenomenological possibility: that of a hardly noticeable strong electroweak sector at relatively low energies. Specifically, a model with both vector and axial-vector strongly interacting bosons may possess a discrete symmetry imposing degeneracy of the two sets of bosons (degenerate BESS model). In such a case its effects at low energies become almost invisible and the model easily passes all low energy precision tests. The reason lies essentially in the fact that the model automatically satisfies decoupling, contrary to models with only vectors. For large mass of the degenerate spin-1 bosons the model becomes identical at the classical level to the standard model taken in the limit of infinite Higgs mass. For these reasons we have thought it worthwhile to fully develop the model, together with its possible generalizations, and to study the expected phenomenology. For instance, just because of its invisibility at low energy, it is conceivable that degenerate BESS has low mass spin-1 states and gives quite visible signals at existing or forthcoming accelerators.
Model for Particle Masses, Flavor Mixing, and CP Violation Based on Spontaneously Broken Discrete Chiral Symmetry as the Origin of Families ; We construct extensions of the standard model based on the hypothesis that the Higgs bosons also exhibit a family structure, and that the flavor weak eigenstates in the three families are distinguished by a discrete Z_6 chiral symmetry that is spontaneously broken by the Higgs sector. We study in detail at the tree level models with three Higgs doublets, and with six Higgs doublets comprising two weakly coupled sets of three. In a leading approximation of S_3 cyclic permutation symmetry the three Higgs model gives a "democratic" mass matrix of rank one, while the six Higgs model gives either a rank one mass matrix, or, in the case when it spontaneously violates CP, a rank two mass matrix corresponding to nonzero second family masses. In both models, the CKM matrix is exactly unity in leading approximation. Allowing small explicit violations of cyclic permutation symmetry generates small first family masses in the six Higgs model, and first and second family masses in the three Higgs model, and gives a nontrivial CKM matrix in which the mixings of the first and second family quarks are naturally larger than mixings involving the third family. Complete numerical fits are given for both models, flavor changing neutral current constraints are discussed in detail, and the issues of unification of couplings and neutrino masses are addressed. On a technical level, our analysis uses the theory of circulant and retrocirculant matrices, the relevant parts of which are reviewed.
Masses and Internal Structure of Mesons in the String Quark Model ; The relativistic quantum string quark model, proposed earlier, is applied to all mesons, from the pion to the Upsilon, lying on the leading Regge trajectories (i.e., to the lowest radial excitations in terms of the potential quark models). The model describes the meson mass spectrum, and comparison with measured meson masses allows one to determine the parameters of the model: current quark masses, the universal string tension, and phenomenological constants describing a non-string short-range interaction. The meson Regge trajectories are in general nonlinear; practically linear are only the trajectories for light-quark mesons with nonzero lowest spins. The model predicts masses of many new higher-spin mesons. A new K_1 meson is predicted with mass 1910 MeV. In some cases the masses of new low-spin mesons are predicted by extrapolation of the phenomenological short-range parameters in the quark masses. In this way the model predicts the mass of eta_b(1S_0) to be 9500 ± 30 MeV, and the mass of B_c(0^-) to be 6400 ± 30 MeV (the potential model predictions are 100 MeV lower). The relativistic wave functions of the composite mesons allow one to calculate the energy and spin structure of mesons. The average quark-spin projections in the polarized rho meson are half as large as the nonrelativistic quark model predictions. The spin structure of the K reveals an 80% violation of flavour SU(3). These results may be relevant to understanding the "spin crises" for nucleons.
Superstring Theory and CP Violating Phases: Can They Be Related? ; We investigate the possibility of large CP violating phases in the soft breaking terms derived in superstring models. The bounds on the electric dipole moments (EDMs) of the electron and neutron are satisfied through cancellations occurring because of the structure of the string models. Three general classes of four-dimensional string models are considered: (i) orbifold compactifications of perturbative heterotic string theory, (ii) scenarios based on Horava-Witten theory, and (iii) Type I string models (Type IIB orientifolds). Nonuniversal phases of the gaugino mass parameters greatly facilitate the necessary cancellations among the various contributions to the EDMs; in the overall modulus limit, the gaugino masses are universal at tree level in both the perturbative heterotic models and the Horava-Witten scenarios, which severely restricts the allowed regions of parameter space. Nonuniversal gaugino masses do arise at one loop in the heterotic orbifold models, providing for corners of parameter space with O(1) phases consistent with the phenomenological bounds. However, there is a possibility of nonuniversal gaugino masses at tree level in the Type I models, depending on the details of the embedding of the SM into the D-brane sectors. We find that in a minimal model with a particular embedding of the Standard Model gauge group into two D-brane sectors, viable large phase solutions can be obtained over a wide range of parameter space.
Abelian family symmetries and the simplest models that give theta_13 = 0 in the neutrino mixing matrix ; I construct predictive models of neutrino mass and mixing that have fewer parameters, both in the lepton sector and overall, than the default seesaw model. The predictions are theta_13 = 0 and one massless neutrino, with the models having a Z_4 or Z_2 symmetry and just one extra degree of freedom (one real singlet Higgs field). It has been shown that models with an unbroken family symmetry, and with no Higgs fields other than the Standard Model Higgs doublet, produce masses and mixing matrices that have been ruled out by experiment. Therefore, this article investigates the predictions of models with Abelian family symmetries that involve Higgs singlets, doublets and triplets, in the hope that they may produce the maximal and minimal mixing angles seen in the best-fit neutrino mixing matrix. I demonstrate that these models can only produce mixing angles that are zero, maximal or unconfined by the symmetry. The maximal mixing angles do not correspond to physical mixing, so an Abelian symmetry can, at best, ensure that theta_13 = 0, while leaving the solar and atmospheric mixing angles as free parameters. To generate more features of the best-fit mixing matrix, a model with a non-Abelian symmetry and a complicated Higgs sector would have to be used.
Phase diagram of neutral quark matter in nonlocal chiral quark models ; We consider the phase diagram of two-flavor quark matter under neutron star constraints for two nonlocal, covariant quark models within the mean field approximation. In the first case (Model I) the nonlocality arises from the regularization procedure, motivated by the instanton liquid model, whereas in the second one (Model II) a separable approximation of the one-gluon exchange interaction is applied. We find that Model II predicts a larger quark mass gap and a chiral symmetry breaking (CSB) phase transition line which extends 15-20% further into the phase diagram spanned by temperature T and chemical potential mu. The corresponding critical temperature at mu = 0, T_c(0) ≈ 140 MeV, is in better accordance with recent lattice QCD results than the prediction of the standard local NJL model, which exceeds 200 MeV. For both Model I and Model II we have considered various coupling strengths in the scalar diquark channel, showing that different low-temperature quark matter phases can occur at intermediate densities: a normal quark matter (NQM) phase, a two-flavor superconducting (2SC) quark matter phase and a mixed 2SC-NQM phase. Although in most cases there is also a gapless 2SC phase, it occurs in general in a small region at nonzero temperatures, so its effect should be negligible for compact star applications.
Multicritical Phases of the O(n) Model on a Random Lattice ; We exhibit the multicritical phase structure of the loop gas model on a random surface. The dense phase is reconsidered, with special attention paid to the topological points g = 1/p. This phase is complementary to the dilute and higher multicritical phases in the sense that dense models contain the same spectrum of bulk operators (found in the continuum by Lian and Zuckerman) but a different set of boundary operators. This difference illuminates the well-known (p,q) asymmetry of the matrix chain models. Higher multicritical phases are constructed, generalizing both Kazakov's multicritical models as well as the known dilute phase models. They are quite likely related to multicritical polymer theories recently considered independently by Saleur and Zamolodchikov. Our results may be of help in defining such models on flat honeycomb lattices, an unsolved problem in polymer theory. The phase boundaries correspond again to "topological" points with g = p ≥ 1 integer, which we study in some detail. Two qualitatively different types of critical points are discovered for each such g. For the special point g = 2 we demonstrate that the dilute phase O(2) model does not correspond to the Parisi-Sourlas model, a result likely to hold as well for the flat case. Instead it is proven that the first multicritical O(2) point possesses the Parisi-Sourlas supersymmetry.
Classical Symmetries of Some Two-Dimensional Models ; It is well known that principal chiral models and symmetric space models in two-dimensional Minkowski space have an infinite-dimensional algebra of hidden symmetries. Because of the relevance of symmetric space models to duality symmetries in string theory, the hidden symmetries of these models are explored in some detail. The string theory application requires including coupling to gravity, supersymmetrization, and quantum effects. However, as a first step, this paper only considers classical bosonic theories in flat spacetime. Even though the algebra of hidden symmetries of principal chiral models is confirmed to include a Kac-Moody algebra (or a current algebra on a circle), it is argued that a better interpretation is provided by a doubled current algebra on a semicircle (or line segment). Neither the circle nor the semicircle bears any apparent relationship to the physical space. For symmetric space models the line segment viewpoint is shown to be essential, and special boundary conditions need to be imposed at the ends. The algebra of hidden symmetries also includes Virasoro-like generators. For both principal chiral models and symmetric space models, the hidden symmetry stress tensor is singular at the ends of the line segment.
Supersymmetric sigma models, gauge theories and vortices ; This thesis considers one- and two-dimensional supersymmetric nonlinear sigma models. First there is a discussion of the geometries of one- and two-dimensional sigma models with rigid supersymmetry. For the one-dimensional case, the supersymmetry is promoted to a local one and the required gauge fields are introduced. The most general Lagrangian, including these gauge fields, is found. The constraints of the system are analysed, and its Dirac quantisation is investigated. In the next chapter we introduce equivariant cohomology, which is used later in the thesis. Then actions are constructed for (p,0)- and (p,1)-supersymmetric, 1 ≤ p ≤ 4, two-dimensional gauge theories coupled to nonlinear sigma model matter with a Wess-Zumino term. The scalar potential for a large class of these models is derived. It is then shown that the Euclidean actions of the (2,0)- and (4,0)-supersymmetric models without Wess-Zumino terms are bounded by topological charges which involve the equivariant extensions of the Kahler forms of the sigma model target spaces evaluated on the two-dimensional spacetime. Similar bounds for Euclidean actions of appropriate gauge theories coupled to nonlinear sigma model matter in higher spacetime dimensions are given, which now involve the equivariant extensions of the Kahler forms of the sigma model target spaces and the second Chern character of the gauge fields. It is found that the BPS configurations are generalisations of abelian and non-abelian vortices.
Yangian Symmetries of Matrix Models and Spin Chains: The Dilatation Operator of N = 4 SYM ; We present an analysis of the Yangian symmetries of various bosonic sectors of the dilatation operator of N = 4 SYM. The analysis is presented from the point of view of Hamiltonian matrix models. In the various SU(n) sectors, we give a modified presentation of the Yangian generators, which are conserved on states of any size. A careful analysis of the Yangian invariance of the full SO(6) sector of the scalars is also presented in this paper. We also study the Yangian invariance of the dilatation operator beyond first order perturbation theory in the SU(2) sector. Following this, we derive the continuum limits of the various matrix models and reproduce the sigma model actions for fast moving strings reported in the recent literature. We motivate the constructions of continuum sigma models corresponding to both the SU(n) and SO(n) sectors as variational approximations to the matrix model Hamiltonians. These sigma models retain the semiclassical counterparts of the original Yangian symmetries of the dilatation operator. The semiclassical Yangian symmetries of the sigma models are worked out in detail. The zero curvature representation of the equations of motion and the construction of the transfer matrix for the SO(n) sigma model obtained as the continuum limit of the one-loop bosonic dilatation operator are carried out, and similar constructions for the SU(n) case are also discussed.
Heterotic SO(32) model building in four dimensions ; Four-dimensional heterotic SO(32) orbifold models are classified systematically with model building applications in mind. We obtain all Z_3, Z_7 and Z_{2N} models based on vectorial gauge shifts. The resulting gauge groups are reminiscent of those of type I model building, as they always take the form SO(2n_0) x U(n_1) x ... x U(n_{N-1}) x SO(2n_N). The complete twisted spectrum is determined simultaneously for all orbifold models in a parametric way depending on n_0, ..., n_N, rather than on a model by model basis. This reveals interesting patterns in the twisted states: they are always built out of vectors and antisymmetric tensors of the U(n) groups, and either vectors or spinors of the SO(2n) groups. Our results may shed additional light on the S-duality between heterotic and type I strings in four dimensions. As a spin-off we obtain an SO(10) GUT model with four generations from the Z_4 orbifold.
Zamolodchikov's Tetrahedron Equation and Hidden Structure of Quantum Groups ; The tetrahedron equation is a three-dimensional generalization of the Yang-Baxter equation. Its solutions define integrable three-dimensional lattice models of statistical mechanics and quantum field theory. Their integrability is not related to the size of the lattice; therefore the same solution of the tetrahedron equation defines different integrable models for different finite periodic cubic lattices. Obviously, any such three-dimensional model can be viewed as a two-dimensional integrable model on a square lattice, where the additional third dimension is treated as an internal degree of freedom. Therefore every solution of the tetrahedron equation provides an infinite sequence of integrable 2d models differing by the size of this hidden third dimension. In this paper we construct a new solution of the tetrahedron equation, which provides in this way the two-dimensional solvable models related to finite-dimensional highest weight representations of the quantum affine algebras U_q(\hat{sl}_n), where the rank n coincides with the size of the hidden dimension. These models are related to an anisotropic deformation of the sl_n-invariant Heisenberg magnets. They have been extensively studied for a long time, but the hidden 3d structure was hitherto unknown. Our results lead to a remarkable exact "rank-size" duality relation for the nested Bethe Ansatz solution of these models. Note also that the above solution of the tetrahedron equation arises in the quantization of the resonant three-wave scattering model, which is a well-known integrable classical system in 2+1 dimensions.
Exact solutions of two complementary 1D quantum many-body systems on the half-line ; We consider two particular 1D quantum many-body systems with local interactions related to the root system C_N. Both models describe identical particles moving on the half-line with nontrivial boundary conditions at the origin, and they are in many ways complementary to each other. We discuss the Bethe Ansatz solution for the first model, where the interaction potentials are delta functions, and we find that this provides an exact solution not only in the boson case but even for the generalized model where the particles are distinguishable. In the second model the particles have particular momentum-dependent interactions, and we find that it is nontrivial and exactly solvable by Bethe Ansatz only in case the particles are fermions. This latter model has a natural physical interpretation as the nonrelativistic limit of the massive Thirring model on the half-line. We establish a duality relation between the bosonic delta-interaction model and the fermionic model with local momentum-dependent interactions. We also elaborate on the physical interpretation of these models. In our discussion the Yang-Baxter relations and the reflection equation play a central role.
A Global Model of Beta-Decay Half-Lives Using Neural Networks ; Statistical modeling of nuclear data using artificial neural networks (ANNs) and, more recently, support vector machines (SVMs), is providing novel approaches to systematics that are complementary to phenomenological and semi-microscopic theories. We present a global model of beta-decay half-lives of the class of nuclei that decay 100% by the beta mode in their ground states. A fully connected multilayer feedforward network has been trained using the Levenberg-Marquardt algorithm, Bayesian regularization, and cross-validation. The half-life estimates generated by the model are discussed and compared with the available experimental data, with previous results obtained with neural networks, and with estimates coming from traditional global nuclear models. Predictions of the new neural-network model are given for nuclei far from stability, with particular attention to those involved in r-process nucleosynthesis. This study demonstrates that in the framework of the beta-decay problem considered here, global models based on ANNs can at least match the predictive performance of the best conventional global models rooted in nuclear theory. Accordingly, such statistical models can provide a valuable tool for further mapping of the nuclidic chart.
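As an illustration of the statistical setup (not the authors' code or data), the sketch below regresses synthetic log half-lives on proton and neutron numbers with a small feedforward network; scikit-learn does not implement Levenberg-Marquardt training with Bayesian regularization, so an L2-regularized MLP is used as a rough stand-in.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Rough stand-in for the setup described above: regress log10
# half-lives on (Z, N). The data below are synthetic; the feature
# choice and the target function are purely illustrative.
rng = np.random.default_rng(1)
Z = rng.integers(20, 100, size=800)
N = Z + rng.integers(0, 60, size=800)
log_t_half = 0.05 * (N - 1.5 * Z) ** 2 / Z - 2 + rng.normal(0, 0.3, 800)

X = np.column_stack([Z, N]).astype(float)
X_tr, X_te, y_tr, y_te = train_test_split(X, log_t_half, random_state=0)

# L2-regularized MLP as a substitute for Levenberg-Marquardt with
# Bayesian regularization, which scikit-learn does not provide.
model = MLPRegressor(hidden_layer_sizes=(16, 16), alpha=1e-3,
                     max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```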
A Model for Collaboration Networks Giving Rise to a Power-Law Distribution with an Exponential Cutoff ; Recently several authors have proposed stochastic evolutionary models for the growth of complex networks that give rise to power-law distributions. These models are based on the notion of preferential attachment leading to the "rich get richer" phenomenon. Despite the generality of the proposed stochastic models, there are still some unexplained phenomena, which may arise due to the limited size of networks such as protein, email, actor and collaboration networks. Such networks may in fact exhibit an exponential cutoff in the power-law scaling, although this cutoff may only be observable in the tail of the distribution for extremely large networks. We propose a modification of the basic stochastic evolutionary model, so that after a node is chosen preferentially, say according to the number of its in-links, there is a small probability that this node will become inactive. We show that as a result of this modification, by viewing the stochastic process in terms of an urn transfer model, we obtain a power-law distribution with an exponential cutoff. Unlike many other models, the current model can capture instances where the exponent of the distribution is less than or equal to two. As a proof of concept, we demonstrate the consistency of our model empirically by analysing the Mathematical Research collaboration network, the distribution of which is known to follow a power law with an exponential cutoff.
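A quick way to see the claimed effect is to simulate the modified process directly; the sketch below is a simplified, illustrative rendering (not the paper's urn transfer analysis), in which a preferentially chosen node is deactivated with a small probability q, which empirically bends the in-degree power law into an exponential cutoff.

```python
import random
from collections import Counter

# Illustrative simulation: nodes gain in-links preferentially, and
# a preferentially chosen node becomes inactive with small
# probability q. Inactivation produces the exponential cutoff.
random.seed(0)
q, steps, p_new = 0.01, 50_000, 0.1
degrees = [1]   # in-degree of each node ever created
ends = [0]      # link-end multiset: uniform sampling from it is
                # equivalent to degree-proportional (preferential) choice
for _ in range(steps):
    if not ends or random.random() < p_new:   # add a brand-new node
        degrees.append(1)
        ends.append(len(degrees) - 1)
    else:
        i = random.choice(ends)               # preferential choice
        degrees[i] += 1
        ends.append(i)
        if random.random() < q:               # node becomes inactive
            ends = [j for j in ends if j != i]

hist = Counter(degrees)
print("degree  count")
for k in sorted(hist)[:10]:
    print(f"{k:6d}  {hist[k]}")
```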
Deriving thermal lattice-Boltzmann models from the continuous Boltzmann equation: theoretical aspects ; The particle model, the collision model, the polynomial expansion used for the equilibrium distribution, the time discretization and the velocity discretization are factors that keep the lattice-Boltzmann method (LBM) far from its conceptual support: the continuous Boltzmann equation (BE). Most collision models are based on the single-parameter BGK relaxation term, leading to constant Prandtl numbers. The polynomial expansion used for the equilibrium distribution introduces an upper bound on the local macroscopic speed. Most widely used time discretization procedures give an explicit numerical scheme with second-order time step errors. In thermal problems, quadrature has not succeeded in giving discrete velocity sets able to generate multi-speed regular lattices. All these problems greatly complicate the numerical simulation of LBM-based algorithms. In the present work, the systematic derivation of lattice-Boltzmann models from the continuous Boltzmann equation is discussed. The collision term in the linearized Boltzmann equation is modeled by expanding the distribution function in Hermite tensors. The thermohydrodynamic macroscopic equations are correctly retrieved with a second-order model. Velocity discretization is the most critical step in establishing a regular-lattice framework. In the quadrature process, it is shown that the integration variable has an important role in defining the equilibrium distribution and the lattice-Boltzmann model, leading, alternatively, to temperature-dependent velocities (TDV) or to temperature-dependent weights (TDW) in the resulting lattice-Boltzmann models.
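The quadrature step mentioned above can be illustrated with the simplest isothermal case: a three-point Gauss-Hermite rule for the Gaussian weight reproduces the standard D1Q3 abscissas and weights. This is background only; the point of the work is how the choice of integration variable leads instead to temperature-dependent velocities or weights.

```python
import numpy as np

# A 3-point Gauss-Hermite rule for the weight exp(-x^2/2) yields
# the familiar D1Q3 lattice velocities and weights; this is the
# isothermal special case of the quadrature discussed above.
nodes, weights = np.polynomial.hermite_e.hermegauss(3)
weights /= np.sqrt(2 * np.pi)   # normalize the Gaussian measure
print("velocities:", nodes)     # [-sqrt(3), 0, +sqrt(3)]
print("weights:   ", weights)   # [1/6, 2/3, 1/6]

# Check: the rule integrates Maxwellian moments exactly up to
# order 5, e.g. <x^2> = 1 and <x^4> = 3.
for p in (0, 2, 4):
    print(f"moment {p}:", np.sum(weights * nodes**p))
```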
On a multi-timescale statistical feedback model for volatility fluctuations ; We study, both analytically and numerically, an ARCH-like, multiscale model of volatility, which assumes that the volatility is governed by the observed past price changes on different time scales. With a power-law distribution of time horizons, we obtain a model that captures most stylized facts of financial time series: Student-like distribution of returns with a power-law tail, long memory of the volatility, slow convergence of the distribution of returns towards the Gaussian distribution, multifractality and anomalous volatility relaxation after shocks. At variance with recent multifractal models that are strictly time reversal invariant, the model also reproduces the time asymmetry of financial time series: past large-scale volatility influences future small-scale volatility. In order to quantitatively reproduce all empirical observations, the parameters must be chosen such that our model is close to an instability, meaning that (a) the feedback effect is important and substantially increases the volatility, and (b) the model is intrinsically difficult to calibrate because of the very long range nature of the correlations. By imposing the consistency of the model predictions with a large set of different empirical observations, a reasonable range of the parameter values can be determined. The model can easily be generalized to account for jumps, skewness and multi-asset correlations.
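A minimal numerical sketch of such a multiscale feedback rule is given below, with today's variance responding to squared price moves over many past horizons weighted by a power law; the exact functional form and all parameter values are my illustrative choices, not the calibrated model.

```python
import numpy as np

# Minimal sketch of a multiscale ARCH-like feedback rule: variance
# responds to squared returns aggregated over horizons tau, with
# power-law weights g(tau) = g0 / tau**alpha. Illustrative only.
rng = np.random.default_rng(2)
T, sigma0_sq, g0, alpha = 20000, 0.01, 0.12, 1.2
taus = np.arange(1, 201)
g = g0 / taus.astype(float) ** alpha

r = np.zeros(T)
for t in range(1, T):
    past = r[max(0, t - taus.max()):t]
    # csum[k] = sum of the last k returns, i.e. the price move
    # over horizon k (up to the available history).
    csum = np.concatenate([[0.0], np.cumsum(past[::-1])])
    r_tau = csum[np.minimum(taus, len(past))]
    var = sigma0_sq + np.sum(g * r_tau**2 / taus)
    r[t] = np.sqrt(var) * rng.standard_normal()

# Fat tails emerge from the feedback: excess kurtosis above 3.
print("kurtosis:", ((r - r.mean())**4).mean() / r.var()**2)
```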
Complex Systems Analysis of Cell Cycling Models in Carcinogenesis ; Carcinogenesis is a complex process that involves dynamically interconnected modular subnetworks that evolve under the influence of micro-environmentally induced perturbations, in nonrandom, pseudo-Markov chain processes. An appropriate n-stage model of carcinogenesis therefore involves n-valued logic treatments of nonlinear dynamic transformations of complex functional genomes and cell interactomes. Lukasiewicz algebraic logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous Boolean or fuzzy logic models of genetic activities in vivo. Such models are then applied to cell transformations during carcinogenesis based on very extensive genomic transcription and translation data from the CGAP databases supported by the NCI. Such models are represented in a Lukasiewicz topos (LT) with an n-valued Lukasiewicz algebraic logic subobject classifier description that represents nonrandom and nonlinear network activities as well as their transformations in carcinogenesis. Specific models for different types of cancer are then derived from representations of the dynamic state space of LT nonrandom, pseudo-Markov chain process network models in terms of cDNA and proteomic high-throughput analyses by ultrasensitive techniques. This novel theoretical analysis is based on extensive CGAP genomic data for human tumors, as well as recently published studies of cyclin signaling. Several such specific models suggest novel clinical trials and rational therapies of cancer through re-establishment of cell cycling inhibition in stage III cancers.
Statistical Predictive Models in Ecology: Comparison of Performances and Assessment of Applicability ; Ecological systems are governed by complex interactions which are mainly nonlinear. In order to capture this complexity and nonlinearity, statistical models have recently gained popularity. However, although these models are commonly applied in ecology, there are no studies to date aiming to assess their applicability and performance. We provide an overview of the nature of a wide range of data sets and predictive variables, from both aquatic and terrestrial ecosystems with different scales of time-dependent dynamics, and assess the applicability and robustness of predictive modeling methods on such data sets by comparing different statistical modeling approaches. The methods considered are kNN, LDA, QDA, generalized linear models (GLM), feedforward multilayer backpropagation networks and the pseudo-supervised network ARTMAP. For ecosystems involving time-dependent dynamics and periodicities whose frequencies are possibly lower than the time scale of the data considered, GLM and connectionist neural network models appear to be most suitable and robust, provided that a predictive variable reflecting these time-dependent dynamics is included in the model, either implicitly or explicitly. For spatial data which do not include any time-dependence comparable to the time scale covered by the data, on the other hand, neighborhood-based methods such as kNN and ARTMAP proved to be more robust than the other methods considered in this study. In addition, for predictive modeling purposes, a suitable, computationally inexpensive method should first be applied to the problem at hand; a good predictive performance of this method would render the computational cost and effort associated with more complex variants unnecessary.
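The flavor of such a comparison can be reproduced with off-the-shelf tools; the sketch below cross-validates several of the listed methods on a synthetic stand-in data set (ARTMAP has no scikit-learn implementation and is omitted, and logistic regression stands in for the GLM).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an ecological data set; the point is the
# comparison protocol, not the data.
X, y = make_classification(n_samples=600, n_features=8,
                           n_informative=5, random_state=0)
models = {
    "kNN": KNeighborsClassifier(5),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "GLM": LogisticRegression(max_iter=2000),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                         random_state=0),
}
for name, m in models.items():
    scores = cross_val_score(m, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```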
Confronting Lemaitre-Tolman-Bondi models with Observational Cosmology ; The possibility that we live in a special place in the universe, close to the centre of a large void, seems an appealing alternative to the prevailing interpretation of the acceleration of the universe in terms of a LCDM model with a dominant dark energy component. In this paper we confront the asymptotically flat Lemaitre-Tolman-Bondi (LTB) models with a series of observations, from Type Ia Supernovae to Cosmic Microwave Background and Baryon Acoustic Oscillations data. We propose two concrete LTB models describing a local void in which the only arbitrary functions are the radial dependence of the matter density Omega_M and the Hubble expansion rate H. We find that all observations can be accommodated within 1 sigma, for our models with 4 or 5 independent parameters. The best fit models have a chi^2 very close to that of the LCDM model. We perform a simple Bayesian analysis and show that one cannot exclude the hypothesis that we live within a large local void of an otherwise Einstein-de Sitter model.
ABC likelihood-free methods for model choice in Gibbs random fields ; Gibbs random fields (GRFs) are polymorphous statistical models that can be used to analyse different types of dependence, in particular for spatially correlated data. However, when those models are faced with the challenge of selecting a dependence structure from many, the use of standard model choice methods is hampered by the unavailability of the normalising constant in the Gibbs likelihood. In particular, from a Bayesian perspective, the computation of the posterior probabilities of the models under competition requires special likelihood-free simulation techniques like the Approximate Bayesian Computation (ABC) algorithm that is intensively used in population genetics. We show in this paper how to implement an ABC algorithm geared towards model choice in the general setting of Gibbs random fields, demonstrating in particular that there exists a sufficient statistic across models. The accuracy of the approximation to the posterior probabilities can be further improved by importance sampling on the distribution of the models. The practical aspects of the method are detailed through two applications, the test of an iid Bernoulli model versus a first-order Markov chain, and the choice of a folding structure for two proteins.
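The first application admits a compact illustration. The sketch below runs ABC model choice between an iid Bernoulli model and a first-order Markov chain for a binary sequence, using the number of ones and the number of equal adjacent pairs as summaries; treating this pair as the across-model sufficient statistic is my simplification of the construction, and the priors and tolerance are arbitrary.

```python
import numpy as np

# ABC model choice: M0 = iid Bernoulli(theta), M1 = first-order
# Markov chain with stay-probability theta. Illustrative only.
rng = np.random.default_rng(3)
n = 200

def simulate(model, theta):
    if model == 0:                        # iid Bernoulli(theta)
        return (rng.random(n) < theta).astype(int)
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    for t in range(1, n):                 # Markov: stay w.p. theta
        x[t] = x[t - 1] if rng.random() < theta else 1 - x[t - 1]
    return x

def summary(x):
    # (number of ones, number of equal adjacent pairs)
    return np.array([int(x.sum()), int(np.sum(x[1:] == x[:-1]))])

x_obs = simulate(1, 0.8)                  # pretend this is the data
s_obs = summary(x_obs)

accepted = []
for _ in range(20000):
    m = int(rng.integers(2))              # uniform prior on models
    theta = rng.random()                  # uniform prior on theta
    if np.abs(summary(simulate(m, theta)) - s_obs).max() <= 5:
        accepted.append(m)
print("accepted:", len(accepted),
      " posterior P(M1 | data) ~", np.mean(accepted))
```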
On the problem of inflation in nonlinear multidimensional cosmological models ; We consider a multidimensional cosmological model with nonlinear quadratic R^2 and quartic R^4 actions. As a matter source, we include a monopole form field, a D-dimensional bare cosmological constant and tensions of branes located at fixed points. In the spirit of Universal Extra Dimensions models, the Standard Model fields are not localized on the branes but can move in the bulk. We define conditions which ensure the stable compactification of the internal space in a zero minimum of the effective potentials. Such effective potentials may have a rather complicated form, with a number of local minima, maxima and saddle points. We then investigate inflation in these models. It is shown that the R^2 and R^4 models can have up to 10 and 22 e-foldings, respectively. These values are not sufficient to solve the homogeneity and isotropy problem but are big enough to explain the recent CMB data. Additionally, the R^4 model can provide conditions for eternal topological inflation. However, the main drawback of the given inflationary models lies in the value of the spectral index n_s, which is less than the presently observed n_s ≈ 1. For example, in the case of the R^4 model we find n_s ≈ 0.61.
Supercurrent coupling in the Faddeev-Skyrme model ; Motivated by the sigma model limit of multicomponent Ginzburg-Landau theory, a version of the Faddeev-Skyrme model is considered in which the scalar field is coupled dynamically to a one-form field called the supercurrent. This coupled model is investigated in the general setting where physical space is an oriented Riemannian manifold and the target space is a Kaehler manifold. It is shown that supercurrent coupling destroys the topological stability enjoyed by the usual Faddeev-Skyrme model, so that there can be no globally stable knot solitons in this model. Nonetheless, local energy minimizers may still exist. The first variation formula is derived and used to construct three families of static solutions of the model, all on compact domains. In particular, a coupled version of the unit-charge hopfion on a three-sphere of arbitrary radius is found. The second variation formula is derived, and used to analyze the stability of some of these solutions. A family of stable solutions is identified, though these may exist only in spaces of even dimension. Finally, it is shown that, in contrast to the uncoupled model, the coupled unit hopfion on the three-sphere of radius R is unstable for all R. This gives an explicit, exact example of supercurrent coupling destabilizing a stable solution of the uncoupled Faddeev-Skyrme model, and casts doubt on the conjecture of Babaev, Faddeev and Niemi that knot solitons should exist in the low-energy regime of two-component superconductors.
Theory of measurement-based quantum computing ; In the study of quantum computation, data is represented in terms of linear operators, which form a generalized model of probability, and computations are most commonly described as products of unitary transformations, which are the transformations that preserve the quality of the data in a precise sense. This naturally leads to unitary circuit models, which are models of computation in which unitary operators are expressed as a product of elementary unitary transformations. However, unitary transformations can also be effected as a composition of operations which are not all unitary themselves: the one-way measurement model is one such model of quantum computation. In this thesis, we examine the relationship between representations of unitary operators and decompositions of those operators in the one-way measurement model. In particular, we consider different circumstances under which a procedure in the one-way measurement model can be described as simulating a unitary circuit, by considering the combinatorial structures which are common to unitary circuits and two simple constructions of one-way based procedures. These structures lead to a characterization of the one-way measurement patterns which arise from these constructions, which can then be related to efficiently testable properties of graphs. We also consider how these characterizations provide automatic techniques for obtaining complete measurement-based decompositions from unitary transformations which are specified by operator expressions bearing a formal resemblance to path integrals. These techniques are presented as a possible means to devise new algorithms in the one-way measurement model, independently of algorithms in the unitary circuit model.
Dynamics of quantum phase transitions in Dicke and Lipkin-Meshkov-Glick models ; We consider the dynamics of Dicke models, with and without counter-rotating terms, under slow variations of parameters which drive the system through a quantum phase transition. The model without counter-rotating terms and with swept detuning is seen in the contexts of a many-body generalization of the Landau-Zener model and the dynamical passage through a second-order quantum phase transition (QPT). Adiabaticity is destroyed when the parameter crosses a critical value. Applying a semiclassical analysis based on concepts of classical adiabatic invariants and a mapping to the second Painleve equation (PII), we derive a formula which accurately describes particle distributions in the Hilbert space over a wide range of parameters and initial conditions of the system. We find striking universal features in the particle distributions which can be probed in an experiment on Feshbach resonance passage or a cavity QED experiment. The dynamics is found to be crucially dependent on the direction of the sweep. The model with counter-rotating terms has been realized recently in an experiment with ultracold atomic gases in a cavity. Its semiclassical dynamics is described by a Hamiltonian system with two degrees of freedom. Passage through a QPT corresponds to passage through a bifurcation, and can also be described by PII after averaging over fast variables, leading to similar universal distributions. Under certain conditions, the Dicke model is reduced to the Lipkin-Meshkov-Glick model.
Reduced-order models for control of fluids using the Eigensystem Realization Algorithm ; In feedback flow control, one of the challenges is to develop mathematical models that describe the fluid physics relevant to the task at hand, while neglecting irrelevant details of the flow in order to remain computationally tractable. A number of techniques are presently used to develop such reduced-order models, such as proper orthogonal decomposition (POD) and approximate snapshot-based balanced truncation, also known as balanced POD. Each method has its strengths and weaknesses: for instance, POD models can behave unpredictably and perform poorly, but they can be computed directly from experimental data; approximate balanced truncation often produces vastly superior models to POD, but requires data from adjoint simulations, and thus cannot be applied to experimental data. In this paper, we show that using the Eigensystem Realization Algorithm (ERA) of Juang and Pappa (1985), one can theoretically obtain exactly the same reduced-order models as by balanced POD. Moreover, the models can be obtained directly from experimental data, without the use of adjoint information. The algorithm can also substantially improve computational efficiency when forming reduced-order models from simulation data. If adjoint information is available, then balanced POD has some advantages over ERA: for instance, it produces modes that are useful for multiple purposes, and the method has been generalized to unstable systems. We also present a modified ERA procedure that produces modes without adjoint information, but for this procedure the resulting models are not balanced, and do not perform as well in examples. We present a detailed comparison of the methods, and illustrate them on an example of the flow past an inclined flat plate at a low Reynolds number.
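A bare-bones version of ERA itself fits in a few lines: form Hankel matrices from impulse-response samples (Markov parameters), take an SVD, and read off a reduced realization. The sketch below, with hypothetical test data, verifies the construction on a known two-state system; it omits everything specific to the paper (the balanced POD comparison, adjoints, the flat-plate example).

```python
import numpy as np

# Bare-bones ERA sketch: h[k] = C A^k B are the Markov parameters.
def era(h, r):
    m = (len(h) - 1) // 2
    H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :r], Vt[:r, :].T
    Sr_half = np.diag(np.sqrt(s[:r]))
    Sr_inv = np.diag(1.0 / np.sqrt(s[:r]))
    Ar = Sr_inv @ Ur.T @ H1 @ Vr @ Sr_inv   # reduced dynamics
    Br = (Sr_half @ Vr.T)[:, :1]            # reduced input map
    Cr = (Ur @ Sr_half)[:1, :]              # reduced output map
    return Ar, Br, Cr

# Verify on a known two-state SISO system (hypothetical test data).
A = np.array([[0.9, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(41)]
Ar, Br, Cr = era(h, r=2)
hr = [(Cr @ np.linalg.matrix_power(Ar, k) @ Br).item() for k in range(41)]
print("max impulse-response error:",
      max(abs(a - b) for a, b in zip(h, hr)))
```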
Non-asymptotic model selection for linear non-least-squares estimation in regression models and inverse problems ; We propose to address the common problem of linear estimation in linear statistical models by using a model selection approach via penalization. Depending on the framework in which the linear statistical model is considered (namely the regression framework or the inverse problem framework), a data-driven model selection criterion is obtained either under general assumptions, or under the mild assumption of model identifiability, respectively. The proposed approach was stimulated by the important recent non-asymptotic model selection results due to Birgé and Massart (mainly Birgé and Massart, 2007), and our results in this paper, like theirs, are non-asymptotic and turn out to be sharp. Our main contribution in this paper resides in the fact that these linear estimators are not necessarily least-squares estimators but can be any linear estimators. The proposed approach therefore finds potential applications in countless fields of engineering and applied science (image science, signal processing, applied statistics, coding, to name a few) in which one is interested in recovering some unknown vector quantity of interest, for example the one which achieves the best tradeoff between a term of fidelity to data and a term of regularity and/or parsimony of the solution. The proposed approach provides such applications with an interesting model selection framework that allows them to achieve such a goal.
On the Consistency of Perturbativity and Gauge Coupling Unification ; We investigate constraints that the requirements of perturbativity and gauge coupling unification impose on extensions of the Standard Model and of the MSSM. In particular, we discuss the renormalization group running in several SUSY left-right symmetric and Pati-Salam models and show how the various scales appearing in these models have to be chosen in order to achieve unification. We find that unification in the considered models occurs typically at scales below M_min(B violation) ≈ 10^16 GeV, implying potential conflicts with the non-observation of proton decay. We emphasize that extending the particle content of a model in order to push the GUT scale higher, or to achieve unification in the first place, will very often lead to non-perturbative evolution. We generalize this observation to arbitrary extensions of the Standard Model and of the MSSM and show that the requirement of perturbativity up to M_min(B violation), if considered a valid guideline for model building, severely limits the particle content of any such model, especially in the supersymmetric case. However, we also discuss several mechanisms to circumvent perturbativity and proton decay issues, for example in certain classes of extra-dimensional models.
Credit models and the crisis, or: how I learned to stop worrying and love the CDOs ; We follow a long path for Credit Derivatives and Collateralized Debt Obligations (CDOs) in particular, from the introduction of the Gaussian copula model and the related implied correlations to the introduction of arbitrage-free dynamic loss models capable of calibrating all the tranches for all the maturities at the same time. En passant, we also illustrate the implied copula, a method that can consistently account for CDOs with different attachment and detachment points but not for different maturities. The discussion is abundantly supported by market examples through history. The dangers and critiques we present of the use of the Gaussian copula and of implied correlation had all been published by us, among others, in 2006, showing that the quantitative community was aware of the model limitations before the crisis. We also explain why the Gaussian copula model is still used in its base correlation formulation, although under some possible extensions such as random recovery. Overall we conclude that the modeling effort in this area of the derivatives market is unfinished, partly for the lack of an operationally attractive single-name consistent dynamic loss model, and partly because of the diminished investment in this research area.
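For concreteness, the one-factor Gaussian copula at the center of this discussion can be sketched in a few lines of Monte Carlo: defaults are driven by a common factor plus idiosyncratic noise, and tranche losses follow by clipping the portfolio loss. The parameters below (125 names, a 3%-7% tranche, flat correlation) are illustrative, not market data.

```python
import numpy as np
from scipy.stats import norm

# One-factor Gaussian copula: name i defaults when
# X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i falls below a threshold
# matching its marginal default probability.
rng = np.random.default_rng(4)
n_names, p_default, rho, recovery = 125, 0.05, 0.3, 0.4
attach, detach = 0.03, 0.07            # e.g. the 3%-7% tranche
threshold = norm.ppf(p_default)

n_paths = 20000
M = rng.standard_normal((n_paths, 1))          # common factor
Z = rng.standard_normal((n_paths, n_names))    # idiosyncratic
X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z
portfolio_loss = (X < threshold).mean(axis=1) * (1 - recovery)

# Tranche loss: portfolio loss between the attachment and
# detachment points, normalized by the tranche width.
tranche_loss = (np.clip(portfolio_loss, attach, detach) - attach) \
               / (detach - attach)
print("expected tranche loss:", tranche_loss.mean())
```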
Opinion dynamics with confidence threshold an alternative to the Axelrod model ; The voter model and the Axelrod model are two of the main stochastic processes that describe the spread of opinions on networks. The former includes social influence, the tendency of individuals to become more similar when they interact, while the latter also accounts for homophily, the tendency to interact more frequently with individuals which are more similar. The Axelrod model has been extensively studied during the past ten years based on numerical simulations. In contrast, we give rigorous analytical results for a generalization of the voter model that is closely related to the Axelrod model as it combines social influence and confidence threshold, which is modeled somewhat similarly to homophily. Each vertex of the network, represented by a finite connected graph, is characterized by an opinion and may interact with its adjacent vertices. Like the voter model, an interaction results in an agreement between both interacting vertices social influence but unlike the voter model, an interaction takes place if and only if the vertices' opinions are within a certain distance confidence threshold. In a deterministic static approach, we first give lower and upper bounds for the maximum number of opinions that can be supported by the network as a function of the confidence threshold and various characteristics of the graph. The number of opinions coexisting at equilibrium is then investigated in a probabilistic dynamic approach for the stochastic process starting from a random configuration ...
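The dynamics is easy to simulate even though the contribution above is rigorous analysis rather than simulation; the sketch below runs the threshold voter dynamics on a cycle with ordered opinions, where a vertex copies a neighbor only when their opinions are within the confidence threshold. Sizes and parameters are arbitrary.

```python
import random

# Threshold voter dynamics on a cycle with F ordered opinions:
# a vertex copies a random neighbor's opinion only if the two
# opinions differ by at most tau (the confidence threshold).
random.seed(5)
n, F, tau, steps = 200, 10, 2, 400_000
opinion = [random.randrange(F) for _ in range(n)]
for _ in range(steps):
    i = random.randrange(n)
    j = (i + random.choice((-1, 1))) % n       # random neighbor
    if abs(opinion[i] - opinion[j]) <= tau:    # confidence threshold
        opinion[i] = opinion[j]                # social influence
print("opinions coexisting after the run:", sorted(set(opinion)))
```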
The Conceptual Integration Modeling Framework: Abstracting from the Multidimensional Model ; Data warehouses are overwhelmingly built through a bottom-up process, which starts with the identification of sources, continues with the extraction and transformation of data from these sources, and then loads the data into a set of data marts according to desired multidimensional relational schemas. End-user business intelligence tools are added on top of the materialized multidimensional schemas to drive decision making in an organization. Unfortunately, this bottom-up approach is costly both in terms of the skilled users needed and the sheer size of the warehouses. This paper proposes a top-down framework in which data warehousing is driven by a conceptual model. The framework offers both design-time and run-time environments. At design time, a business user first uses the conceptual modeling language as a multidimensional object model to specify what business information is needed; then she maps the conceptual model to a pre-existing logical multidimensional representation. At run time, a system will transform the user conceptual model together with the mappings into views over the logical multidimensional representation. We focus on how the user can conceptually abstract from an existing data warehouse, and on how this conceptual model can be mapped to the logical multidimensional representation. We also give an indication of what query language is used over the conceptual model. Finally, we argue that our framework is a step along the way to allowing automatic generation of the data warehouse.
Doubly Exponential Solution for Randomized Load Balancing Models with Markovian Arrival Processes and PH Service Times ; In this paper, we provide a novel matrix-analytic approach for studying doubly exponential solutions of randomized load balancing models (also known as supermarket models) with Markovian arrival processes (MAPs) and phase-type (PH) service times. We describe the supermarket model as a system of differential vector equations by means of density-dependent jump Markov processes, and obtain a closed-form solution with a doubly exponential structure for the fixed point of the system of differential vector equations. Based on this, we show that the fixed point can be decomposed into the product of two factors reflecting arrival information and service information, and further find that the doubly exponential solution to the fixed point is not always unique for more general supermarket models. Furthermore, we analyze the exponential convergence of the current location of the supermarket model to its fixed point, and apply the Kurtz theorem to study the density-dependent jump Markov process given in the supermarket model with MAPs and PH service times, which leads to the Lipschitz condition under which the fraction measure of the supermarket model weakly converges to the system of differential vector equations. This paper gains a new understanding of how workload probing can help in load balancing jobs with non-Poisson arrivals and non-exponential service times.
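As background, the classical supermarket model (Poisson arrivals, exponential service, join the shorter of d = 2 sampled queues) already shows the doubly exponential tail, P(queue length ≥ k) = lambda^(2^k - 1), that is generalized above to MAP arrivals and PH services; the uniformized simulation below illustrates this baseline only.

```python
import random

# Classical supermarket model: arrivals at rate lam*N, unit-rate
# exponential service, each arrival joins the shorter of d=2
# uniformly sampled queues. Simulated via uniformization: each
# event is an arrival w.p. lam/(lam+1), otherwise a potential
# service completion at a uniformly chosen server.
random.seed(6)
N, lam, d, events = 1000, 0.9, 2, 1_000_000
queues = [0] * N
for _ in range(events):
    if random.random() < lam / (lam + 1.0):
        i = min(random.sample(range(N), d), key=queues.__getitem__)
        queues[i] += 1               # arrival joins the shorter queue
    else:
        i = random.randrange(N)      # potential service completion
        if queues[i] > 0:
            queues[i] -= 1

for k in range(5):
    frac = sum(q >= k for q in queues) / N
    print(f"P(queue >= {k}) ~ {frac:.4f}   "
          f"predicted {lam ** (2**k - 1):.4f}")
```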
Testing gravity using the growth of large scale structure in the Universe ; Future galaxy surveys hope to distinguish between the dark energy and modified gravity scenarios for the accelerating expansion of the Universe using the distortion of clustering in redshift space. The aim is to model the form and size of the distortion to infer the rate at which large scale structure grows. We test this hypothesis and assess the performance of current theoretical models for the redshift space distortion using large volume N-body simulations of the gravitational instability process. We simulate competing cosmological models which have identical expansion histories (one is a quintessence dark energy model with a scalar field and the other is a modified gravity model with a time varying gravitational constant) and demonstrate that they do indeed produce different redshift space distortions. This is the first time this approach has been verified using a technique that can follow the growth of structure at the required level of accuracy. Our comparisons show that theoretical models for the redshift space distortion based on linear perturbation theory give a surprisingly poor description of the simulation results. Furthermore, the application of such models can give rise to catastrophic systematic errors leading to incorrect interpretation of the observations. We show that an improved model is able to extract the correct growth rate. Further enhancements to theoretical models of redshift space distortions, calibrated against simulations, are needed to fully exploit the forthcoming high precision clustering measurements.
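The linear perturbation theory model referred to here is, in its standard form, the Kaiser formula for the redshift-space power spectrum; the abstract does not write it out, so it is quoted below as background, with b the linear bias, mu the cosine of the angle to the line of sight, f the logarithmic growth rate, and D the linear growth factor.

```latex
% Standard linear-theory (Kaiser) model for the redshift-space
% power spectrum; quoted as background, not a formula from the paper.
P_s(k, \mu) = \left( b + f \mu^2 \right)^2 P_r(k),
\qquad
f \equiv \frac{\mathrm{d} \ln D}{\mathrm{d} \ln a} .
```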
On the size of data structures used in symbolic model checking ; Temporal logic model checking is a verification method in which we describe a system (the model) and then verify whether some properties, expressed in a temporal logic formula, hold in the system. It has many industrial applications. In order to improve performance, some tools allow preprocessing of the model, verifying a set of properties online while reusing the same compiled model; we prove that the complexity of the model checking problem is the same whether there is no preprocessing at all, or the model or the formula is preprocessed into a data structure of polynomial size. As a result, preprocessing does not always exponentially improve performance. Symbolic model checking algorithms work by manipulating sets of states, and these sets are often represented by BDDs. It has been observed that the size of BDDs may grow exponentially as the model and formula increase in size. As a side result, we formally prove that a superpolynomial increase of the size of these BDDs is unavoidable in the worst case. While this exponential growth has been empirically observed, to the best of our knowledge it has never been proved so far in general terms. This result holds not only for all types of BDDs regardless of the variable ordering, but also for more powerful data structures, such as BEDs, RBCs, MTBDDs, and ADDs.
Existence of random gradient states ; We consider two versions of random gradient models. In model A the interface feels a bulk term of random fields, while in model B the disorder enters through the potential acting on the gradients. It is well known that for gradient models without disorder there are no Gibbs measures in infinite volume in dimension d = 2, while there are gradient Gibbs measures describing an infinite-volume distribution for the gradients of the field, as was shown by Funaki and Spohn. Van Enter and Kulske proved that adding a disorder term as in model A prohibits the existence of such gradient Gibbs measures for general interaction potentials in d = 2. In the present paper we prove the existence of shift-covariant gradient Gibbs measures with a given tilt u in R^d for model A when d ≥ 3 and the disorder has mean zero, and for model B when d ≥ 1. When the disorder has nonzero mean in model A, there are no shift-covariant gradient Gibbs measures for d ≥ 3. We also prove similar results on the existence or non-existence of the surface tension for the two models, and give the characteristic properties of the respective surface tensions.
Phenomenology of supersymmetric neutrino mass models ; The origin of neutrino masses is currently one of the most intriguing questions of particle physics, and many extensions of the Standard Model have been proposed in that direction. This experimental evidence is a very robust indication of new physics, but it is not the only reason to go beyond the Standard Model. The existence of some theoretical issues supports the idea of a wider framework, supersymmetry being the most popular one. In this thesis, several supersymmetric neutrino mass models have been studied. In the first part, the phenomenology of models with bilinear-like R-parity violation is discussed in great detail, highlighting the most distinctive signatures at colliders and low energy experiments. In particular, the correlations between the LSP decay and neutrino physics are shown to be a powerful tool to put this family of models under experimental test. Other important signatures are investigated as well, like majoron emission in charged lepton decays for the case of models with spontaneous breaking of R-parity. A very different approach is followed in the second part of the thesis. Supersymmetric models with a left-right symmetry have all the ingredients to incorporate a type-I seesaw mechanism for neutrino masses and conserve R-parity at low energies. In this case, which only allows for indirect tests, the generation of neutrino masses at the high seesaw scale is encoded at low energies in the slepton soft masses. Contrary to minimal seesaw models, sizeable flavor violation in the right slepton sector is expected. Its experimental observation would be a clear hint of an underlying left-right symmetry, providing valuable information about physics at very high energies.
Recent MEG Results and Predictive SO(10) Models ; Recent results from the MEG search for the lepton-flavor-violating (LFV) muon decay μ → eγ show 3 events as the best value for the number of signal events in the maximum likelihood fit. Although this result is still far from evidence or discovery from a statistical point of view, it might be a sign of new physics beyond the Standard Model. As is well known, supersymmetric (SUSY) models can generate a μ → eγ decay rate within the search reach of the MEG experiment. A certain class of SUSY grand unified theory (GUT) models, such as the minimal SUSY SO(10) model (we call this class of models predictive SO(10) models), can unambiguously determine the fermion Yukawa coupling matrices, in particular the neutrino Dirac Yukawa matrix. Based on universal boundary conditions for the soft SUSY breaking parameters at the GUT scale, we calculate the rate of the μ → eγ process by using the completely determined Dirac Yukawa matrix in two examples of predictive SO(10) models. If we interpret the 3 events in the MEG experiment as a positive signal and combine it with other experimental constraints, such as the relic density of the neutralino dark matter and recent results on the muon g−2, we can pin down a parameter set of the universal boundary conditions. We then propose benchmark sparticle mass spectra for each predictive SO(10) model, which will be tested at the Large Hadron Collider.
Investigation of Quasi-Realistic Heterotic String Models with Reduced Higgs Spectrum ; Quasi-realistic heterotic-string models in the free fermionic formulation typically contain an anomalous U(1), which gives rise to a Fayet-Iliopoulos term that breaks supersymmetry at the one-loop level in string perturbation theory. Supersymmetry is restored by imposing F- and D-flatness on the vacuum. In Phys. Rev. D 78 (2008) 046009, we presented a three-generation free fermionic standard-like model which did not admit stringent F- and D-flat directions, and argued that all the moduli in the model are fixed. The particular property of the model was the reduction of the untwisted Higgs spectrum by a combination of symmetric and asymmetric boundary conditions with respect to the internal fermions associated with the compactified dimensions. In this paper we extend the analysis of free fermionic models with reduced Higgs spectrum to cases in which the SO(10) symmetry is left unbroken or is reduced to the flipped SU(5) subgroup. We show that all the models that we study in this paper do admit stringent flat directions. The only examples of models that do not admit stringent flat directions remain the standard-like models of Phys. Rev. D 78 (2008) 046009.
Integrability vs Supersymmetry: Poisson Structures of the Pohlmeyer Reduction ; We construct recursively an infinite number of Poisson structures for the supersymmetric integrable hierarchy governing the Pohlmeyer reduction of superstring sigma models on the target spaces AdS_n × S^n, n = 2, 3, 5. These Poisson structures are all non-local and non-relativistic, except one, which is the canonical Poisson structure of the semi-symmetric space sine-Gordon model (SSSSG). We verify that the superposition of the first three Poisson structures corresponds to the canonical Poisson structure of the reduced sigma model. Using the recursion relations, we construct commuting charges on the reduced sigma model out of those of the SSSSG model, and in the process we explain the integrable origin of the Zhukovsky map and the twisted inner product used on the sigma model side. We then compute the complete Poisson superalgebra for the conserved Drinfeld-Sokolov supercharges associated with an exotic kind of extended non-local rigid 2d supersymmetry recently introduced in the SSSSG context. The superalgebra has a kink central charge which turns out to be a generalization, to the SSSSG models, of the well-known central extensions of the N = 1 sine-Gordon and N = 2 complex sine-Gordon model Poisson superalgebras computed from 2d superspace. The computation is done in two different ways, concluding the proof of the existence of 2d supersymmetry in the reduced sigma model phase space under the boost-invariant SSSSG Poisson structure.
Implementing a Human-like Intuition Mechanism in Artificial Intelligence ; Human intuition has been simulated by several research projects using artificial intelligence techniques. Most of these algorithms or models lack the ability to handle complications or diversions, and they do not explain the factors influencing intuition or the accuracy of its results. In this paper, we present a simple series-based model for the implementation of human-like intuition using the principles of connectivity and unknown entities. Using the Poker hand and Car evaluation datasets, we compare the performance of several well-known models with our intuition model. The aim of the experiment was to predict answers as accurately as possible using intuition-based models. We found that the presence of unknown entities, diversion from the current problem scenario, and identifying weaknesses without normal logic-based execution greatly affect the reliability of the answers. In general, intuition-based models cannot substitute for logic-based mechanisms in handling such problems. Intuition can only act as a support for an ongoing logic-based model that processes all steps in a sequential manner. However, when time and computational cost are very strict constraints, this intuition-based model becomes extremely important and useful, because it can give reasonably good performance. Factors affecting intuition are analyzed and interpreted through our model.
Application of a Bayesian model inadequacy criterion for multiple data sets to radial velocity models of exoplanet systems ; We present a simple mathematical criterion for determining whether a given statistical model fails to describe several independent sets of measurements, or data modes, adequately. We derive this criterion for two data sets and generalise it to several sets by using Bayesian updating of the posterior probability density. To demonstrate its usage, we apply it to observations of exoplanet host stars by reanalysing the radial velocities of HD 217107, Gliese 581, and upsilon Andromedae, and show that the currently used models are not necessarily adequate in describing the properties of these measurements. We show that while the two data sets of Gliese 581 can be modelled reasonably well, the noise model of HD 217107 needs to be revised. We also reveal some biases in the radial velocities of upsilon Andromedae and report updated orbital parameters for the recently proposed four-planet model. Because of the generality of our criterion, no assumptions are needed on the nature of the measurements, models, or model parameters. The method we propose can be applied to any astronomical problem, as well as outside the field of astronomy, because it is a simple consequence of Bayes' rule of conditional probabilities.
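To make the flavour of such a criterion concrete, here is a minimal Python sketch (an assumed simplification for illustration, not the paper's exact statistic): if a model describes two data sets consistently, the joint marginal likelihood should not fall far below the product of the individual marginal likelihoods, so the log-odds computed below should not be strongly negative.

```python
import numpy as np
from scipy.stats import norm

def log_evidence(data, mus, prior):
    """Log marginal likelihood of N(mu, 1) data over a discrete prior on mu."""
    ll = np.array([norm.logpdf(data, mu, 1.0).sum() for mu in mus])
    m = ll.max()
    return m + np.log(np.sum(prior * np.exp(ll - m)))   # stable log-sum-exp

mus = np.linspace(-5, 5, 201)
prior = np.full_like(mus, 1.0 / len(mus))
rng = np.random.default_rng(0)
d1 = rng.normal(0.0, 1.0, 50)     # each set is fine on its own...
d2 = rng.normal(2.0, 1.0, 50)     # ...but the two disagree about mu

log_odds = (log_evidence(np.concatenate([d1, d2]), mus, prior)
            - log_evidence(d1, mus, prior) - log_evidence(d2, mus, prior))
print(log_odds)   # strongly negative: the common-mu model is inadequate
```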
A first estimate of radio halo statistics from large-scale cosmological simulation ; We present a first estimate, based on a cosmological gasdynamics simulation, of the galaxy cluster radio halo counts to be expected in forthcoming low-frequency radio surveys. Our estimate is based on a FLASH simulation of the ΛCDM model, in which we assign radio power to clusters via a model that relates radio emissivity to cluster magnetic field strength, intracluster turbulence, and density. We vary several free parameters of this model and find that radio halo number counts vary by up to a factor of two for average magnetic fields ranging from 0.2 to 3.1 μG. However, we predict significantly fewer low-frequency radio halos than expected from previous semi-analytic estimates, although this discrepancy could be explained by frequency-dependent radio halo probabilities, as predicted in reacceleration models. We find that upcoming surveys will have difficulty distinguishing models because of large uncertainties and low number counts. Additionally, according to our modeling, the expected number counts can be degenerate between reacceleration and hadronic secondary models of cosmic ray generation. We find that relations between radio power and mass or X-ray luminosity may be used to distinguish models, and by building mock radio sky maps we demonstrate that surveys such as LOFAR may have sufficient resolution and sensitivity to break this model degeneracy by imaging many individual clusters.
Flat Central Density Profile and Constant DM Surface Density in Galaxies from Scalar Field Dark Matter ; The scalar field dark matter (SFDM) model proposes that galaxies form by condensation of a scalar field (SF) very early in the universe, forming Bose-Einstein condensate (BEC) drops; i.e., in this model the haloes of galaxies are gigantic drops of SF. Large-scale structure forms hierarchically, as in ΛCDM, so all the predictions of the ΛCDM model on large scales are reproduced by SFDM. This model predicts that all galaxies must be very similar and must exist at higher redshifts than in the ΛCDM model. In this work we show that BEC dark matter haloes fit the high-resolution rotation curves of a sample of thirteen low surface brightness galaxies. We compare our fits to those obtained using the Navarro-Frenk-White (NFW) and pseudo-isothermal (PI) profiles and find better agreement with the SFDM and PI profiles. The mean value of the logarithmic inner density slopes is −0.27 ± 0.18. As a second result, we find a natural way to define the core radius, with the advantage of being model-independent. Using this new definition in the BEC density profile, we find that the recently observed constant dark matter central surface density can be reproduced. We conclude that, in light of the difficulties the standard model is currently facing, the SFDM model is a worthy alternative to keep exploring further.
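As an illustration of the kind of fit involved, here is a hedged Python sketch using the Thomas-Fermi BEC halo profile of Boehmer and Harko (2007), rho(r) = rho0 sin(kr)/(kr) with k = pi/R, for which the circular velocity is v^2(r) = (4 pi G rho0 / k^2)[sin(kr)/(kr) − cos(kr)]; the data below are synthetic stand-ins, not a real LSB rotation curve from the sample.

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_bec(r, rho0, R):
    """Thomas-Fermi BEC halo rotation curve, valid for r < R (halo radius)."""
    k = np.pi / R
    return np.sqrt(4 * np.pi * G * rho0 / k**2 *
                   (np.sin(k * r) / (k * r) - np.cos(k * r)))

# synthetic "observed" curve: true rho0 = 2e7 Msun/kpc^3, R = 10 kpc
r = np.linspace(0.5, 8.0, 20)                               # kpc
v_obs = v_bec(r, 2e7, 10.0) + np.random.default_rng(0).normal(0, 2, r.size)

popt, pcov = curve_fit(v_bec, r, v_obs, p0=(1e7, 8.0))
print(popt)  # best-fit central density [Msun/kpc^3] and halo radius R [kpc]
```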
Sea Ice Brightness Temperature as a Function of Ice Thickness, Part II: Computed curves for thermodynamically modelled ice profiles ; Ice thickness is an important variable for climate scientists and is still difficult to determine accurately from microwave radiometer measurements. There has been some success in detecting the thickness of thin ice, and with this in mind this study attempts to model the thickness-radiance relation of sea ice at frequencies employed by the Soil Moisture and Ocean Salinity (SMOS) radiometer and the Advanced Microwave Scanning Radiometer (AMSR), between 1.4 and 89 GHz. In the first part of the study, the salinity of the ice was determined by a pair of empirical relationships, while the temperature was determined by a thermodynamic model. Because the thermodynamic model can be used as a simple ice growth model, in this second part the salinities are determined by the growth model. Because the model uses two constant-weather scenarios representing two extremes (fall freeze-up and winter cold snap), brine expulsion is modelled with a single correction step founded on mass conservation. The growth model generates realistic salinity profiles; however, it overestimates the bulk salinity because gravity drainage is not accounted for. The results suggest that the formation of a skim on the ice surface is important in determining the radiance signature of thin ice, especially at lower frequencies, while scattering is important mainly at higher frequencies but at all ice thicknesses.
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm ; This article presents an algorithm that allows modeling of biological networks in a qualitative framework with continuous time. Mathematical modeling is used as a systems biology tool to answer biological questions and, more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. We propose a modeling approach that is intrinsically continuous in time. The algorithm presented here fills the gap between qualitative and quantitative modeling: it is based on a continuous-time Markov process applied on a Boolean state space. In order to describe the temporal evolution, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. The values of the transition rates have a natural interpretation: each is the inverse of the time for the corresponding transition to occur. Mathematically, this approach can be translated into a set of ordinary differential equations on probability distributions; it can therefore be seen as an approach in between quantitative and qualitative modeling. We developed a C++ software tool, MaBoSS, that is able to simulate such a system by applying the Kinetic Monte-Carlo (Gillespie) algorithm on the Boolean state space. This software, parallelized and optimized, computes the temporal evolution of probability distributions and can also estimate stationary distributions. Applications of Boolean Kinetic Monte-Carlo have been demonstrated for two qualitative models: a toy model and a published p53/Mdm2 model. Our approach allows us to describe kinetic phenomena which were difficult to handle in the original models. In particular, transient effects are represented by time-dependent probability distributions, interpretable in terms of cell populations.
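The core of such a scheme fits in a few lines. Below is a minimal, self-contained Python sketch of a Gillespie simulation on a Boolean state space (an illustration of the general idea, not MaBoSS itself or its rate language): each node carries state-dependent activation and deactivation rates, waiting times are exponential, and the node that flips is chosen with probability proportional to its rate.

```python
import random

def gillespie_boolean(rates_up, rates_down, state, t_max, rng=random.Random(0)):
    """Continuous-time Markov jump process on a Boolean state space.
    rates_up[i](state) is the 0->1 rate of node i given the current state;
    rates_down[i](state) is its 1->0 rate. Returns the sampled trajectory."""
    t, traj = 0.0, [(0.0, tuple(state))]
    while t < t_max:
        # rate of flipping each node out of its current value
        rates = [rates_down[i](state) if s else rates_up[i](state)
                 for i, s in enumerate(state)]
        total = sum(rates)
        if total == 0:                        # absorbing state reached
            break
        t += rng.expovariate(total)           # exponential waiting time
        x = rng.random() * total              # pick the node that flips
        for i, r in enumerate(rates):
            x -= r
            if x < 0:
                state[i] ^= 1
                break
        traj.append((t, tuple(state)))
    return traj

# toy two-node negative feedback: A activates B, B inhibits A
up   = [lambda s: 0.5 if not s[1] else 0.0, lambda s: 1.0 if s[0] else 0.0]
down = [lambda s: 0.2,                      lambda s: 0.3]
print(gillespie_boolean(up, down, [0, 0], t_max=20.0)[:5])
```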
Models of cuspy triaxial stellar systems. I. Stability and chaoticity ; We used the N-body code of Hernquist and Ostriker (1992) to build a dozen cuspy (γ ≈ 1) triaxial models of stellar systems through dissipationless collapses of initially spherical distributions of 10^6 particles. We chose four sets of initial conditions that resulted in models morphologically resembling E2, E3, E4 and E5 galaxies, respectively. Within each set, three different seed numbers were selected for the random number generator used to create the initial conditions, so that the three models of each set are statistically equivalent. We checked the stability of our models using the values of their central densities and of their moments of inertia, which turned out to be remarkably constant: the changes in those values were all less than 3 per cent over one Hubble time and, moreover, we show that the most likely cause of those changes is relaxation effects in the numerical code. We computed the six Lyapunov exponents of nearly 5,000 orbits in each model in order to classify regular, partially chaotic, and fully chaotic orbits. All the models turned out to be highly chaotic, with less than 25 per cent of their orbits being regular. We conclude that it is quite possible to obtain cuspy triaxial stellar models that contain large fractions of chaotic orbits and are nevertheless highly stable. The difficulty of building such models with the method of Schwarzschild (1979) should be attributed to the method itself and not to physical causes.
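For readers unfamiliar with the machinery, the largest Lyapunov exponent of an orbit can be estimated with the classic two-trajectory (Benettin-style) renormalization method; the Python sketch below applies it to a toy chaotic map rather than an N-body potential, and the paper's full six-exponent computation uses variational equations instead.

```python
import numpy as np

def max_lyapunov(step, x0, d0=1e-8, n_steps=20000, renorm=10):
    """Largest Lyapunov exponent via the two-trajectory method: evolve a
    fiducial and a shadow orbit separated by d0, renormalize the separation
    every few steps, and average the accumulated log growth factors."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    y[0] += d0
    s = 0.0
    for k in range(1, n_steps + 1):
        x, y = step(x), step(y)
        if k % renorm == 0:
            diff = (y - x + np.pi) % (2 * np.pi) - np.pi   # shortest torus distance
            d = np.linalg.norm(diff)
            s += np.log(d / d0)
            y = x + diff * (d0 / d)       # pull the shadow orbit back to d0
    return s / n_steps                    # per iteration (unit time step)

K = 6.0                                   # strongly chaotic Chirikov standard map
def step(z):
    p = (z[1] + K * np.sin(z[0])) % (2 * np.pi)
    return np.array([(z[0] + p) % (2 * np.pi), p])

print(max_lyapunov(step, [1.0, 1.0]))     # roughly 1.1 > 0, i.e. a chaotic orbit
```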
A Multi-Scale Model for Correlation in B Cell VDJ Usage of Zebrafish ; The zebrafish (Danio rerio) is one of the model animals used in the study of immunology because the dynamics of its adaptive immune system are similar to those in higher animals. In this work, we built a multi-scale model to simulate the dynamics of B cells in the primary and secondary immune responses of zebrafish, and we use this model to explain the reported correlation between the VDJ usage of B cell repertoires in individual zebrafish. We use a delay ordinary differential equation (ODE) system to model the immune responses over the 6-month lifespan of a zebrafish. This mean-field theory gives the number of high-affinity B cells as a function of time during an infection. The sequences of those B cells are then drawn from a distribution calculated by a microscopic random energy model. This generalized NK model shows that mature B cells specific to one antigen largely possess a single VDJ recombination. The model allows a first-principles calculation of the probability, p, that two zebrafish responding to the same antigen will select the same VDJ recombination. This probability p increases with the B cell population size and the B cell selection intensity, and decreases with the B cell hypermutation rate. The multi-scale model predicts correlations in the immune system of the zebrafish that are highly similar to those found in experiment.
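As a minimal illustration of the mean-field layer, the Python sketch below integrates a toy delay ODE for a clonal B cell population stimulated by antigen seen tau days earlier; the functional form and all parameters are assumptions for illustration, not the paper's equations.

```python
import numpy as np

tau, dt, T = 2.0, 0.01, 60.0            # delay [days], time step, horizon
lag = int(tau / dt)
t = np.arange(0.0, T, dt)
A = np.exp(-0.5 * (t - 10.0) ** 2)      # antigen pulse around day 10
B = np.zeros_like(t)
B[0] = 1.0                              # initial naive clone size
for k in range(1, len(t)):
    stim = A[k - lag] if k >= lag else 0.0                 # antigen tau days ago
    B[k] = B[k - 1] + (2.0 * stim - 0.1) * B[k - 1] * dt   # delayed proliferation minus death
print(t[np.argmax(B)], B.max())         # response peaks a few days after the pulse
```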
Quantum Group Theory in the τ^(2)-Model, Duality of the τ^(2)-Model and the XXZ-Model with Cyclic U_q(sl_2)-Representation for q^N = 1, and the Chiral Potts Model ; We identify the quantum group U_w(sl_2) in the L-operator of the τ^(2)-model for a generic parameter w as a subalgebra of U_q(sl_2) with w = q^2. In the roots-of-unity case, q^n = ω^N = 1 with w = ω, the eigenvalues and eigenvectors of the XXZ-model with the U_q(sl_2)-cyclic representation are determined by the τ^(2)-model with the induced U_ω(sl_2)-cyclic representation, which is decomposed as a finite sum of τ^(2)-models appearing in the non-superintegrable inhomogeneous N-state chiral Potts model. Through the theory of the chiral Potts model, the Q-operator of the XXZ-model can be identified with the related chiral Potts transfer matrices, with special features appearing in the n = 2N (e.g., N even) case. We also establish the duality of τ^(2)-models related to cyclic representations of U_q(sl_2), analogous to the τ^(2)-duality in the chiral Potts model, and identify the model dual to the XXZ-model with the U_q(sl_2)-cyclic representation.
High-Dimensional Covariance Decomposition into Sparse Markov and Independence Domains ; In this paper, we present a novel framework incorporating a combination of sparse models in different domains. We posit the observed data as generated from a linear combination of a sparse Gaussian Markov model (with a sparse precision matrix) and a sparse Gaussian independence model (with a sparse covariance matrix). We provide efficient methods for decomposition of the data into the two domains, viz. the Markov and independence domains. We characterize a set of sufficient conditions for identifiability and model consistency. Our decomposition method is based on a simple modification of the popular ℓ1-penalized maximum-likelihood estimator (ℓ1-MLE). We establish that our estimator is consistent in both domains, i.e., it successfully recovers the supports of both the Markov and independence models, when the number of samples n scales as n = Ω(d^2 log p), where p is the number of variables and d is the maximum node degree in the Markov model. Our conditions for recovery are comparable to those of the ℓ1-MLE for consistent estimation of a sparse Markov model, and thus we guarantee successful high-dimensional estimation of a richer class of models under comparable conditions. Our experiments validate these results and also demonstrate that our models have better inference accuracy under simple algorithms such as loopy belief propagation.
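A small numerical illustration of why the decomposition matters (a hedged Python sketch using a plain graphical lasso, not the paper's modified ℓ1-MLE or its guarantees): when data come from the sum of a chain-Markov component and a sparse-covariance independence component, a vanilla sparse-precision fit is forced to absorb the independence entry as a spurious conditional-dependence edge.

```python
import numpy as np
from sklearn.covariance import graphical_lasso, empirical_covariance

rng = np.random.default_rng(1)
p, n = 20, 5000

prec = 1.5 * np.eye(p)                    # Markov part: chain-structured precision
for i in range(p - 1):
    prec[i, i + 1] = prec[i + 1, i] = 0.3
Sigma_markov = np.linalg.inv(prec)

Sigma_indep = 0.5 * np.eye(p)             # independence part: sparse covariance
Sigma_indep[2, 7] = Sigma_indep[7, 2] = 0.4

X = rng.multivariate_normal(np.zeros(p), Sigma_markov + Sigma_indep, size=n)
S = empirical_covariance(X)
_, prec_hat = graphical_lasso(S, alpha=0.02)

# The chain edges are recovered, but a spurious (2, 7) edge appears because a
# sparse-precision model alone cannot represent the sparse-covariance part.
print(np.round(prec_hat[2, 7], 3))        # noticeably nonzero
print(np.round(prec_hat[2, 4], 3))        # a true non-edge: near zero
```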
Discriminative Learning via Semidefinite Probabilistic Models ; Discriminative linear models are a popular tool in machine learning. They can generally be divided into two types. The first is linear classifiers, such as support vector machines, which are well studied and provide state-of-the-art results; one shortcoming of these models is that their output (known as the 'margin') is not calibrated and cannot be translated naturally into a distribution over the labels, so it is difficult to incorporate them as components of larger systems, unlike probabilistic approaches. The second type constructs class-conditional distributions using a nonlinearity (e.g., log-linear models), but is occasionally worse in terms of classification error. We propose a supervised learning method which combines the best of both approaches. Specifically, our method provides a distribution over the labels which is a linear function of the model parameters. As a consequence, differences between probabilities are linear functions, a property which most probabilistic models (e.g., log-linear) do not have. Our model assumes that classes correspond to linear subspaces rather than to half-spaces. Using a relaxed projection operator, we construct a measure which evaluates the degree to which a given vector 'belongs' to a subspace, resulting in a distribution over labels. Interestingly, this view is closely related to similar concepts in quantum detection theory. The resulting models can be trained either to maximize the margin or to optimize average likelihood measures. The corresponding optimization problems are semidefinite programs which can be solved efficiently. We illustrate the performance of our algorithm on real-world datasets and show that it outperforms second-order kernel methods.
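The measurement view can be sketched in a few lines of Python: per-class positive semidefinite matrices that sum to the identity (a POVM, as in quantum detection theory) assign every unit-norm feature vector a nonnegative, automatically normalized distribution over labels. The matrices below are random stand-ins for learned parameters, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(6)
d, k = 10, 3

# random PSD stand-ins for learned per-class matrices...
E = [(lambda b: b @ b.T)(rng.normal(size=(d, d))) for _ in range(k)]
# ...rescaled so they sum to the identity (a POVM): E_y -> L^T E_y L,
# where L L^T = S^{-1} for S = sum of the raw matrices
S = sum(E)
L = np.linalg.cholesky(np.linalg.inv(S))
E = [L.T @ e @ L for e in E]

x = rng.normal(size=d)
x /= np.linalg.norm(x)                 # unit-norm feature vector
p = np.array([x @ e @ x for e in E])   # nonnegative per-class scores...
print(p, p.sum())                      # ...that sum to exactly 1
```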
A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation ; Optogenetics is a new tool for studying neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge: modeling a high-frequency point process (neuronal spikes) while the input is another high-frequency point process (light flashes). We further develop a generalized linear model approach to model the relationship between the two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations linking the input point process to the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value, as high as 93%, for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding of how neural circuits are altered under various disease and/or experimental conditions by comparing the PRO parameters.
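A minimal Python sketch of the general ingredients (a binned Poisson GLM with lagged covariates built from the input point process; simulated data and a plain exponential kernel, not the paper's additive PRO response functions):

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
T, n_lags = 20000, 30                            # 1 ms bins, 30 ms of input history
flashes = (rng.random(T) < 0.01).astype(float)   # input point process (light)
kernel = 0.8 * np.exp(-np.arange(n_lags) / 5.0)  # assumed true response kernel
rate = np.exp(-3.0 + np.convolve(flashes, kernel)[:T])
spikes = rng.poisson(rate)                       # output point process (spikes)

# lagged design matrix: column k holds the flash k bins in the past
X = np.column_stack([np.roll(flashes, k) for k in range(n_lags)])
fit = PoissonRegressor(alpha=1e-4, max_iter=500).fit(X[n_lags:], spikes[n_lags:])
print(np.round(fit.coef_[:5], 2))                # recovers the short-lag response
```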
Demographic noise and resilience in a semi-arid ecosystem model ; The scarcity of water characterising drylands forces vegetation to adopt appropriate survival strategies. Some of these generate water-vegetation feedback mechanisms that can lead to spatial self-organisation of vegetation, as has been shown with models representing plants by a biomass density varying continuously in time and space. However, although plants are usually quite plastic, they also display discrete qualities and stochastic behaviour. These features may give rise to demographic noise, which in certain cases can influence the qualitative dynamics of ecosystem models. In the present work we explore the effects of demographic noise on the resilience of a model semi-arid ecosystem. We introduce a spatial stochastic eco-hydrological hybrid model in which plants are modelled as discrete entities subject to stochastic dynamical rules, while the dynamics of surface and soil water are described by continuous variables. The model has a deterministic approximation very similar to previous continuous models of arid and semi-arid ecosystems. By means of numerical simulations we show that demographic noise can have important effects on the extinction and recovery dynamics of the system. In particular, we find that the stochastic model escapes extinction under a wide range of conditions for which the corresponding deterministic approximation predicts absorption into desert states.
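The basic mechanism by which demographic noise changes extinction behaviour can be seen in a stripped-down (non-spatial, water-free) birth-death caricature, sketched below in Python; the rates and parameters are illustrative assumptions, not the hybrid model's.

```python
import numpy as np

def birth_death(n0, b, d, K, t_max, rng):
    """Logistic birth-death process: birth rate b*n*(1 - n/K), death rate d*n.
    The deterministic limit settles at n* = K*(1 - d/b) > 0, but demographic
    noise can still carry a finite population into the absorbing state n = 0."""
    n, t = n0, 0.0
    while n > 0 and t < t_max:
        rb = max(b * n * (1.0 - n / K), 0.0)
        rd = d * n
        t += rng.exponential(1.0 / (rb + rd))          # Gillespie waiting time
        n += 1 if rng.random() < rb / (rb + rd) else -1
    return n

rng = np.random.default_rng(2)
runs = [birth_death(10, b=1.0, d=0.8, K=50, t_max=200.0, rng=rng)
        for _ in range(200)]
print(sum(r == 0 for r in runs) / len(runs))  # extinct fraction; deterministic: 0
```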
One-dimensional model of inertial pumping ; A one-dimensional model of inertial pumping is introduced and solved. The pump is driven by a high-pressure vapor bubble generated by a microheater positioned asymmetrically in a microchannel. The bubble is approximated as a short-term impulse delivered to the two fluidic columns inside the channel. Fluid dynamics is described by a Newton-like equation with a variable mass, but without the mass-derivative term. Because of its smaller inertia, the short column refills the channel faster and accumulates a larger mechanical momentum. After bubble collapse the total fluid momentum is nonzero, resulting in a net flow. Two different versions of the model are analyzed in detail, analytically and numerically. In the symmetrical model, the pressure at the channel-reservoir connection plane is assumed constant, whereas in the asymmetrical model it is reduced by a Bernoulli term. For low and intermediate vapor bubble pressures, both models predict the existence of an optimal microheater location. The predicted net flow in the asymmetrical model is smaller by a factor of about 2. For unphysically large vapor pressures, the asymmetrical model predicts saturation of the effect, while in the symmetrical model the net flow increases indefinitely. Pumping is reduced by nonzero viscosity, but to a different degree depending on the microheater location.
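A hedged numerical caricature of the symmetrical variant is sketched below in Python (all parameter values are assumptions): each column obeys rho*L(t)*dv/dt = -dP after the impulse, with the mass-derivative term dropped as in the model, and integrating both columns until their inner faces meet shows that the total momentum at bubble collapse is nonzero.

```python
# all values are illustrative assumptions (SI units)
rho, dP = 1.0e3, 1.0e5           # fluid density, ambient-minus-bubble pressure
ell, a = 1.0e-3, 0.25e-3         # channel length, heater offset from near end
p0, dt = 1.0, 1.0e-9             # bubble impulse per unit area [Pa s], time step

u = w = 0.0                      # outward displacements of the two inner faces
vu = p0 / (rho * a)              # impulse: the short column starts faster
vw = p0 / (rho * (ell - a))
while u + w >= 0.0:              # integrate until the faces meet (collapse)
    vu -= dP / (rho * (a - u)) * dt          # variable-mass deceleration
    vw -= dP / (rho * (ell - a - w)) * dt
    u += vu * dt
    w += vw * dt

# lab-frame momentum per unit area at collapse (+x: short end to long end)
net = -rho * (a - u) * vu + rho * (ell - a - w) * vw
print(net)   # > 0: net momentum directed from the short toward the long column
```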
Stratocumulus over the South-East Pacific: Idealized 2D simulations with the Lagrangian Cloud Model ; In this paper an LES model with a Lagrangian representation of microphysics is used to simulate stratocumulus clouds in an idealized 2D setup based on the VOCALS observations. The general features of the cloud simulated by the model, such as the cloud water mixing ratio and the cloud droplet number profile, agree well with the observations. The model captures the observed relation between the aerosol distribution and concentration measured below the cloud and the cloud droplet number. Averaged over the whole cloud, the droplet spectrum from the numerical model and the observed droplet spectrum are similar, with the observations showing a higher concentration of droplets bigger than 25 μm. Much bigger differences are present when comparing modelled and observed droplet spectra at specific model levels. Despite the fact that the microphysics is formulated in a Lagrangian framework, the standard deviation of the cloud droplet distribution is larger than 1 μm. There is no significant narrowing of the cloud droplet distribution in the updrafts, but the distribution in the updrafts is narrower than in the downdrafts. Modelled standard deviation profiles agree well with observations for moderate-to-high cloud droplet numbers, with a much narrower than observed droplet spectrum for low droplet numbers. Model results show that a significant percentage of droplets containing aerosol bigger than 0.3 μm did not reach their activation radius yet exceed 1 μm, which is typically measured as a cloud droplet. Also, the relationship between aerosol sizes and cloud droplet sizes is complex; there is a broad range of possible cloud droplet sizes for a given aerosol size.
Gaussian Process Regression with Heteroscedastic or Non-Gaussian Residuals ; Gaussian process (GP) regression models typically assume that residuals are Gaussian and have the same variance for all observations. However, applications with input-dependent noise (heteroscedastic residuals) frequently arise in practice, as do applications in which the residuals do not have a Gaussian distribution. In this paper, we propose a GP regression model with a latent variable that serves as an additional unobserved covariate for the regression. This model, which we call GPLC, allows for heteroscedasticity because it allows the function to have a changing partial derivative with respect to this unobserved covariate. With a suitable covariance function, our GPLC model can handle (a) Gaussian residuals with input-dependent variance, (b) non-Gaussian residuals with input-dependent variance, and (c) Gaussian residuals with constant variance. We compare our model, using synthetic datasets, with a model proposed by Goldberg, Williams and Bishop (1998), which we refer to as GPLV and which only deals with case (a), as well as with a standard GP model, which can handle only case (c). Markov chain Monte Carlo methods are developed for both models. Experiments show that when the data are heteroscedastic, both GPLC and GPLV give better results (smaller mean squared error and negative log-probability density) than standard GP regression. In addition, when the residuals are Gaussian, our GPLC model is generally nearly as good as GPLV, while when the residuals are non-Gaussian, our GPLC model is better than GPLV.
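For intuition, the Python sketch below shows the simplest version of input-dependent noise estimation, a two-step scheme in the spirit of the GPLV comparison model (fit a GP mean, then fit a second GP to the log squared residuals); it is a hedged illustration, not the paper's GPLC construction or its MCMC.

```python
import numpy as np

def sqexp(a, b, ls, sf):
    """Squared-exponential kernel matrix for 1-D inputs."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def gp_mean(xs, x, y, ls=1.0, sf=1.0, noise=0.1):
    """Posterior GP mean at xs given training data (x, y)."""
    K = sqexp(x, x, ls, sf) + noise**2 * np.eye(len(x))
    return sqexp(xs, x, ls, sf) @ np.linalg.solve(K, y)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-3, 3, 150))
sd_true = 0.05 + 0.3 / (1 + np.exp(-2 * x))      # noise level rising with x
y = np.sin(x) + sd_true * rng.normal(size=len(x))

m = gp_mean(x, x, y)                              # step 1: mean fit
log_r2 = np.log((y - m)**2 + 1e-12)
# E[log chi^2_1] = -1.27, so add it back when converting to a noise scale
sd_hat = np.exp(0.5 * (gp_mean(x, x, log_r2, ls=1.5, sf=2.0, noise=1.0) + 1.27))
print(sd_true[::30].round(2), sd_hat[::30].round(2))   # roughly recovered trend
```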
Convective overshoot mixing in stellar interior models ; Convective overshoot mixing plays an important role in stellar structure and evolution, but it remains a long-standing problem, and its treatment is one of the most uncertain factors in stellar physics. As is well known, convective and overshoot mixing is determined by the radial flux of each chemical component. In this paper, a local model of the radial chemical component flux is established based on the hydrodynamic equations and some model assumptions, and it is tested in stellar models. The main conclusions are as follows. (i) The local model shows that convective and overshoot mixing can be regarded as a diffusion process, with the same diffusion coefficient for all chemical elements. However, if the non-local terms, i.e., the turbulent convective transport of the radial chemical component flux, are taken into account, the diffusion coefficient for each chemical element should in general be different. (ii) The diffusion coefficient of convective/overshoot mixing behaves differently in the convection zone and in the overshoot region, because the characteristic length scale of the mixing is large in the convection zone and small in the overshoot region; overshoot mixing should be regarded as a weak mixing process. (iii) The resulting diffusion coefficient is tested in stellar models, and it is found that a single choice of our central mixing parameter leads to consistent results for a solar convective envelope model as well as for core convection models of stars with masses from 2 Msun to 10 Msun.
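Treating overshoot as a diffusion process makes the numerics straightforward; the Python sketch below applies explicit diffusion steps to a chemical abundance profile with an illustrative exponentially decaying coefficient beyond the convective boundary (the profile shape and all numbers are assumptions, not the paper's derived coefficient).

```python
import numpy as np

n = 400
r = np.linspace(0.0, 1.0, n)                   # normalized radius
r_cz, D0 = 0.6, 1e-5                           # convective boundary, peak D
# strong mixing inside the convection zone, decaying beyond the boundary
D = np.where(r < r_cz, D0, D0 * np.exp(-(r - r_cz) / 0.02))
X = np.where(r < r_cz, 0.0, 1.0)               # mixed core vs fresh envelope

dt, dr = 0.1, r[1] - r[0]                      # explicit scheme: D*dt/dr^2 < 0.5
for _ in range(20000):
    flux = -0.5 * (D[1:] + D[:-1]) * np.diff(X) / dr   # interface fluxes
    X[1:-1] -= dt * np.diff(flux) / dr                 # conservative update
print(X[220:300:10].round(3))   # step softened over ~ the decay length past r_cz
```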
Constraints on the Tensor-to-Scalar ratio for non-power-law models ; Recent cosmological observations hint at a deviation from the simple power-law form of the primordial spectrum of curvature perturbations. In this paper we show that, in the presence of a tensor component, a turnover in the initial spectrum is preferred by current observations, and hence non-power-law models ought to be considered. For instance, for a power-law parameterisation with both a tensor component and a running parameter, current data show a preference for negative running at more than 2.5σ C.L. As a consequence of this deviation from a power law, constraints on the tensor-to-scalar ratio r are slightly broader. We also present constraints on the inflationary parameters for a model-independent reconstruction and for the Lasenby & Doran (LD) model. In particular, the constraint on the tensor-to-scalar ratio from the LD model is r_LD = 0.11 ± 0.024. In addition to current data, we show the constraints expected from Planck-like and CMBPol-sensitivity experiments by using Markov chain Monte Carlo sampling chains. For all the models, we include the Bayesian evidence to perform a model selection analysis. The Bayes factor, using current observations, shows a strong preference for the LD model over the standard power-law parameterisation, and provides an insight into the accuracy of differentiating models through future surveys.
The formation of IRIS diagnostics I. A quintessential model atom of Mg II and general formation properties of the Mg II h&k lines ; NASA's Interface Region Imaging Spectrograph (IRIS) space mission will study how the solar atmosphere is energized. IRIS contains an imaging spectrograph that covers the Mg II h&k lines as well as a slit-jaw imager centered at Mg II k. Understanding the observations will require forward modeling of Mg II h&k line formation from 3D radiation-MHD models. This paper is the first in a series where we undertake this forward modeling. We discuss the atomic physics pertinent to h&k line formation, present a quintessential model atom that can be used in radiative transfer computations, and discuss the effects of partial redistribution (PRD) and 3D radiative transfer on the emergent line profiles. We conclude that Mg II h&k can be modeled accurately with a 4-level plus continuum Mg II model atom. Ideally, radiative transfer computations should be done in 3D including PRD effects. In practice this is currently not possible. A reasonable compromise is to use 1D PRD computations to model the line profile up to and including the central emission peaks, and to use 3D transfer assuming complete redistribution to model the central depression.
Stringent Restriction from the Growth of Large-Scale Structure on Apparent Acceleration in Inhomogeneous Cosmological Models ; Probes of cosmic expansion constitute the main basis for arguments to support or refute a possible apparent acceleration due to different expansion rates in the universe, as described by inhomogeneous cosmological models. In this Letter we present a separate argument, based on the analysis of the growth rate of large-scale structure in the universe as modeled by the inhomogeneous cosmological models of Szekeres. We use the models with no assumptions of spherical or axial symmetry. We find that while the Szekeres models can fit the observed expansion history very well without a Λ, they fail to produce the observed late-time suppression in the growth unless Λ is added to the dynamics. A simultaneous fit to the supernova and growth factor data shows that the cold dark matter model with a cosmological constant (ΛCDM) is consistent with the data at a confidence level of 99.65%, while the Szekeres model without Λ achieves only a 60.46% level. When the data sets are considered separately, the Szekeres model with no Λ fits the supernova data as well as ΛCDM does, but provides a very poor fit to the growth data, with only a 31.31% consistency level compared to 99.99% for ΛCDM. This absence of late-time growth suppression in inhomogeneous models without a Λ is consolidated by a physical explanation.
Distortion of genealogical properties when the sample is very large ; Study sample sizes in human genetics are growing rapidly, and in due course it will become routine to analyze samples with hundreds of thousands, if not millions, of individuals. In addition to posing computational challenges, such large sample sizes call for carefully re-examining the theoretical foundations underlying commonly used analytical tools. Here, we study the accuracy of the coalescent, a central model for studying the ancestry of a sample of individuals. The coalescent arises as a limit of a large class of random mating models, and it is an accurate approximation to the original model provided that the population size is sufficiently larger than the sample size. We develop a method for performing exact computation in the discrete-time Wright-Fisher (DTWF) model and compare several key genealogical quantities of interest with the coalescent predictions. For realistic demographic scenarios, we find that there are a significant number of multiple- and simultaneous-merger events under the DTWF model, which are absent in the coalescent by construction. Furthermore, for large sample sizes, there are noticeable differences in the expected number of rare variants between the coalescent and the DTWF model. To balance the trade-off between accuracy and computational efficiency, we propose a hybrid algorithm that utilizes the DTWF model for the recent past and the coalescent for the more distant past. Our results demonstrate that the hybrid method with only a handful of generations of the DTWF model leads to a frequency spectrum that is quite close to the prediction of the full DTWF model.
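The divergence between the two models is easy to demonstrate by direct simulation of a single DTWF generation (a Python sketch under the standard neutral setup; the sizes and thresholds are illustrative): once the sample size n is comparable to the square root of the population size N, multiple and simultaneous mergers, which the coalescent forbids, become common.

```python
import numpy as np

rng = np.random.default_rng(5)

def non_coalescent_fraction(n, N, reps=20_000):
    """Fraction of single DTWF generations (n lineages, N parents) containing
    an event the coalescent forbids: a triple-or-larger merger, or two or
    more pairwise mergers in the same generation."""
    count = 0
    for _ in range(reps):
        parents = rng.integers(0, N, size=n)    # each lineage picks a parent
        c = np.bincount(parents, minlength=N)
        if c.max() >= 3 or (c >= 2).sum() >= 2:
            count += 1
    return count / reps

for n in (50, 100, 500):
    print(n, non_coalescent_fraction(n, N=10_000))   # rises sharply with n
```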