Large BR(h → τμ) in Supersymmetric Models ; We analyze the Lepton Flavor Violating (LFV) Higgs decay h → τμ in three supersymmetric models: the Minimal Supersymmetric Standard Model (MSSM), the Supersymmetric Seesaw Model (SSM), and the Supersymmetric B-L model with Inverse Seesaw (BLSSMIS). We show that in the generic MSSM, with non-universal slepton masses and/or trilinear couplings, it is not possible to enhance BR(h → τμ) without violating the experimental bound on BR(τ → μγ). In the SSM, where flavor mixing is radiatively generated, the LFV process μ → eγ strictly constrains the parameter space and the maximum value of BR(h → τμ) is of order 10^{-10}, which is far below the recent results reported by the CMS and ATLAS experiments. In the BLSSMIS, with universal soft SUSY breaking terms at the grand unified scale, we emphasize that the measured values of BR(h → τμ) can be accommodated in a wide region of parameter space without violating LFV constraints. Thus, confirming the LFV Higgs decay results would be a clear signal of BLSSMIS-type models. Finally, the signal of h → τμ in the BLSSMIS at the LHC, which has a tiny background, is analyzed.
Dual Learning for Machine Translation ; While neural machine translation (NMT) has made good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop and generate informative feedback signals to train the translation models, even without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. Based on the feedback signals generated during this process (e.g., the language-model likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using policy gradient methods). We call the corresponding approach to neural machine translation dual-NMT. Experiments show that dual-NMT works very well on English↔French translation; especially, by learning from monolingual data (with 10% bilingual data for warm start), it achieves an accuracy comparable to NMT trained on the full bilingual data for the French-to-English translation task.
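To make the closed-loop feedback concrete, here is a minimal, hypothetical sketch of how the two reward signals described above could be combined for a policy-gradient update; the function names, the weighting parameter alpha, and the toy stand-in models are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the dual-learning reward (illustrative, not the paper's code).
import random

def dual_learning_reward(sentence_x, primal_translate, dual_translate,
                         lm_score_y, alpha=0.5):
    """Combine the language-model and reconstruction feedback signals."""
    y_mid = primal_translate(sentence_x)          # primal step, e.g. En -> Fr
    r_lm = lm_score_y(y_mid)                      # fluency of the translation
    x_rec = dual_translate(y_mid)                 # dual step, e.g. Fr -> En
    r_rec = 1.0 if x_rec == sentence_x else 0.0   # toy reconstruction score
    return alpha * r_lm + (1.0 - alpha) * r_rec   # reward for the policy-gradient update

# Toy stand-ins so the sketch runs end to end.
primal = lambda s: s[::-1]            # pretend "translation"
dual = lambda s: s[::-1]              # its inverse
lm = lambda s: random.uniform(0, 1)   # pretend language-model likelihood
print(dual_learning_reward("hello world", primal, dual, lm))
```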
Effects of additive noise on the stability of glacial cycles ; It is well acknowledged that the sequence of glacial-interglacial cycles is paced by the astronomical forcing. However, how robust is the sequence against natural fluctuations associated, for example, with the chaotic motions of the atmosphere and oceans? In this article, the stability of the glacial-interglacial cycles is investigated on the basis of simple conceptual models. Specifically, we study the influence of additive white Gaussian noise on the sequence of glacial cycles generated by stochastic versions of several low-order dynamical system models proposed in the literature. In the original deterministic case, the models exhibit different types of attractors: a quasiperiodic attractor, a piecewise continuous attractor, strange nonchaotic attractors, and a chaotic attractor. We show that the combination of the quasiperiodic astronomical forcing and additive fluctuations induces a form of temporarily quantised instability. More precisely, climate trajectories corresponding to different noise realizations generally cluster around a small number of stable or transiently stable trajectories present in the deterministic system. Furthermore, these stochastic trajectories may show sensitive dependence on very small amounts of perturbation at key times. Consistently with the complexity of each attractor, the number of trajectories leaking from the clusters may range from almost zero (the model with a quasiperiodic attractor) to a significant fraction of the total (the model with a chaotic attractor), the models with strange nonchaotic attractors being intermediate. Finally, we discuss the implications of this investigation for research programmes based on numerical simulators.
Benchmarking inverse statistical approaches for protein structure and design with exactly solvable models ; Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSAs) are emerging as powerful tools in computational biology. However, the underlying assumptions about the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use the lattice protein (LP) model to benchmark these inverse statistical approaches. We build MSAs of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSAs. We find that the inferred Potts Hamiltonians reproduce many important aspects of the 'true' LP structures and energetics. Careful analysis reveals that the effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models, used as protein Hamiltonians for the design of new sequences, are able to generate with high probability completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSAs are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.
Chaos in a nonautonomous nonlinear system describing asymmetric water wheels ; We use physical principles to derive a water wheel model under the assumption of an asymmetric water wheel for which the water inflow rate is in general unsteady, modeled by an arbitrary function of time. Our model allows one to recover the asymmetric water wheel with steady flow rate, as well as the symmetric water wheel, as special cases. Under physically reasonable assumptions we then reduce the underlying model into a nonautonomous nonlinear system. To determine parameter regimes giving chaotic dynamics in this nonautonomous nonlinear system, we consider an application of competitive modes analysis. To apply this method to a nonautonomous system, we first generalize the competitive modes analysis so that it is applicable to nonautonomous systems. The nonautonomous nonlinear water wheel model is shown to satisfy the competitive modes conditions for chaos in certain parameter regimes, and we employ the obtained parameter regimes to construct the chaotic attractors. As anticipated, the asymmetric unsteady water wheel exhibits more disorder than does the asymmetric steady water wheel, which in turn is less regular than the symmetric steady-state water wheel. Our results suggest that chaos should be fairly ubiquitous in the asymmetric water wheel model with unsteady inflow of water.
When Do Birds of a Feather Flock Together? k-Means, Proximity, and Conic Programming ; Given a set of data, one central goal is to group them into clusters based on some notion of similarity between the individual objects. One of the most popular and widely used approaches is k-means, despite the computational hardness of finding its global minimum. We study and compare the properties of different convex relaxations by relating them to corresponding proximity conditions, an idea originally introduced by Kumar and Kannan. Using conic duality theory, we present an improved proximity condition under which the Peng-Wei relaxation of k-means recovers the underlying clusters exactly. Our proximity condition improves upon that of Kumar and Kannan, and is comparable to that of Awasthi and Sheffet, where proximity conditions are established for projective k-means. In addition, we provide a necessary proximity condition for the exactness of the Peng-Wei relaxation. For the special case of equal cluster sizes, we establish a different and completely localized proximity condition under which the Amini-Levina relaxation yields exact clustering, thereby addressing an open problem raised by Awasthi and Sheffet in the balanced case. Our framework is not only deterministic and model-free but also comes with a clear geometric meaning which allows for further analysis and generalization. Moreover, it can be conveniently applied to analyzing various data generative models such as the stochastic ball models and Gaussian mixture models. With this method, we improve the current minimum separation bound for the stochastic ball models and achieve state-of-the-art results for learning Gaussian mixture models.
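For orientation, the Peng-Wei relaxation referred to above is commonly stated as the following semidefinite program over a doubly nonnegative matrix Z; the notation (squared-distance matrix D, number of clusters k) is ours, not the paper's.

```latex
\min_{Z \in \mathbb{R}^{N \times N}} \ \langle D, Z \rangle
\quad \text{s.t.} \quad
Z \succeq 0,\quad Z \ge 0 \ \text{(entrywise)},\quad Z\mathbf{1} = \mathbf{1},\quad \operatorname{tr}(Z) = k,
```

where D_{ij} = ||x_i - x_j||^2; exact recovery means the optimizer coincides with the normalized block-diagonal indicator matrix of the planted clusters.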
3D Object Discovery and Modeling Using Single RGB-D Images Containing Multiple Object Instances ; Unsupervised object modeling is important in robotics, especially for handling a large set of objects. We present a method for unsupervised 3D object discovery, reconstruction, and localization that exploits multiple instances of an identical object contained in a single RGB-D image. The proposed method does not rely on segmentation, scene knowledge, or user input, and thus is easily scalable. Our method aims to find recurrent patterns in a single RGB-D image by utilizing the appearance and geometry of salient regions. We extract keypoints and match them in pairs based on their descriptors. We then generate triplets of keypoints matching with each other, using several geometric criteria to minimize false matches. The relative poses of the matched triplets are computed and clustered to discover sets of triplet pairs with similar relative poses. Triplets belonging to the same set are likely to belong to the same object and are used to construct an initial object model. Detection of the remaining instances with the initial object model using RANSAC allows us to further expand and refine the model. The automatically generated object models are both compact and descriptive. We show quantitative and qualitative results on RGB-D images with various objects, including some from the Amazon Picking Challenge. We also demonstrate the use of our method in an object picking scenario with a robotic arm.
Unsupervised Context-Sensitive Spelling Correction of English and Dutch Clinical Free-Text with Word and Character N-Gram Embeddings ; We present an unsupervised context-sensitive spelling correction method for clinical free-text that uses word and character n-gram embeddings. Our method generates misspelling replacement candidates and ranks them according to their semantic fit, by calculating a weighted cosine similarity between the vectorized representation of a candidate and the misspelling context. To tune the parameters of this model, we generate self-induced spelling error corpora. We perform our experiments for two languages. For English, we greatly outperform off-the-shelf spelling correction tools on a manually annotated MIMIC-III test set, and counter the frequency bias of a noisy channel model, showing that neural embeddings can be successfully exploited to improve upon the state-of-the-art. For Dutch, we also outperform an off-the-shelf spelling correction tool on manually annotated clinical records from the Antwerp University Hospital, but can offer no empirical evidence that our method counters the frequency bias of a noisy channel model in this case as well. However, both our context-sensitive model and our implementation of the noisy channel model obtain high scores on the test set, establishing a state-of-the-art for Dutch clinical spelling correction with the noisy channel model.
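A minimal, hypothetical sketch of the ranking step described above, scoring replacement candidates by a weighted cosine similarity against context-word vectors; the toy vocabulary, random vectors, and distance-decay weighting are illustrative assumptions rather than the released tool.

```python
# Illustrative candidate ranking by weighted cosine similarity (not the released tool).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_candidates(candidates, context_words, embed, decay=0.8):
    """Score each replacement candidate against the misspelling's context."""
    scores = {}
    for cand in candidates:
        v = embed[cand]
        # Context words closer to the misspelling get larger weights (assumed scheme).
        scores[cand] = sum((decay ** dist) * cosine(v, embed[w])
                           for dist, w in enumerate(context_words) if w in embed)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy vocabulary of random vectors so the example runs.
rng = np.random.default_rng(0)
embed = {w: rng.normal(size=50) for w in ["patient", "fever", "medication", "pain", "nurse"]}
print(rank_candidates(["fever", "nurse"], ["patient", "pain"], embed))
```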
Electricity Market Theory Based on Continuous Time Commodity Model ; The recent research report of the U.S. Department of Energy prompts us to re-examine the pricing theories applied in electricity market design. The theory of spot pricing is the basis of electricity market design in many countries, but it has two major drawbacks: first, it is still based on the traditional hourly scheduling/dispatch model, ignoring the crucial time continuity in electric power production and consumption and failing to treat the intertemporal constraints seriously; second, it assumes that electricity products are homogeneous within the same dispatch period and therefore cannot distinguish base, intermediate, and peak power, which have obviously different technical and economic characteristics. To overcome these shortcomings, this paper presents a continuous-time commodity model of electricity, including a spot pricing model and a load duration model. The market optimization models under the two pricing mechanisms are established with the Riemann and Lebesgue integrals respectively, and the functional optimization problems are solved by the Euler-Lagrange equation to obtain the market equilibria. The feasibility of pricing according to load duration is proved by strict mathematical derivation. Simulation results show that load duration pricing can correctly identify and value different attributes of generators, reduce the total electricity purchasing cost, and distribute profits among the power plants more equitably. The theory and methods proposed in this paper provide new ideas and a theoretical foundation for the development of electric power markets.
A Robust and Unified Framework for Estimating Heritability in Twin Studies using Generalized Estimating Equations ; The development of a complex disease is an intricate interplay of genetic and environmental factors. Heritability is defined as the proportion of total trait variance due to genetic factors within a given population. Studies with monozygotic (MZ) and dizygotic (DZ) twins allow us to estimate heritability by fitting an ACE model, which estimates the proportion of trait variance explained by additive genetic (A), common shared environment (C), and unique non-shared environmental (E) latent effects, thus helping us better understand disease risk and etiology. In this paper, we develop a flexible generalized estimating equations framework (GEE2) for fitting twin ACE models that requires minimal distributional assumptions; only the first two moments need to be correctly specified. We prove that two commonly used methods for estimating heritability, the normal ACE model (NACE) and Falconer's method, can both be fit within this unified GEE2 framework, which additionally provides robust standard errors. Although the traditional Falconer's method cannot directly adjust for covariates, we show that the corresponding GEE2 version (GEE2-Falconer) can incorporate covariate effects for both mean- and variance-level parameters (e.g., letting heritability vary by sex or age). Given non-normal data, we show that the GEE2 models attain significantly better coverage of the true heritability than the traditional NACE and Falconer's methods. Finally, we demonstrate an important scenario where the NACE model produces biased estimates of heritability while Falconer's method remains unbiased. Overall, we recommend using the robust and flexible GEE2-Falconer model for estimating heritability in twin studies.
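For reference, the classical Falconer estimator that GEE2-Falconer generalizes decomposes trait variance from the MZ and DZ twin-pair correlations; the notation below is the standard textbook form, not necessarily the paper's.

```latex
\hat{a}^2 = 2\,(r_{MZ} - r_{DZ}), \qquad
\hat{c}^2 = 2\,r_{DZ} - r_{MZ}, \qquad
\hat{e}^2 = 1 - r_{MZ},
```

where r_MZ and r_DZ are the within-pair trait correlations for MZ and DZ twins, and the heritability estimate is the a^2 term.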
Semantic Code Repair using Neuro-Symbolic Transformation Networks ; We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach: first, a large set of repair candidates is generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) normalizes these scores together so they can compete against one another in a comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.
The quoter model: a paradigmatic model of the social flow of written information ; We propose a model for the social flow of information in the form of text data, which simulates the posting and sharing of short social media posts. Nodes in a graph representing a social network take turns generating words, leading to a symbolic time series associated with each node. Information propagates over the graph via a quoting mechanism, where nodes randomly copy short segments of text from each other. We characterize information flows from these texts via information-theoretic estimators, and we derive analytic relationships between model parameters and the values of these estimators. We explore and validate the model with simulations on small network motifs and larger random graphs. Tractable models such as ours that generate symbolic data while controlling the information flow allow us to test and compare measures of information flow applicable to real social media data. In particular, by choosing different network structures, we can develop test scenarios to determine whether or not measures of information flow can distinguish between true and spurious interactions, and how topological network properties relate to information flow.
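The quoting mechanism is simple enough to sketch in a few lines; the following toy simulation of one directed edge (alter → ego) uses illustrative values for the vocabulary size, quote probability, and quote length, which are not the paper's settings.

```python
# Toy simulation of the quoting mechanism for a single directed edge (illustrative).
import random

def quoter_model(T=200, vocab_size=50, q=0.3, quote_len=5, seed=1):
    random.seed(seed)
    ego, alter = [], []
    for _ in range(T):
        alter.append(random.randrange(vocab_size))    # alter writes a new word
        if random.random() < q and len(alter) >= quote_len:
            # ego quotes a random contiguous segment of alter's past text
            start = random.randrange(len(alter) - quote_len + 1)
            ego.extend(alter[start:start + quote_len])
        else:
            ego.append(random.randrange(vocab_size))  # ego writes a new word
    return ego, alter  # two symbolic time series for information-flow estimators

ego, alter = quoter_model()
print(len(ego), len(alter))
```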
Modelling stochastic resonance in humans: the influence of lapse rate ; Adding noise to a sensory signal generally decreases human performance. However, noise can also improve performance, due to a process called stochastic resonance (SR). This paradoxical effect may be exploited in psychophysical experiments to provide additional insights into how the sensory system deals with noise. Here, I develop a model for stochastic resonance to study the influence of noise on human perception, in which the biological parameter of 'lapse rate' is included. I show that the inclusion of lapse rate allows for the occurrence of stochastic resonance in terms of the performance metric d'. At the same time, I show that high levels of lapse rate cause stochastic resonance to disappear. It is also shown that noise generated in the brain (i.e., internal noise) may obscure any effect of stochastic resonance in experimental settings. I further relate the model to a standard equivalent noise model, the linear amplifier model, and show that the lapse rate can function to scale the threshold-versus-noise (TvN) curve, similar to the efficiency parameter in equivalent noise (EN) models. Therefore, lapse rate provides a psychophysical explanation for reduced efficiency in EN paradigms. Furthermore, I note that ignoring lapse rate may lead to an overestimation of internal noise in equivalent noise paradigms. Overall, describing stochastic resonance in terms of signal detection theory, with the inclusion of lapse rate, may provide valuable new insights into how human performance depends on internal and external noise.
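One common way a lapse rate λ enters such signal-detection models is as a fixed probability of a stimulus-independent response, so that the observed proportion correct mixes guessing with the ideal observer's performance; the paper's exact parameterization may differ from this standard form.

```latex
p_{\mathrm{obs}} = \lambda\, g + (1 - \lambda)\, p_{\mathrm{ideal}}(d'),
```

where g is the chance level of the task (e.g., 1/2 in a two-alternative forced choice) and p_ideal(d') is the ideal observer's proportion correct at sensitivity d'.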
Nonexchangeable random partition models for microclustering ; Many popular random partition models, such as the Chinese restaurant process and its two-parameter extension, fall in the class of exchangeable random partitions and have found wide applicability in model-based clustering, population genetics, ecology, and network analysis. While the exchangeability assumption is sensible in many cases, it has some strong implications. In particular, Kingman's representation theorem implies that the size of the clusters necessarily grows linearly with the sample size; this feature may be undesirable for some applications, as recently pointed out by Miller et al. (2015). We present here a flexible class of nonexchangeable random partition models which are able to generate partitions whose cluster sizes grow sublinearly with the sample size, and where the growth rate is controlled by one parameter. Along with this result, we provide the asymptotic behaviour of the number of clusters of a given size, and show that the model can exhibit a power-law behavior, controlled by another parameter. The construction is based on completely random measures and a Poisson embedding of the random partition, and inference is performed using a Sequential Monte Carlo algorithm. Additionally, we show how the model can also be used directly to generate sparse multigraphs with power-law degree distributions and degree sequences with sublinear growth. Finally, experiments on real datasets emphasize the usefulness of the approach compared to a two-parameter Chinese restaurant process.
Revival of the Deser-Woodard nonlocal gravity model: Comparison of the original nonlocal form and a localized formulation ; We examine the origin of two opposite results for the growth of perturbations in the Deser-Woodard (DW) nonlocal gravity model. One group previously analyzed the model in its original nonlocal form and showed that the growth of structure in the DW model is enhanced compared to general relativity (GR), and thus concluded that the model was ruled out. Recently, however, another group reanalyzed it by localizing the model and found that the growth in their localized version is suppressed even compared to that in GR. The question was whether the discrepancy originates from an intrinsic difference between the nonlocal and localized formulations or is due to their different implementations of the subhorizon limit. We show that the nonlocal and local formulations give the same solutions for the linear perturbations as long as the initial conditions are set the same. The different implementations of the subhorizon limit lead to different transient behaviors of some perturbation variables; however, they do not much affect the growth of matter perturbations at subhorizon scales. In the meantime, we also report an error in the numerical calculation code of the former group and verify that, after fixing the error, the nonlocal version also gives the suppressed growth. Finally, we discuss two alternative definitions of the effective gravitational constant taken by the two groups and some open problems.
Calibration Concordance for Astronomical Instruments via Multiplicative Shrinkage ; Calibration data are often obtained by observing several well-understood objects simultaneously with multiple instruments, such as satellites for measuring astronomical sources. Analyzing such data and obtaining proper concordance among the instruments is challenging when the physical source models are not well understood, when there are uncertainties in known physical quantities, or when data quality varies in ways that cannot be fully quantified. Furthermore, the number of model parameters increases with both the number of instruments and the number of sources. Thus, concordance of the instruments requires careful modeling of the mean signals, the intrinsic source differences, and measurement errors. In this paper, we propose a log-Normal hierarchical model and a more general log-t model that respect the multiplicative nature of the mean signals via a half-variance adjustment, yet permit imperfections in the mean modeling to be absorbed by residual variances. We present analytical solutions in the form of power shrinkage in special cases and develop reliable MCMC algorithms for general cases. We apply our method to several data sets obtained with a variety of X-ray telescopes such as Chandra. We demonstrate that our method provides helpful and practical guidance for astrophysicists when adjusting for disagreements among instruments.
Route-cost-assignment with joint user and operator behavior as a many-to-one stable matching assignment game ; We propose a generalized market equilibrium model using assignment game criteria for evaluating transportation systems that consist of both operators' and users' decisions. The model finds stable pricing, in terms of generalized costs, and matches between user populations in a network and a set of routes with line capacities. The proposed model gives a set of stable outcomes instead of a single point pricing, which allows operators to design ticket pricing, routes/schedules that impact access/egress, shared policies that impact wait/transfer costs, etc., based on a desired mechanism or policy. The set of stable outcomes is proven to be convex, from which assignment-dependent unique user-optimal and operator-optimal outcomes can be obtained. Different user groups can benefit from using this model in a prescriptive manner or within a sequential design process. We look at several different examples to test our model: small examples of fixed transit routes and a case study using a small subset of taxi data in NYC. The case study illustrates how one can use the model to evaluate a policy that can require passengers to walk up to 1 block away to meet with a shared taxi without turning away passengers.
Gaussian Process Regression for Arctic Coastal Erosion Forecasting ; Arctic coastal morphology is governed by multiple factors, many of which are affected by climatological changes. As the season length for shorefast ice decreases and temperatures warm permafrost soils, coastlines are more susceptible to erosion from storm waves. Such coastal erosion is a concern, since the majority of population centers and infrastructure in the Arctic are located near the coasts. Stakeholders and decision makers increasingly need models capable of scenario-based predictions to assess and mitigate the effects of coastal morphology on infrastructure and land use. Our research uses Gaussian process models to forecast Arctic coastal erosion along the Beaufort Sea near Drew Point, AK. Gaussian process regression is a data-driven modeling methodology capable of extracting patterns and trends from data-sparse environments such as remote Arctic coastlines. To train our model, we use annual coastline positions and nearshore summer temperature averages from existing datasets and extend these data by extracting additional coastlines from satellite imagery. We combine our calibrated models with future climate models to generate a range of plausible future erosion scenarios. Our results show that the Gaussian process methodology substantially improves yearly predictions compared to linear and nonlinear least squares methods, and is capable of generating detailed forecasts suitable for use by decision makers.
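A minimal sketch of the kind of GP-regression forecast described above, using scikit-learn with synthetic placeholder data standing in for the coastline positions and summer temperatures; the kernel choice and input variables are illustrative assumptions only.

```python
# Minimal sketch of a GP-regression forecast of shoreline position from year and
# summer temperature; the data here are synthetic placeholders, not the study's.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)
years = np.arange(2000, 2020, dtype=float)
temps = 3.0 + 0.05 * (years - 2000) + rng.normal(0, 0.3, years.size)
# Synthetic "coastline position" retreating with time and temperature.
position = -2.0 * (years - 2000) - 5.0 * temps + rng.normal(0, 2.0, years.size)

X = np.column_stack([years, temps])
kernel = ConstantKernel() * RBF(length_scale=[10.0, 1.0]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, position)

# Forecast under a hypothetical future climate scenario, with uncertainty.
future = np.column_stack([np.arange(2020, 2031, dtype=float),
                          np.linspace(4.0, 5.0, 11)])
mean, std = gp.predict(future, return_std=True)
print(mean[:3], std[:3])
```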
Fracton Models on General Three-Dimensional Manifolds ; Fracton models, a collection of exotic gapped lattice Hamiltonians recently discovered in three spatial dimensions, contain some 'topological' features: they support fractional bulk excitations (dubbed fractons), and a ground state degeneracy that is robust to local perturbations. However, because previous fracton models have only been defined and analyzed on a cubic lattice with periodic boundary conditions, it is unclear to what extent a notion of topology is applicable. In this paper, we demonstrate that the X-cube model, a prototypical type-I fracton model, can be defined on general three-dimensional manifolds. Our construction revolves around the notion of a singular compact total foliation of the spatial manifold, which constructs a lattice from intersecting stacks of parallel surfaces called leaves. We find that the ground state degeneracy depends on the topology of the leaves and the pattern of leaf intersections. We further show that such a dependence can be understood from a renormalization group transformation for the X-cube model, wherein the system size can be changed by adding or removing 2D layers of topological states. Our results lead to an improved definition of the fracton phase and bring to the fore the topological nature of fracton orders.
A Model of the Collapse and Evaporation of Charged Black Holes ; In this paper, a natural generalization of the KMY model is proposed for the evaporation of charged black holes. Within the proposed model, the back reaction of Hawking radiation is considered. More specifically, we consider the equation G_{μν} = 8π⟨T_{μν}⟩, in which the matter content ⟨T_{μν}⟩ is assumed spherically symmetric. With this equation of motion, the asymptotic behavior of the model is analyzed. Two kinds of matter content are taken into consideration in this paper. In the first case (the thin-shell model), the infalling matter is simulated by a null-like charged sphere collapsing into its center. In the second case, we consider a continuous distribution of spherically symmetric infalling null-like charged matter. It is simulated by taking the continuous limit of many concentric spheres collapsing into the center. We find that in the thin-shell case, an event horizon forms and the shell passes through the horizon before becoming extremal, provided that it is not initially near-extremal. In the case of a continuous matter distribution, we consider explicitly an extremal center covered by neutral infalling matter and find that the event horizon also forms. The black hole itself will become near-extremal eventually, leaving possibly a non-electromagnetic energy residue less than the order of ℓ_p^4 e_0^3. The details of the behavior of these models are explicitly worked out in this paper.
Learning from Mutants: Using Code Mutation to Learn and Monitor Invariants of a Cyber-Physical System ; Cyber-physical systems (CPS) consist of sensors, actuators, and controllers all communicating over a network; if any subset becomes compromised, an attacker could cause significant damage. With access to data logs and a model of the CPS, the physical effects of an attack could potentially be detected before any damage is done. Manually building a model that is accurate enough in practice, however, is extremely difficult. In this paper, we propose a novel approach for constructing models of CPS automatically, by applying supervised machine learning to data traces obtained after systematically seeding their software components with faults (mutants). We demonstrate the efficacy of this approach on the simulator of a real-world water purification plant, presenting a framework that automatically generates mutants, collects data traces, and learns an SVM-based model. Using cross-validation and statistical model checking, we show that the learnt model characterises an invariant physical property of the system. Furthermore, we demonstrate the usefulness of the invariant by subjecting the system to 55 network and code-modification attacks, and showing that it can detect 85% of them from the data logs generated at runtime.
Speech Dereverberation Based on Integrated Deep and Ensemble Learning Algorithm ; Reverberation, which is generally caused by sound reflections from walls, ceilings, and floors, can result in severe performance degradation of acoustic applications. Due to a complicated combination of attenuation and time-delay effects, the reverberation property is difficult to characterize, and it remains a challenging task to effectively retrieve anechoic speech signals from reverberant ones. In the present study, we propose a novel integrated deep and ensemble learning algorithm (IDEA) for speech dereverberation. The IDEA consists of offline and online phases. In the offline phase, we train multiple dereverberation models, each aiming to precisely dereverberate speech signals in a particular acoustic environment; then a unified fusion function is estimated that aims to integrate the information of the multiple dereverberation models. In the online phase, an input utterance is first processed by each of the dereverberation models. The outputs of all models are integrated accordingly to generate the final anechoic signal. We evaluated the IDEA on designed acoustic environments, including both matched and mismatched conditions between the training and testing data. Experimental results confirm that the proposed IDEA outperforms a single deep-neural-network-based dereverberation model with the same model architecture and training data.
Improving Accuracy of Electrochemical Capacitance and Solvation Energetics in First-Principles Calculations ; Reliable first-principles calculations of electrochemical processes require accurate prediction of the interfacial capacitance, a challenge for current computationally efficient continuum solvation methodologies. We develop a model for the double layer of a metallic electrode that reproduces the features of the experimental capacitance of Ag(100) in a non-adsorbing, aqueous electrolyte, including a broad hump in the capacitance near the potential of zero charge (PZC), and a dip in the capacitance under conditions of low ionic strength. Using this model, we identify the necessary characteristics of a solvation model suitable for first-principles electrochemistry of metal surfaces in non-adsorbing, aqueous electrolytes: dielectric and ionic nonlinearity, and a dielectric-only region at the interface. The dielectric nonlinearity, caused by the saturation of the dipole rotational response in water, creates the capacitance hump, while ionic nonlinearity, caused by the compactness of the diffuse layer, generates the capacitance dip seen at low ionic strength. We show that none of the previously developed solvation models simultaneously meet all these criteria. We design the Nonlinear Electrochemical Soft-Sphere solvation model (NESS), which both captures the capacitance features observed experimentally and serves as a general-purpose continuum solvation model.
Remanufacturing cost analysis under uncertain core quality and return conditions: extreme and non-extreme scenarios ; Uncertainties in core quality condition, return quantity, and timing can propagate and accumulate in process cost and complicate cost assessments. However, regardless of cost assessment complexities, accurate cost models are required for successful remanufacturing operation management. In this paper, the joint effects of core quality condition, return quantity, and timing on remanufacturing cost under normal and extreme return conditions are analyzed. To conduct this analysis, a novel multivariate stochastic model called the Stochastic Cost of Remanufacturing Model (SCoRM) is developed. In building SCoRM, a Hybrid Pareto Distribution (HPD), a Bernoulli process, and a polynomial cost function are employed. It is discussed that the core return process can be characterized as a Discrete Time Markov Chain (DTMC). In a case study, SCoRM is applied to assess remanufacturing costs of steam traps of a chemical complex. Its accuracy is analyzed, and the variation of SCoRM in predictive tasks is assessed by a bootstrapping technique. Through this variation analysis, the best- and worst-case cost scenarios are determined. Finally, to generate comparative insights regarding the predictive performance of SCoRM, the model is compared to artificial neural network, support vector machine, generalized additive model, and random forest algorithms. Results indicate that SCoRM can be efficiently utilized to analyze remanufacturing cost. Keywords: Remanufacturing, extreme value theory, hybrid Pareto distribution, stochastic model.
GraphVar 2.0: A user-friendly toolbox for machine learning on functional connectivity measures ; Background: We previously presented GraphVar as a user-friendly MATLAB toolbox for comprehensive graph analyses of functional brain connectivity. Here we introduce a comprehensive extension of the toolbox allowing users to seamlessly explore easily customizable decoding models across functional connectivity measures as well as additional features. New Method: GraphVar 2.0 provides machine learning (ML) model construction, validation, and exploration. Machine learning can be performed across any combination of network measures and additional variables, allowing for flexibility in neuroimaging applications. Results: In addition to previously integrated functionalities, such as network construction and graph-theoretical analyses of brain connectivity with a high-speed general linear model (GLM), users can now perform customizable ML across connectivity matrices, network metrics, and additionally imported variables. The new extension also provides parametric and nonparametric testing of classifier and regressor performance, data export, figure generation, and high-quality export. Comparison with existing methods: Compared to other existing toolboxes, GraphVar 2.0 offers (1) comprehensive customization, (2) an all-in-one user-friendly interface, (3) customizable model design and manual hyperparameter entry, (4) interactive results exploration and data export, (5) automated cueing for modelling multiple outcome variables within the same session, and (6) an easy-to-follow introductory review. Conclusions: GraphVar 2.0 allows comprehensive, user-friendly exploration of encoding (GLM) and decoding (ML) modelling approaches on functional connectivity measures, making big data neuroscience readily accessible to a broader audience of neuroimaging investigators.
Collaborative Metric Learning Recommendation System: Application to Theatrical Movie Releases ; Product recommendation systems are important for major movie studios during the movie greenlight process and as part of machine learning personalization pipelines. Collaborative Filtering (CF) models have proved to be effective at powering recommender systems for online streaming services with explicit customer feedback data. CF models do not perform well in scenarios in which feedback data is not available, in cold start situations like new product launches, and in situations with markedly different customer tiers (e.g., high-frequency customers vs. casual customers). Generative natural language models that create useful theme-based representations of an underlying corpus of documents can be used to represent new product descriptions, like new movie plots. When combined with CF, they have been shown to increase performance in cold start situations. Outside of those cases, though, in which explicit customer feedback is available, recommender engines must rely on binary purchase data, which materially degrades performance. Fortunately, purchase data can be combined with product descriptions to generate meaningful representations of products and customer trajectories in a convenient product space in which proximity represents similarity. Learning to measure the distance between points in this space can be accomplished with a deep neural network that trains on customer histories and on dense vectorizations of product descriptions. We developed a system based on Collaborative Deep Metric Learning (CML) to predict the purchase probabilities of new theatrical releases. We trained and evaluated the model using a large dataset of customer histories, and tested the model for a set of movies that were released outside of the training window. Initial experiments show gains relative to models that do not train on collaborative preferences.
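The core of a CML-style recommender is a metric-learning objective that pulls a customer's embedding closer to purchased items than to non-purchased ones by a margin; the sketch below shows that hinge loss on made-up embeddings and is only a schematic of the idea, not the developed system.

```python
# Schematic triplet hinge loss used in metric-learning recommenders (illustrative).
import numpy as np

def triplet_hinge_loss(user, pos_item, neg_item, margin=1.0):
    d_pos = np.sum((user - pos_item) ** 2)   # distance to a purchased item
    d_neg = np.sum((user - neg_item) ** 2)   # distance to a non-purchased item
    return max(0.0, margin + d_pos - d_neg)  # push d_neg beyond d_pos + margin

rng = np.random.default_rng(0)
u, i_pos, i_neg = rng.normal(size=(3, 32))   # toy 32-dimensional embeddings
print(triplet_hinge_loss(u, i_pos, i_neg))
```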
Mortality data reliability in an internal model ; In this paper, we discuss the impact of some mortality data anomalies on an internal model capturing longevity risk in the Solvency 2 framework. In particular, we are concerned with abnormal cohort effects such as those for the generations born in 1919 and 1920, for which the period tables provided by the Human Mortality Database show particularly low and high mortality rates, respectively. To provide corrected tables for the three countries of interest here (France, Italy, and West Germany), we use the approach developed by Boumezoued (2016) for the countries for which the method applies (France and Italy), and provide an extension of the method for West Germany, as monthly fertility histories are not sufficient to cover the generations of interest. These mortality tables are crucial inputs to the stochastic mortality models forecasting future scenarios, from which the extreme 0.5% longevity improvement can be extracted, allowing for the calculation of the Solvency Capital Requirement (SCR). More precisely, to assess the impact of such anomalies in the Solvency II framework, we use a simplified internal model based on three usual stochastic models to project mortality rates in the future, combined with a closure table methodology for older ages. Correcting this bias obviously improves the data quality of the mortality inputs, which is of paramount importance today, and slightly decreases the capital requirement. Overall, the longevity risk assessment remains stable, as well as the selection of the stochastic mortality model. As a collateral gain of this data quality improvement, the more regular estimated parameters allow for new insights and a refined assessment regarding longevity risk.
Dark energy from attractors: phenomenology and observational constraints ; The possibility of linking inflation and late cosmic accelerated expansion using the α-attractor models has received increasing attention due to their physical motivation. In the early universe, α-attractors provide an inflationary mechanism compatible with Planck satellite CMB observations and predictive for future gravitational wave CMB modes. Additionally, α-attractors can be written as quintessence models with a potential that connects a power-law regime with a plateau or uplifted exponential, allowing a late cosmic accelerated expansion that can mimic behavior near a cosmological constant. In this paper we study a generalized dark energy α-attractor model. We thoroughly investigate its phenomenology, including the role of all model parameters and the possibility of large-scale tachyonic instability clustering. We verify the relation that 1 + w ~ 1/α while the gravitational wave power r ~ α, so these models predict that a signature should appear either in the primordial B-modes or in a late-time deviation from a cosmological constant. We constrain the model parameters with current datasets, including the cosmic microwave background (Planck 2015 angular power spectrum, polarization, and lensing), baryon acoustic oscillations (BOSS DR12), and supernovae (Pantheon compressed). Our results show that expansion histories close to a cosmological constant exist in large regions of the parameter space, not requiring a fine-tuning of the parameters or initial conditions.
Linearized Flux Evolution (LiFE): A Technique for Rapidly Adapting Fluxes from Full-Physics Radiative Transfer Models ; Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state: the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply it to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
Optimal Designs for the Generalized Partial Credit Model ; Analyzing ordinal data becomes increasingly important in psychology, especially in the context of item response theory. The generalized partial credit model (GPCM) is probably the most widely used ordinal model and finds application in many large-scale educational assessment studies such as PISA. In the present paper, optimal test designs are investigated for estimating persons' abilities with the GPCM for calibrated tests, when item parameters are known from previous studies. We derive that local optimality may be achieved by assigning nonzero probability only to the first and last category, independently of a person's ability. That is, when using such a design, the GPCM reduces to the dichotomous 2PL model. Since locally optimal designs require the true ability to be known, we consider alternative Bayesian design criteria using weight distributions over the ability parameter space. For symmetric weight distributions, we derive necessary conditions for the optimal one-point design of two response categories to be Bayes optimal. Furthermore, we discuss examples of common symmetric weight distributions and investigate in which cases the necessary conditions are also sufficient. Since the 2PL model is a special case of the GPCM, all of these results hold for the 2PL model as well.
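For reference, the GPCM specifies the probability that person i with ability θ_i scores in category k of item j (with discrimination a_j and thresholds b_{jv}) as below; this is the standard form of the model rather than notation taken from the paper.

```latex
P(X_{ij} = k \mid \theta_i) \;=\;
\frac{\exp\!\left(\sum_{v=1}^{k} a_j(\theta_i - b_{jv})\right)}
     {\sum_{c=0}^{m_j} \exp\!\left(\sum_{v=1}^{c} a_j(\theta_i - b_{jv})\right)},
\qquad k = 0, 1, \ldots, m_j,
```

with the convention that the empty sum (c = 0) equals zero; for m_j = 1 this reduces to the dichotomous 2PL model.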
HINT: A Hierarchical Independent Component Analysis Toolbox for Investigating Brain Functional Networks using Neuroimaging Data ; Independent component analysis (ICA) is a popular tool for investigating brain organization in neuroscience research. In fMRI studies, an important goal is to study how brain networks are modulated by subjects' clinical and demographic variables. Existing ICA methods and toolboxes do not incorporate subjects' covariate effects in ICA estimation of brain networks, which potentially leads to loss of accuracy and statistical power in detecting brain network differences between subjects' groups. We introduce a MATLAB toolbox, HINT (Hierarchical INdependent component analysis Toolbox), that provides hierarchical covariate-adjusted ICA (hc-ICA) for modeling and testing covariate effects and generates model-based estimates of brain networks at both the population and individual level. HINT provides a user-friendly MATLAB GUI that allows users to easily load images, specify covariate effects, monitor model estimation via an EM algorithm, specify hypothesis tests, and visualize results. HINT also has a command line interface which allows users to conveniently run and reproduce the analysis with a script. HINT implements a new multilevel probabilistic ICA model for group ICA. It provides a statistically principled ICA modeling framework for investigating covariate effects on brain networks. HINT can also generate and visualize model-based network estimates for user-specified subject groups, which greatly facilitates group comparisons.
Styling with Attention to Details ; Fashion, by its very nature, is driven by style. In this paper, we propose a method that takes into account style information to complete a given set of selected fashion items with a complementary fashion item. Complementary items are those items that can be worn along with the selected items according to the style. Addressing this problem facilitates automatically generating stylish fashion ensembles, leading to a richer shopping experience for users. Recently, there has been a surge of online social websites where fashion enthusiasts post the outfit of the day and other users can like and comment on them. These posts contain a goldmine of information about style. In this paper, we exploit these posts to train a deep neural network which captures style in an automated manner. We pose the problem of predicting complementary fashion items as a sequence-to-sequence problem, where the input is the selected set of fashion items and the output is a complementary fashion item based on the style information learned by the model. We use the encoder-decoder architecture to solve this problem of completing the set of fashion items. We evaluate the goodness of the proposed model through a variety of experiments. We empirically observe that our proposed model outperforms a competitive baseline like the Apriori algorithm by 28% in terms of accuracy for top-1 recommendation for completing the fashion ensemble. We also perform retrieval-based experiments to understand the ability of the model to learn style and rank the complementary fashion items, and find that using attention in our encoder-decoder model helps in improving the mean reciprocal rank by 24%. Qualitatively, we find that the complementary fashion items generated by our proposed model are richer than those of the Apriori algorithm.
Transfer Learning for Clinical Time Series Analysis using Recurrent Neural Networks ; Deep neural networks have shown promising results for various clinical prediction tasks such as diagnosis, mortality prediction, predicting duration of stay in hospital, etc. However, training deep networks such as those based on Recurrent Neural Networks (RNNs) requires large labeled data, high computational resources, and significant hyperparameter tuning effort. In this work, we investigate to what extent transfer learning can address these issues when using deep RNNs to model multivariate clinical time series. We consider transferring the knowledge captured in an RNN trained on several source tasks simultaneously, using a large labeled dataset, to build the model for a target task with limited labeled data. An RNN pre-trained on several tasks provides generic features, which are then used to build simpler linear models for new target tasks without training task-specific RNNs. For evaluation, we train a deep RNN to identify several patient phenotypes on time series from the MIMIC-III database, and then use the features extracted using that RNN to build classifiers for identifying previously unseen phenotypes, and also for a seemingly unrelated task of in-hospital mortality prediction. We demonstrate that (i) models trained on features extracted using the pre-trained RNN outperform or, in the worst case, perform as well as task-specific RNNs; (ii) the models using features from the pre-trained model are more robust to the size of labeled data than task-specific RNNs; and (iii) features extracted using the pre-trained RNN are generic enough and perform better than typical statistical hand-crafted features.
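The transfer-learning recipe above amounts to freezing the pretrained RNN, treating its hidden representation as a fixed feature map, and fitting a simple linear classifier for the new task; below is a hypothetical sketch with a stand-in feature extractor and synthetic data in place of MIMIC-III.

```python
# Sketch of the transfer-learning recipe: frozen feature extractor + linear model.
# The extractor here is a placeholder, not an actual pretrained RNN.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pretrained_rnn_features(time_series):
    """Placeholder for the frozen RNN: returns a fixed-length feature vector."""
    return np.concatenate([time_series.mean(axis=0), time_series.std(axis=0)])

rng = np.random.default_rng(0)
# 200 synthetic patients, each a (T=48, D=10) multivariate time series.
series = rng.normal(size=(200, 48, 10))
labels = rng.integers(0, 2, size=200)            # e.g., in-hospital mortality
features = np.stack([pretrained_rnn_features(s) for s in series])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("train accuracy:", clf.score(features, labels))
```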
The fully frustrated XY model revisited: A new universality class ; The two-dimensional (2d) fully frustrated Planar Rotator model on a square lattice has been the subject of a long controversy due to the simultaneous Z2 and O(2) symmetry existing in the model. The O(2) symmetry is responsible for the Berezinskii-Kosterlitz-Thouless (BKT) transition, while the Z2 symmetry drives an Ising-like transition. There are arguments supporting two possible scenarios, one advocating that the loss of Ising and BKT order takes place at the same temperature Tt, and the other that the Z2 transition occurs at a higher temperature than the BKT one. In the first case an immediate consequence is that this model is in a new universality class. Most studies make use of some order parameter, such as the stiffness, Binder's cumulant, or the magnetization, to obtain the transition temperature. Considering that the transition temperatures are obtained, in general, as an average over the estimates taken from several of those quantities, it is difficult to decide whether they describe the same or slightly separate transitions. In this paper we describe an iterative method based on the knowledge of the complex zeros of the energy probability distribution to study the critical behavior of the system. The method is general, with advantages over most conventional techniques, since it does not need to identify any order parameter a priori. The critical temperature and exponents can be obtained with good precision. We apply the method to study the fully frustrated Planar Rotator (PR) and the Anisotropic Heisenberg (XY) models in two dimensions. We show that both models are in a new universality class, with T_PR = 0.45286(32) and T_XY = 0.36916(16), and the transition exponent ν = 0.824(30) (1/ν = 1.224).
Random-cluster dynamics in Z^2: rapid mixing with general boundary conditions ; The random-cluster model with parameters (p, q) is a random graph model that generalizes bond percolation (q = 1) and the Ising and Potts models (q ≥ 2). We study its Glauber dynamics on n × n boxes Λ_n of the integer lattice graph Z^2, where the model exhibits a sharp phase transition at p = p_c(q). Unlike traditional spin systems like the Ising and Potts models, the random-cluster model has non-local interactions. Long-range interactions can be imposed as external connections in the boundary of Λ_n, known as boundary conditions. For select boundary conditions that do not carry long-range information (namely, wired and free), Blanca and Sinclair proved that when q > 1 and p ≠ p_c(q), the Glauber dynamics on Λ_n mixes in optimal O(n^2 log n) time. In this paper, we prove that this mixing time is polynomial in n for every boundary condition that is realizable as a configuration on Z^2 \ Λ_n. We then use this to prove near-optimal Õ(n^2) mixing time for "typical" boundary conditions. As a complementary result, we construct classes of non-realizable (non-planar) boundary conditions inducing slow, stretched-exponential mixing at p ≪ p_c(q).
Mathematical Discovery of Natural Laws in Biomedical Sciences: A New Methodology ; As biomedical sciences discover new layers of complexity in the mechanisms of life and disease, mathematical models trying to catch up with these developments become mathematically intractable. As a result, in the grand scheme of things, mathematical models have so far played an auxiliary role in biomedical sciences. We propose a new methodology allowing mathematical modeling to give, in certain cases, definitive answers to systemic biomedical questions that elude empirical resolution. Our methodology is based on two ideas: (1) employing mathematical models that are firmly rooted in established biomedical knowledge yet so general that they can account for any, or at least many, biological mechanisms, both known and unknown; (2) finding model parameters whose likelihood-maximizing values are independent of observations (the existence of such parameters implies that the model must not meet the regularity conditions required for the consistency of the maximum likelihood estimator). These universal parameter values may reveal general patterns, which we call natural laws, in biomedical processes. We illustrate this approach with the discovery of a clinically important natural law governing cancer metastasis. Specifically, we found that under minimal, and fairly realistic, mathematical and biomedical assumptions, the likelihood-maximizing scenario of metastatic cancer progression in an individual patient is invariably the same: complete suppression of metastatic growth before primary tumor resection followed by an abrupt growth acceleration after surgery. This scenario is widely observed in clinical practice and supported by a wealth of experimental studies on animals and clinical case reports published over the last 110 years. The above most likely scenario does not preclude other possibilities, e.g., metastases may start aggressive growth before primary tumor resection or remain dormant after surgery.
Deep Smoothing of the Implied Volatility Surface ; We present a neural network (NN) approach to fit and predict implied volatility surfaces (IVSs). Atypically for standard NN applications, financial industry practitioners use such models equally to replicate market prices and to value other financial instruments. In other words, low training losses are as important as generalization capabilities. Importantly, IVS models need to generate realistic arbitrage-free option prices, meaning that no portfolio can lead to risk-free profits. We propose an approach guaranteeing the absence of arbitrage opportunities by penalizing the loss using soft constraints. Furthermore, our method can be combined with standard IVS models in quantitative finance, thus providing an NN-based correction when such models fail at replicating observed market prices. This lets practitioners use our approach as a plug-in on top of classical methods. Empirical results show that this approach is particularly useful when only sparse or erroneous data are available. We also quantify the uncertainty of the model predictions in regions with few or no observations. We further explore how deeper NNs improve over shallower ones, as well as other properties of the network architecture. We benchmark our method against standard IVS models. By evaluating our method on both training and testing sets, we highlight both its capacity to reproduce observed prices and to predict new ones.
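The soft-constraint idea can be illustrated by adding penalties for violations of standard static no-arbitrage conditions (call prices non-increasing and convex in strike, non-decreasing in maturity) to the data-fitting loss; the grid, penalty weight, and toy surface below are illustrative assumptions, not the paper's architecture or loss.

```python
# Schematic soft-constraint penalty for static no-arbitrage conditions (illustrative).
import numpy as np

def arbitrage_penalty(call_prices, lam=10.0):
    """call_prices: array of shape (n_maturities, n_strikes) from the model."""
    dK = np.diff(call_prices, axis=1)          # should be <= 0 (monotone in strike)
    d2K = np.diff(call_prices, n=2, axis=1)    # should be >= 0 (convex in strike)
    dT = np.diff(call_prices, axis=0)          # should be >= 0 (calendar spread)
    return lam * (np.clip(dK, 0, None).sum()
                  + np.clip(-d2K, 0, None).sum()
                  + np.clip(-dT, 0, None).sum())

def total_loss(model_prices, market_prices):
    fit = np.mean((model_prices - market_prices) ** 2)   # data-fitting term
    return fit + arbitrage_penalty(model_prices)         # plus soft constraints

grid = np.linspace(1.0, 0.2, 12).reshape(3, 4)   # toy price surface (3 maturities x 4 strikes)
print(total_loss(grid, grid + 0.01))
```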
Bayesian fusion and multimodal DCM for EEG and fMRI ; This paper asks whether integrating multimodal EEG and fMRI data offers a better characterisation of functional brain architectures than either modality alone. This evaluation rests upon a dynamic causal model that generates both EEG and fMRI data from the same neuronal dynamics. We introduce the use of Bayesian fusion to provide informative empirical neuronal priors derived from dynamic causal modelling (DCM) of EEG data for subsequent DCM of fMRI data. To illustrate this procedure, we generated synthetic EEG and fMRI time series for a mismatch negativity or auditory oddball paradigm, using biologically plausible model parameters (i.e., posterior expectations from a DCM of empirical, open-access EEG data). Using model inversion, we found that Bayesian fusion provided a substantial improvement in marginal likelihood or model evidence, indicating a more efficient estimation of model parameters, in relation to inverting fMRI data alone. We quantified the benefits of multimodal fusion with the information gain pertaining to neuronal and haemodynamic parameters, as measured by the Kullback-Leibler divergence between their prior and posterior densities. Remarkably, this analysis suggested that EEG data can improve estimates of haemodynamic parameters; thereby furnishing proof-of-principle that Bayesian fusion of EEG and fMRI is necessary to resolve conditional dependencies between neuronal and haemodynamic estimators. These results suggest that Bayesian fusion may offer a useful approach that exploits the complementary temporal (EEG) and spatial (fMRI) precision of different data modalities. We envisage the procedure could be applied to any multimodal dataset that can be explained by a DCM with a common neuronal parameterisation.
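The Kullback-Leibler information gain referred to above has a closed form when, as is standard in DCM, prior and posterior densities are (Laplace-approximated) Gaussians. A minimal numerical sketch, with illustrative function names, follows.

```python
import numpy as np

def gaussian_kl(mu_post, cov_post, mu_prior, cov_prior):
    """KL(posterior || prior) for multivariate Gaussian densities.

    One way to quantify the information gain about a set of parameters
    (e.g., neuronal or haemodynamic) afforded by the data.
    """
    k = len(mu_prior)
    cov_prior_inv = np.linalg.inv(cov_prior)
    diff = mu_prior - mu_post
    return 0.5 * (
        np.trace(cov_prior_inv @ cov_post)
        + diff @ cov_prior_inv @ diff
        - k
        + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post))
    )
```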
Automated crater shape retrieval using weakly-supervised deep learning ; Crater ellipticity determination is a complex and time-consuming task that so far has evaded successful automation. We train a state-of-the-art computer vision algorithm to identify craters in Lunar digital elevation maps and retrieve their sizes and 2D shapes. The computational backbone of the model is Mask R-CNN, a general instance segmentation framework that detects craters in an image while simultaneously producing a mask for each crater that traces its outer rim. Our post-processing pipeline then finds the closest fitting ellipse to these masks, allowing us to retrieve the crater ellipticities. Our model is able to correctly identify 87% of known craters in the longitude range we hid from the network during training and validation (test set), while predicting thousands of additional craters not present in our training data. Manual validation of a subset of these new craters indicates that a majority of them are real, which we take as an indicator of the strength of our model in learning to identify craters, despite incomplete training data. The crater size, ellipticity, and depth distributions predicted by our model are consistent with human-generated results. The model allows us to perform a large scale search for differences in crater diameter and shape distributions between the lunar highlands and maria, and we exclude any such differences with a high statistical significance. The predicted test set catalogue and trained model are available here httpsgithub.commalidibCratersMaskRCNN.
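The post-processing step (fitting the closest ellipse to each predicted mask) can be sketched with OpenCV as below; the exact fitting procedure and ellipticity convention used in the paper may differ, and the function name and the 1 - b/a convention here are assumptions of this sketch.

```python
import cv2
import numpy as np

def crater_ellipse(mask):
    """Fit an ellipse to a binary crater mask and return a simple ellipticity.

    mask: 2D array, non-zero inside the predicted crater rim region.
    Uses the OpenCV >= 4 findContours return signature; fitEllipse needs a
    contour with at least 5 points.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    rim = max(contours, key=cv2.contourArea)          # largest outline
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(rim)   # centre, axes, angle
    ellipticity = 1.0 - min(d1, d2) / max(d1, d2)     # one common convention
    return (cx, cy), (max(d1, d2), min(d1, d2)), angle, ellipticity
```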
Superconductor versus insulator in twisted bilayer graphene ; We present a simple model that we believe captures the key aspects of the competition between superconducting and insulating states in twisted bilayer graphene. Within this model, the superconducting phase is primary, and arises at generic fillings, but is interrupted by the insulator at commensurate fillings. Importantly, the insulator forms because of electron-electron interactions, but the model is agnostic as to the superconducting pairing mechanism, which need not originate with electron-electron interactions. The model is composed of a collection of crossed one-dimensional quantum wires whose intersections form a superlattice. At each superlattice point, we place a locally superconducting puddle which can exchange Cooper pairs with the quantum wires. We analyze this model assuming weak wire-puddle and wire-wire couplings. We show that for a range of repulsive intrawire interactions, the system is superconducting at "generic" incommensurate fillings, with the superconductivity being "interrupted" by an insulating phase at commensurate fillings. We further show that the gapped insulating states at commensurate fillings give way to gapless states upon application of external Zeeman fields. These features are consistent with experimental observations in magic-angle twisted bilayer graphene despite the distinct microscopic details. We further study the full phase diagram of this model and discover that it contains several distinct correlated insulating states, which we characterize herein.
A CognitionAffect Integrated Model of Emotion ; The focus of the efforts for defining and modelling emotion is broadly shifting from classical definite marker theory to statistically context situated conceptual theory. However, the role of context processing and its interaction with the affect is still not comprehensively explored and modelled. With the help of neural decoding of functional networks, we have decoded cognitive functions for 12 different basic and complex emotion conditions. Using transfer learning in deep neural architecture, we arrived at the conclusion that the core affect is unable to provide varieties of emotions unless coupled with cortical cognitive functions such as autobiographical memory, dmn, selfreferential, social, tom and salient event detection. Following our results, in this article, we present a 'cognitionaffect integrated model of emotion' which includes many cortical and subcortical regions and their interactions. Our model suggests three testable hypotheses. First, affect and physiological sensations alone are inconsequential in defining or classifying emotions until integrated with the domaingeneral cognitive systems. Second, cognition and affect modulate each other throughout the generation of meaningful instance which is situated in the current context. And, finally, the structural and temporal hierarchies in the brain's organization and anatomical projections play an important role in emotion responses in terms of hierarchical activities and their durations. The model, along with the analytical and anatomical support, is presented. The article concludes with the future research questions.
Degenerative Adversarial NeuroImage Nets: Generating Images that Mimic Disease Progression ; Simulating images representative of neurodegenerative diseases is important for predicting patient outcomes and for validation of computational models of disease progression. This capability is valuable for secondary prevention clinical trials where outcomes and screening criteria involve neuroimaging. Traditional computational methods are limited by imposing a parametric model for atrophy and are extremely resource-demanding. Recent advances in deep learning have yielded data-driven models for longitudinal studies (e.g., face ageing) that are capable of generating synthetic images in real time. Similar solutions can be used to model trajectories of atrophy in the brain, although new challenges need to be addressed to ensure accurate disease progression modelling. Here we propose Degenerative Adversarial NeuroImage Net (DaniNet), a new deep learning approach that learns to emulate the effect of neurodegeneration on MRI by simulating atrophy as a function of age and disease progression. DaniNet uses an underlying set of Support Vector Regressors (SVRs) trained to capture the patterns of regional intensity changes that accompany disease progression. DaniNet produces whole output images, consisting of 2D MRI slices that are constrained to match regional predictions from the SVRs. DaniNet is also able to maintain the unique brain morphology of individuals. Adversarial training ensures realistic brain images and smooth temporal progression. We train our model using 9652 T1-weighted longitudinal MRIs extracted from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. We perform quantitative and qualitative evaluations on a separate test set of 1283 images (also from ADNI), demonstrating the ability of DaniNet to produce accurate and convincing synthetic images that emulate disease progression.
Kernel Methods for Surrogate Modeling ; This chapter deals with kernel methods as a special class of techniques for surrogate modeling. Kernel methods have proven to be efficient in machine learning, pattern recognition and signal analysis due to their flexibility, excellent experimental performance and elegant functional analytic background. These databased techniques provide so called kernel expansions, i.e., linear combinations of kernel functions which are generated from given inputoutput point samples that may be arbitrarily scattered. In particular, these techniques are meshless, do not require or depend on a grid, hence are less prone to the curse of dimensionality, even for highdimensional problems. In contrast to projectionbased model reduction, we do not necessarily assume a highdimensional model, but a general function that models inputoutput behavior within some simulation context. This could be some micromodel in a multiscalesimulation, some submodel in a coupled system, some initialization function for solvers, coefficient function in PDEs, etc. First, kernel surrogates can be useful if the inputoutput function is expensive to evaluate, e.g. is a result of a finite element simulation. Here, acceleration can be obtained by sparse kernel expansions. Second, if a function is available only via measurements or a few function evaluation samples, kernel approximation techniques can provide function surrogates that allow global evaluation. We present some important kernel approximation techniques, which are kernel interpolation, greedy kernel approximation and support vector regression. Pseudocode is provided for ease of reproducibility. In order to illustrate the main features, commonalities and differences, we compare these techniques on a realworld application. The experiments clearly indicate the enormous acceleration potential
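As a concrete instance of the first of the techniques named above, here is a minimal kernel-interpolation sketch with a Gaussian (RBF) kernel for scattered input-output samples. It is purely illustrative, with assumed function names and a small jitter term added for numerical stability; greedy kernel approximation and support vector regression build sparsity and robustness on top of this basic construction.

```python
import numpy as np

def fit_kernel_interpolant(X, y, length_scale=1.0):
    """Kernel interpolation with a Gaussian (RBF) kernel: solve K a = y."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * length_scale ** 2))
    return np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)  # tiny jitter

def evaluate_interpolant(X_train, coeffs, X_new, length_scale=1.0):
    """Evaluate the kernel expansion sum_i a_i k(x, x_i) at new points."""
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2)) @ coeffs
```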
Five Generic Processes for Behavior Description in Software Engineering ; Behavior modeling and software architecture specification are attracting more attention in software engineering. Describing both of them in integrated models yields numerous advantages for coping with complexity since the models are platform independent. They can be decomposed to be developed independently by experts of the respective fields, and they are highly reusable and may be subjected to formal analysis. Typically, behavior is defined as the occurrence of an action, a pattern over time, or any change in or movement of an object. In systems studies, there are many different approaches to modeling behavior, such as grounding behavior simultaneously on state transitions, natural language, and flowcharts. These different descriptions make it difficult to compare objects with each other for consistency. This paper attempts to propose some conceptual preliminaries to a definition of behavior in software engineering. The main objective is to clarify the research area concerned with system behavior aspects and to create a common platform for future research. Five generic elementary processes creating, processing, releasing, receiving, and transferring are used to form a unifying higherorder process called a thinging machine TM that is utilized as a template in modeling behavior of systems. Additionally, a TM includes memory and triggering relations among stages of processes machines. A TM is applied to many examples from the literature to examine their behavioristic aspects. The results show that a TM is a valuable tool for analyzing and modeling behavior in a system.
Sample Complexity Bounds for Influence Maximization ; Influence maximization (IM) is the problem of finding for a given $s \geq 1$ a set $S$ of $|S|=s$ nodes in a network with maximum influence. With stochastic diffusion models, the influence of a set $S$ of seed nodes is defined as the expectation of its reachability over simulations, where each simulation specifies a deterministic reachability function. Two well-studied special cases are the Independent Cascade (IC) and the Linear Threshold (LT) models of Kempe, Kleinberg, and Tardos. The influence function in stochastic diffusion is unbiasedly estimated by averaging reachability values over i.i.d. simulations. We study the IM sample complexity: the number of simulations needed to determine a $(1-\epsilon)$-approximate maximizer with confidence $1-\delta$. Our main result is a surprising upper bound of $O(s \tau \epsilon^{-2} \ln \frac{n}{\delta})$ for a broad class of models that includes IC and LT models and their mixtures, where $n$ is the number of nodes and $\tau$ is the number of diffusion steps. Generally $\tau \ll n$, so this significantly improves over the generic upper bound of $O(s n \epsilon^{-2} \ln \frac{n}{\delta})$. Our sample complexity bounds are derived from novel upper bounds on the variance of the reachability that allow for small relative error for influential sets and additive error when influence is small. Moreover, we provide a data-adaptive method that can detect and utilize fewer simulations on models where it suffices. Finally, we provide an efficient greedy design that computes a $(1-1/e-\epsilon)$-approximate maximizer from simulations and applies to any submodular stochastic diffusion model that satisfies the variance bounds.
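The estimator whose sample complexity is bounded above is the plain Monte Carlo average of reachability over i.i.d. simulations. A minimal Independent Cascade sketch (uniform activation probability for brevity; per-edge probabilities in general, and all names here are assumptions) is shown below.

```python
import random
from collections import deque

def ic_spread_once(adj, seeds, p):
    """One Independent Cascade simulation: return the set of activated nodes."""
    active, frontier = set(seeds), deque(seeds)
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return active

def estimate_influence(adj, seeds, p, num_sims=1000):
    """Unbiased influence estimate: average reachability over i.i.d. simulations."""
    return sum(len(ic_spread_once(adj, seeds, p)) for _ in range(num_sims)) / num_sims
```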
Differentially Private Deep Learning with Smooth Sensitivity ; Ensuring the privacy of sensitive data used to train modern machine learning models is of paramount importance in many areas of practice. One approach to study these concerns is through the lens of differential privacy. In this framework, privacy guarantees are generally obtained by perturbing models in such a way that specifics of data used to train the model are made ambiguous. A particular instance of this approach is through a teacher-student framework, wherein the teacher, who owns the sensitive data, provides the student with useful, but noisy, information, hopefully allowing the student model to perform well on a given task without access to particular features of the sensitive data. Because stronger privacy guarantees generally involve more significant perturbation on the part of the teacher, deploying existing frameworks fundamentally involves a tradeoff between the student's performance and the privacy guarantee. One of the most important techniques used in previous works involves an ensemble of teacher models, which return information to a student based on a noisy voting procedure. In this work, we propose a novel voting mechanism with smooth sensitivity, which we call Immutable Noisy ArgMax, that, under certain conditions, can bear very large random noising from the teacher without affecting the useful information transferred to the student. Compared with previous work, our approach improves over the state-of-the-art methods on all measures, and scales to larger tasks with both better performance and stronger privacy ($\epsilon \approx 0$). This new proposed framework can be applied with any machine learning models, and provides an appealing solution for tasks that require training on a large amount of data.
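The noisy voting aggregation at the heart of such teacher-student frameworks can be sketched as follows. This is only the generic noisy-argmax mechanism; the paper's Immutable Noisy ArgMax imposes additional conditions (presumably on how decisive the vote is) so that even very large noise leaves the transferred label unchanged. The function name and the Laplace noise choice are assumptions of this sketch.

```python
import numpy as np

def noisy_vote_label(teacher_predictions, num_classes, noise_scale=1.0, rng=None):
    """Aggregate an ensemble of teacher votes with a noisy argmax.

    teacher_predictions: iterable of class indices, one per teacher model.
    Noise is added to the vote histogram before taking the argmax, which is
    the basic device used to obtain differential privacy in teacher-student
    frameworks.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(np.asarray(teacher_predictions), minlength=num_classes)
    noisy_counts = counts + rng.laplace(0.0, noise_scale, size=num_classes)
    return int(np.argmax(noisy_counts))
```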
Kernelphase analysis aperture modeling prescriptions that minimize calibration errors ; Kernelphase is a data analysis method based on a generalization of the notion of closurephase invented in the context of interferometry, but that applies to well corrected diffraction dominated images produced by an arbitrary aperture. The linear model upon which it relies theoretically leads to the formation of observable quantities robust against residual aberrations. In practice, detection limits reported thus far seem to be dominated by systematic errors induced by calibration biases not sufficiently filtered out by the kernel projection operator. This paper focuses on the impact the initial modeling of the aperture has on these errors and introduces a strategy to mitigate them, using a more accurate aperture transmission model. The paper first uses idealized monochromatic simulations of a non trivial aperture to illustrate the impact modeling choices have on calibration errors. It then applies the outlined prescription to two distinct datasets of images whose analysis has previously been published. The use of a transmission model to describe the aperture results in a significant improvement over the previous type of analysis. The thus reprocessed datasets generally lead to more accurate results, less affected by systematic errors. As kernelphase observing programs are becoming more ambitious, accuracy in the aperture description is becoming paramount to avoid situations where contrast detection limits are dominated by systematic errors. Prescriptions outlined in this paper will benefit any attempt at exploiting kernelphase for highcontrast detection.
Irreversible thermodynamical description of warm inflationary cosmological models ; We investigate the interaction between scalar fields and radiation in the framework of warm inflationary models by using the irreversible thermodynamics of open systems with matter creationannihilation. We consider the scalar fields and radiation as an interacting two component cosmological fluid in a homogeneous, spatially flat and isotropic FriedmannRobertsonWalker FRW Universe. The thermodynamics of open systems as applied together with the gravitational field equations to the two component cosmological fluid leads to a generalization of the elementary scalar fieldradiation interaction model, which is the theoretical basis of warm inflationary models, with the decay creation pressures explicitly considered as parts of the cosmological fluid energymomentum tensor. Specific models describing coherently oscillating scalar waves, scalar fields with a constant potential, and scalar fields with a Higgs type potential are considered in detail. For each case exact and numerical solutions of the gravitational field equations with scalar fieldradiation interaction are obtained, and they show the transition from an accelerating inflationary phase to a decelerating one. The theoretical predictions of the warm inflationary scenario with irreversible matter creation are also compared in detail with the Planck 2018 observational data, and constraints on the free parameters of the model are obtained.
Holographic Spacetime, Newton's Law, and the Dynamics of Horizons ; We revisit the construction of models of quantum gravity in $d$-dimensional Minkowski space in terms of random tensor models, and correct some mistakes in our previous treatment of the subject. We find a large class of models in which the large impact parameter scattering scales with energy and impact parameter like Newton's law. The scattering amplitudes in these models describe scattering of jets of particles, and also include amplitudes for the production of highly metastable states with all the parametric properties of black holes. These models have emergent energy, momentum and angular momentum conservation laws, despite being based on time-dependent Hamiltonians. The scattering amplitudes in which no intermediate black holes are produced have a time-ordered Feynman diagram spacetime structure: local interaction vertices connected by propagation of free particles (really Sterman-Weinberg jets of particles). However, there are also amplitudes where jets collide to form large metastable objects, with all the scaling properties of black holes: energy, entropy and temperature, as well as the characteristic time scale for the decay of perturbations. We generalize the conjecture of Sekino and Susskind, to claim that all of these models are fast scramblers. The rationale for this claim is that the interactions are invariant under fuzzy subgroups of the group of volume preserving diffeomorphisms, so that they are highly nonlocal on the holographic screen. We review how this formalism resolves the Firewall Paradox.
Compositional Convolutional Neural Networks A Deep Architecture with Innate Robustness to Partial Occlusion ; Recent findings show that deep convolutional neural networks DCNNs do not generalize well under partial occlusion. Inspired by the success of compositional models at classifying partially occluded objects, we propose to integrate compositional models and DCNNs into a unified deep model with innate robustness to partial occlusion. We term this architecture Compositional Convolutional Neural Network. In particular, we propose to replace the fully connected classification head of a DCNN with a differentiable compositional model. The generative nature of the compositional model enables it to localize occluders and subsequently focus on the nonoccluded parts of the object. We conduct classification experiments on artificially occluded images as well as real images of partially occluded objects from the MSCOCO dataset. The results show that DCNNs do not classify occluded objects robustly, even when trained with data that is strongly augmented with partial occlusions. Our proposed model outperforms standard DCNNs by a large margin at classifying partially occluded objects, even when it has not been exposed to occluded objects during training. Additional experiments demonstrate that CompositionalNets can also localize the occluders accurately, despite being trained with class labels only. The code used in this work is publicly available.
Inaccessible information in probabilistic models of quantum systems, noncontextuality inequalities and noise thresholds for contextuality ; Classical probabilistic models of noisy quantum systems are not only relevant for understanding the nonclassical features of quantum mechanics, but they are also useful for determining the possible advantage of using quantum resources for information processing tasks. A common feature of these models is the presence of inaccessible information, as captured by the concept of preparation contextuality: There are ensembles of quantum states described by the same density operator, and hence operationally indistinguishable, and yet in any probabilistic ontological model, they should be described by distinct probability distributions. In this work, we quantify the inaccessible information of a model in terms of the maximum distinguishability of probability distributions associated to any pair of ensembles with identical density operators, as quantified by the total variation distance of the distributions. We obtain a family of lower bounds on this maximum distinguishability in terms of experimentally measurable quantities. In the case of an ideal qubit this leads to a lower bound of, approximately, 0.07. These bounds can also be interpreted as a new class of robust preparation noncontextuality inequalities. Our noncontextuality inequalities are phrased in terms of generalizations of max-relative entropy and trace distance for general operational theories, which could be of independent interest. Under sufficiently strong noise any quantum system becomes preparation noncontextual, i.e., can be described by models with zero inaccessible information. Using our noncontextuality inequalities, we show that this can happen only if the noise channel has the average gate fidelity less than or equal to $\frac{1}{D}\left(1+\frac{1}{2}+\cdots+\frac{1}{D}\right)$, where $D$ is the dimension of the Hilbert space.
Hysteretic Mutual Synchronization of PERP-STNO Pairs Analyzed by a Generalized Pendulum-like Model ; At present, the Kuramoto model is the standard and widely accepted theoretical approach for analyzing the synchronization of spin-torque nano-oscillators (STNOs) coupled by an interaction. Nevertheless, the oscillatory decaying regime as well as the initial-condition (IC) dependence (hysteresis) that exist in the synchronization of many types of STNOs cannot be explained by this model. In order to more precisely elucidate the physical mechanisms behind the two phenomena, in this paper we develop a generalized pendulum-like model based on the two common features of nonlinear auto-oscillators: one is the stability of the amplitude (energy) of dynamic states; the other is the nonlinear dynamic state energy of oscillators. In this new model, we find that the Newtonian-like particle with sufficient kinetic energy can overcome the barrier of the phase-locking potential to evolve into a stable asynchronization (AS) state, leading to the IC-dependent synchronization. Furthermore, due to the presence of the kinetic energy, this particle can also oscillate around the minima of the phase-locking potential, leading to the oscillatory decaying regime. Thereby, in this work, we adopt this new model to analyze the IC-dependent mutual synchronization of perpendicular-to-plane (PERP) STNO pairs, and then we suggest that the initial conditions can be controlled to avoid such a phenomenon by using magnetic dipolar coupling.
A general framework for causal classification ; In many applications, there is a need to predict the effect of an intervention on different individuals from data. For example, which customers are persuadable by a product promotion? Which patients should be treated with a certain type of treatment? These are typical causal questions involving the effect or the change in outcomes made by an intervention. The questions cannot be answered with traditional classification methods as they only use associations to predict outcomes. For personalised marketing, these questions are often answered with uplift modelling. The objective of uplift modelling is to estimate causal effect, but its literature does not discuss when the uplift represents causal effect. Causal heterogeneity modelling can solve the problem, but its assumption of unconfoundedness is untestable in data. So practitioners need guidelines in their applications when using the methods. In this paper, we use causal classification for a set of personalised decision making problems, and differentiate it from classification. We discuss the conditions when causal classification can be resolved by uplift (causal heterogeneity) modelling methods. We also propose a general framework for causal classification, by using off-the-shelf supervised methods for flexible implementations. Experiments have shown two instantiations of the framework work for causal classification and for uplift (causal heterogeneity) modelling, and are competitive with the other uplift (causal heterogeneity) modelling methods.
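To illustrate how off-the-shelf supervised learners can be plugged into such a framework, here is a minimal two-model ("T-learner") uplift sketch. It is one generic instantiation under an assumed binary treatment indicator, not necessarily either of the paper's two instantiations, and all names are assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_model_uplift(X, treatment, outcome, X_new):
    """T-learner sketch: uplift estimate = p(y | treated) - p(y | control).

    X: feature matrix; treatment: 0/1 array; outcome: 0/1 array.
    Any off-the-shelf classifier with predict_proba could replace the base
    learner used here.
    """
    treated = treatment == 1
    model_t = LogisticRegression(max_iter=1000).fit(X[treated], outcome[treated])
    model_c = LogisticRegression(max_iter=1000).fit(X[~treated], outcome[~treated])
    return model_t.predict_proba(X_new)[:, 1] - model_c.predict_proba(X_new)[:, 1]
```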
Genetic Algorithmic Parameter Optimisation of a Recurrent Spiking Neural Network Model ; Neural networks are complex algorithms that loosely model the behaviour of the human brain. They play a significant role in computational neuroscience and artificial intelligence. The next generation of neural network models is based on the spike timing activity of neurons: spiking neural networks (SNNs). However, model parameters in SNNs are difficult to search and optimise. Previous studies using genetic algorithm (GA) optimisation of SNNs were focused mainly on simple, feedforward, or oscillatory networks, but not much work has been done on optimising cortex-like recurrent SNNs. In this work, we investigated the use of GAs to search for optimal parameters in recurrent SNNs to reach targeted neuronal population firing rates, e.g. as in experimental observations. We considered a cortical column based SNN comprising 1000 Izhikevich spiking neurons, for computational efficiency and biological realism. The model parameters explored were the neuronal biased input currents. First, we found, for this particular SNN, the optimal parameter values for targeted population averaged firing activities, and the convergence of the algorithm by 100 generations. We then showed that the GA optimal population size was within 16-20 while the crossover rate that returned the best fitness value was 0.95. Overall, we have successfully demonstrated the feasibility of implementing GA to optimise model parameters in a recurrent cortical based SNN.
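For orientation, a generic real-valued GA loop of the kind used for such parameter searches might look as follows. This is a bare-bones sketch (truncation selection, uniform crossover, Gaussian mutation) rather than the authors' implementation; the population size and crossover rate defaults merely echo the values reported above, and all names are assumptions.

```python
import numpy as np

def genetic_search(fitness, bounds, pop_size=20, generations=100,
                   crossover_rate=0.95, mutation_scale=0.1, rng=None):
    """Minimal real-valued GA for tuning model parameters.

    fitness: callable mapping a parameter vector to a score to maximize
             (e.g., negative error between simulated and target firing rates).
    bounds:  array of shape (n_params, 2) with lower/upper limits.
    """
    rng = np.random.default_rng() if rng is None else rng
    bounds = np.asarray(bounds, dtype=float)
    low, high = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < crossover_rate:
                child = np.where(rng.random(len(a)) < 0.5, a, b)  # uniform crossover
            else:
                child = a.copy()
            child = child + rng.normal(0, mutation_scale * (high - low))  # mutation
            children.append(np.clip(child, low, high))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]
```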
Geometry and solutions of an epidemic SIS model permitting fluctuations and quantization ; Some recent works reveal that there are models of differential equations for the mean and variance of infected individuals that reproduce the SIS epidemic model at some point. This stochastic SIS epidemic model can be interpreted as a Hamiltonian system, therefore we wondered whether it could be geometrically handled through the theory of Lie-Hamilton systems, and this happened to be the case. The primordial result is that we are able to obtain a general solution for the stochastic SIS epidemic model with fluctuations in the form of a nonlinear superposition rule that includes particular stochastic solutions and certain constants to be related to the initial conditions of the contagion process. The choice of these initial conditions will be crucial to display the expected behavior of the curve of infections during the epidemic. We shall limit these constants to nonsingular regimes and display graphics of the behavior of the solutions. As one could expect, the increase of infected individuals follows a sigmoid-like curve. Lie-Hamilton systems admit a quantum deformation, and so does the stochastic SIS epidemic model. We present this generalization as well. If one wants to study the evolution of an SIS epidemic under the influence of a constant heat source (like centrally heated buildings), one can make use of quantum stochastic differential equations coming from the so-called quantum deformation.
Test and Visualization of Covariance Properties for Multivariate SpatioTemporal Random Fields ; The prevalence of multivariate spacetime data collected from monitoring networks and satellites, or generated from numerical models, has brought much attention to multivariate spatiotemporal statistical models, where the covariance function plays a key role in modeling, inference, and prediction. For multivariate spacetime data, understanding the spatiotemporal variability, within and across variables, is essential in employing a realistic covariance model. Meanwhile, the complexity of generic covariances often makes model fitting very challenging, and simplified covariance structures, including symmetry and separability, can reduce the model complexity and facilitate the inference procedure. However, a careful examination of these properties is needed in real applications. In the work presented here, we formally define these properties for multivariate spatiotemporal random fields and use functional data analysis techniques to visualize them, hence providing intuitive interpretations. We then propose a rigorous rankbased testing procedure to conclude whether the simplified properties of covariance are suitable for the underlying multivariate spacetime data. The good performance of our method is illustrated through synthetic data, for which we know the true structure. We also investigate the covariance of bivariate wind speed, a key variable in renewable energy, over a coastal and an inland area in Saudi Arabia. The Supplementary Material is available online, including the R code for our developed methods.
Control instabilities and incite slow slip in generalized Burridge-Knopoff models ; Generalized Burridge-Knopoff (GBK) models display rich dynamics, characterized by instabilities and multiple bifurcations. GBK models consist of interconnected masses that can slide on a rough surface under friction. All masses are connected to a plate, which slowly provides energy to the system. The system displays long periods of quiescence, interrupted by fast, dynamic events (avalanches of energy relaxation). During these events, clusters of blocks slide abruptly, simulating seismic slip and earthquake rupture. Here we propose a theory for preventing GBK avalanches, controlling the dynamics and inciting slow slip. We exploit the dependence of friction on pressure and use it as a backdoor for altering the dynamics of the system. We use the mathematical Theory of Control and, for the first time, we succeed in (a) stabilizing and restricting chaos in GBK models, (b) guaranteeing slow frictional dissipation and (c) tuning the GBK system toward desirable global asymptotic equilibria of lower energy. Our control approach is robust and does not require exact knowledge of the frictional behavior of the system. Finally, GBK models are known to present Self-Organized Critical (SOC) behavior. Therefore, the presented methodology shows an additional example of SOC Control (SOCC). Given that the dynamics of GBK models show many analogies with earthquakes, we expect to inspire earthquake mitigation strategies regarding anthropogenic and/or natural seismicity. In a wider perspective, our control approach could be used for improving understanding of cascade failures in complex systems in geophysics, access hidden characteristics and improve their predictability by controlling their spatiotemporal behavior in real time.
Alleviating Spatial Confounding in Spatial Frailty Models ; Spatial confounding is the name given to the confounding between fixed and spatial random effects. It has been widely studied and has gained attention in the past years in the spatial statistics literature, as it may generate unexpected results in modeling. The projection-based approach, also known as restricted models, appears as a good alternative to overcome the spatial confounding in generalized linear mixed models. However, when the support of the fixed effects differs from that of the spatial effect, this approach can no longer be applied directly. In this work, we introduce a method to alleviate the spatial confounding for the spatial frailty models family. This class of models can incorporate spatially structured effects, and it is usual to observe more than one sample unit per area, which means that the supports of the fixed and spatial effects differ. In this case, we introduce a two-folded projection-based approach: we project the design matrix to the dimension of the space and then project the random effect onto the orthogonal space of the new design matrix. To provide fast inference in our analysis we employ the integrated nested Laplace approximation methodology. The method is illustrated with an application with lung and bronchus cancer in California (US), which confirms the efficiency of the methodology.
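The second projection step is the standard device behind restricted ("projection-based") models: the spatial effect is forced onto the orthogonal complement of the fixed-effect design matrix so it can no longer compete with the fixed effects. A minimal sketch on a common support follows (the paper's two-folded version first maps the design matrix to the areal level, which this sketch omits; all names are assumptions).

```python
import numpy as np

def restricted_spatial_effect(X, spatial_effect):
    """Project a spatial random effect onto the orthogonal complement of col(X).

    X: (n, p) fixed-effect design matrix; spatial_effect: length-n vector.
    """
    Q, _ = np.linalg.qr(X)                       # orthonormal basis of col(X)
    P = Q @ Q.T                                  # projection onto col(X)
    return (np.eye(X.shape[0]) - P) @ spatial_effect
```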
Symmetries and turbulence modeling. A critical examination ; The recent study by Klingenberg, Oberlack & Pluemacher (2020) proposes a new strategy for modeling turbulence in general. A proof-of-concept is presented therein for the particular flow configuration of a spatially evolving turbulent planar jet flow, coming to the conclusion that their model can generate scaling laws which go beyond the classical ones. Our comment, however, shows that their proof-of-concept is flawed and that their newly proposed scaling laws do not go beyond any classical solutions. Hence, their argument of having established a new and more advanced turbulence model cannot be confirmed. The problem is already rooted in the modeling strategy itself, in that a nonphysical statistical scaling symmetry gets implemented. Breaking this symmetry will restore the internal consistency and will turn all self-similar solutions back to the classical ones. To note is that their model also includes a second nonphysical symmetry. One of the authors already acknowledged this fact for turbulent jet flow in a formerly published Corrigendum (Sadeghi, Oberlack & Gauding, 2020). However, the Corrigendum is not cited and so the reader is not made aware that their method has fundamental problems that lead to inconsistencies and conflicting results. Instead, the very same nonphysical symmetry gets published again. Yet, this unscientific behaviour is not corrected, but repeated and continued in the subsequent and further misleading publication (Klingenberg & Oberlack, 2022), which is examined in this update in the appendix.
A Significant Increase in Detection of High-Resolution Emission Spectra Using a Three-Dimensional Atmospheric Model of a Hot Jupiter ; High resolution spectroscopy has opened the way for new, detailed study of exoplanet atmospheres. There is evidence that this technique can be sensitive to the complex, three-dimensional (3D) atmospheric structure of these planets. In this work, we perform cross correlation analysis on high resolution (R ~ 100,000) CRIRES/VLT emission spectra of the Hot Jupiter HD 209458b. We generate template emission spectra from a 3D atmospheric circulation model of the planet, accounting for temperature structure and atmospheric motions (winds and planetary rotation) missed by spectra calculated from one-dimensional models. In this first-of-its-kind analysis, we find that using template spectra generated from a 3D model produces a more significant detection ($6.9\sigma$) of the planet's signal than any of the hundreds of one-dimensional models we tested (maximum of $5.1\sigma$). We recover the planet's thermal emission, its orbital motion, and the presence of CO in its atmosphere at high significance. Additionally, we analyzed the relative influences of 3D temperature and chemical structures in this improved detection, including the contributions from CO and H2O, as well as the role of atmospheric Doppler signatures from winds and rotation. This work shows that the Hot Jupiter's 3D atmospheric structure has a first-order influence on its emission spectra at high resolution and motivates the use of multidimensional atmospheric models in high-resolution spectral analysis.
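The cross-correlation step underlying this kind of detection can be sketched in a few lines: shift a model template over a grid of trial radial velocities, resample it onto the observed wavelength grid, and correlate. This is a bare-bones illustration (no continuum removal, weighting, or per-order handling), not the authors' pipeline; all names are assumptions.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def ccf(wave, flux, template_wave, template_flux, velocities_km_s):
    """Cross-correlate an observed spectrum with a model template over
    trial Doppler shifts; returns one value per trial radial velocity."""
    flux = flux - np.mean(flux)
    out = np.empty(len(velocities_km_s))
    for i, v in enumerate(velocities_km_s):
        shifted_wave = template_wave * (1.0 + v / C_KM_S)       # Doppler shift
        shifted = np.interp(wave, shifted_wave, template_flux)  # resample
        shifted = shifted - np.mean(shifted)
        out[i] = np.sum(flux * shifted)
    return out
```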
VIVO Visual Vocabulary Pre-Training for Novel Object Captioning ; It is highly desirable yet challenging to generate image captions that can describe novel objects which are unseen in caption-labeled training data, a capability that is evaluated in the novel object captioning challenge (nocaps). In this challenge, no additional image-caption training data, other than COCO Captions, is allowed for model training. Thus, conventional Vision-Language Pre-training (VLP) methods cannot be applied. This paper presents VIsual VOcabulary pre-training (VIVO) that performs pre-training in the absence of caption annotations. By breaking the dependency on paired image-caption training data in VLP, VIVO can leverage large amounts of paired image-tag data to learn a visual vocabulary. This is done by pre-training a multi-layer Transformer model that learns to align image-level tags with their corresponding image region features. To address the unordered nature of image tags, VIVO uses a Hungarian matching loss with masked tag prediction to conduct pre-training. We validate the effectiveness of VIVO by fine-tuning the pre-trained model for image captioning. In addition, we perform an analysis of the visual-text alignment inferred by our model. The results show that our model can not only generate fluent image captions that describe novel objects, but also identify the locations of these objects. Our single model has achieved new state-of-the-art results on nocaps and surpassed the human CIDEr score.
Matryoshka approach to Sine-Cosine topological models ; We address a particular set of extended Su-Schrieffer-Heeger models with $2n$ sites in the unit cell (SSH$2n$), that we designate by Sine-Cosine models (SC$n$), with hopping terms defined as a sequence of $n$ sine-cosine pairs of the form $(\sin\theta_j, \cos\theta_j)$, $j=1,\cdots,n$. These models, when squared, generate a block-diagonal matrix representation with one of the blocks corresponding to a chain with uniform local potentials. We further focus our study on the subset of SC2n1 chains that, when squared an arbitrary number of times up to $n$, always generate a block which is again a Sine-Cosine model, if an energy shift is applied and if the energy unit is renormalized. We show that these $n$-times squarable models (SSC$n$) and their band structure are uniquely determined by the sequence of energy unit renormalizations and by the energy shifts associated to each step of the squaring process. Chiral symmetry is present in all Sine-Cosine chains and edge state levels at the respective central gaps are protected by it. Zero-energy edge states in an SSC$j$ chain with $j<n$ of the Matryoshka sequence (obtained squaring the SSC$n$ chain with open boundary conditions, OBC) become finite energy edge states in non-central band gaps of the SSC$n$ chain. The extension to higher dimensions is discussed.
The Balanced Mode Decomposition Algorithm for DataDriven LPV LowOrder Models of Aeroservoelastic Systems ; A novel approach to reducedorder modeling of highdimensional time varying systems is proposed. It leverages the formalism of the Dynamic Mode Decomposition technique together with the concept of balanced realization. It is assumed that the only information available on the system comes from input, state, and output trajectories generated by numerical simulations or recorded and estimated during experiments, thus the approach is fully datadriven. The goal is to obtain an inputoutput low dimensional linear model which approximates the system across its operating range. Since the dynamics of aeroservoelastic systems markedly changes in operation e.g. due to change in flight speed or altitude, timevarying features are retained in the constructed models. This is achieved by generating a Linear ParameterVarying representation made of a collection of stateconsistent linear timeinvariant reducedorder models. The algorithm formulation hinges on the idea of replacing the orthogonal projection onto the Proper Orthogonal Decomposition modes, used in Dynamic Mode Decompositionbased approaches, with a balancing oblique projection constructed entirely from data. As a consequence, the inputoutput information captured in the lowerdimensional representation is increased compared to other projections onto subspaces of same or lower size. Moreover, a parametervarying projection is possible while also achieving stateconsistency. The validity of the proposed approach is demonstrated on a morphing wing for airborne wind energy applications by comparing the performance against two algorithms recently proposed in the literature. Comparisons cover both prediction accuracy and performance in model predictive control applications.
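As background for the modification described above, the projection at the core of standard DMD-based reduced-order modeling can be sketched as follows. This is the textbook exact-DMD construction, not the Balanced Mode Decomposition algorithm itself (which replaces the POD-based orthogonal projection used here with a balancing oblique projection built from input/output data); all names are assumptions.

```python
import numpy as np

def dmd(X, X_next, rank):
    """Exact Dynamic Mode Decomposition: fit X_next ~= A X with A of low rank.

    X, X_next: (n_states, n_snapshots) arrays of successive snapshots.
    Returns the DMD eigenvalues and modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = Ur.conj().T @ X_next @ Vr / sr      # reduced operator (POD projection)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X_next @ Vr / sr @ W                  # exact DMD modes
    return eigvals, modes
```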
SceML A Graphical Modeling Framework for Scenariobased Testing of Autonomous Vehicles ; Ensuring the functional correctness and safety of autonomous vehicles is a major challenge for the automotive industry. However, exhaustive physical test drives are not feasible, as billions of driven kilometers would be required to obtain reliable results. Scenariobased testing is an approach to tackle this problem and reduce necessary test drives by replacing driven kilometers with simulations of relevant or interesting scenarios. These scenarios can be generated or extracted from recorded data with machine learning algorithms or created by experts. In this paper, we propose a novel graphical scenario modeling language. The graphical framework allows experts to create new scenarios or review ones designed by other experts or generated by machine learning algorithms. The scenario description is modeled as a graph and based on behavior trees. It supports different abstraction levels of scenario description during software and test development. Additionally, the graphbased structure provides modularity and reusable subscenarios, an important use case in scenario modeling. A graphical visualization of the scenario enhances comprehensibility for different users. The presented approach eases the scenario creation process and increases the usage of scenarios within development and testing processes.
Perfectoid Diamonds and n-Awareness. A Meta-Model of Subjective Experience ; In this paper, we propose a mathematical model of subjective experience in terms of classes of hierarchical geometries of representations (n-awareness). We first outline a general framework by recalling concepts from higher category theory, homotopy theory, and the theory of $(\infty,1)$-topoi. We then state three conjectures that enrich this framework. We first propose that the $(\infty,1)$-category of a geometric structure known as perfectoid diamond is an $(\infty,1)$-topos. In order to construct a topology on the $(\infty,1)$-category of diamonds we then propose that topological localization, in the sense of Grothendieck-Rezk-Lurie $(\infty,1)$-topoi, extends to the $(\infty,1)$-category of diamonds. We provide a small-scale model using triangulated categories. Finally, our meta-model takes the form of Efimov K-theory of the $(\infty,1)$-category of perfectoid diamonds, which illustrates structural equivalences between the category of diamonds and subjective experience (i.e. its privacy, self-containedness, and self-reflexivity). Based on this, we investigate implications of the model. We posit a grammar (n-declension) for a novel language to express n-awareness, accompanied by a new temporal scheme (n-time). Our framework allows us to revisit old problems in the philosophy of time: how is change possible and what do we mean by simultaneity and coincidence? We also examine the notion of self within our framework. A new model of personal identity is introduced which resembles a categorical version of the bundle theory; selves are not substances in which properties inhere but weakly persistent moduli spaces in the K-theory of perfectoid diamonds.
Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation ; Although deep neural networks (DNNs) have achieved enormous success in many domains like natural language processing (NLP), they have also been proven to be vulnerable to maliciously generated adversarial examples. Such inherent vulnerability has threatened various real-world deployed DNN-based applications. To strengthen the model robustness, several countermeasures have been proposed in the English NLP domain and obtained satisfactory performance. However, due to the unique language properties of Chinese, it is not trivial to extend existing defenses to the Chinese domain. Therefore, we propose AdvGraph, a novel defense which enhances the robustness of Chinese-based NLP models by incorporating adversarial knowledge into the semantic representation of the input. Extensive experiments on two real-world tasks show that AdvGraph exhibits better performance compared with previous work: (i) effective - it significantly strengthens the model robustness even under the adaptive attack setting, without negative impact on model performance over legitimate input; (ii) generic - its key component, i.e., the representation of connotative adversarial knowledge, is task-agnostic and can be reused in any Chinese-based NLP models without retraining; and (iii) efficient - it is a lightweight defense with sub-linear computational complexity, which can guarantee the efficiency required in practical scenarios.
Nonuniqueness in quasar absorption models and implications for measurements of the fine structure constant ; High resolution spectra of quasar absorption systems provide the best constraints on temporal or spatial changes of fundamental constants in the early universe. An important systematic that has never before been quantified concerns model nonuniqueness. The absorption structure is generally complicated, comprising many blended lines. This characteristic means any given system can be fitted equally well by many slightly different models, each having a different value of $\alpha$, the fine structure constant. We use AI Monte Carlo modelling to quantify nonuniqueness. Extensive supercomputer calculations are reported, revealing new systematic effects that guide future analyses: (i) Whilst higher signal-to-noise and improved spectral resolution produce a smaller statistical uncertainty for $\alpha$, model nonuniqueness adds a significant additional uncertainty. (ii) Nonuniqueness depends on the line broadening mechanism used. We show that modelling the spectral data using turbulent line broadening results in far greater nonuniqueness, hence this should no longer be done. Instead, for varying-$\alpha$ studies, it is important to use the more physically appropriate compound broadening. (iii) We have studied two absorption systems in detail. Generalising thus requires caution. Nevertheless, if nonuniqueness is present in all or most quasar absorption systems, it seems unavoidable that attempts to determine the existence or nonexistence of spacetime variations of fundamental constants are best approached using a statistical sample.
Learning Invariant Representation with Consistency and Diversity for Semisupervised Source Hypothesis Transfer ; Semisupervised domain adaptation SSDA aims to solve tasks in target domain by utilizing transferable information learned from the available source domain and a few labeled target data. However, source data is not always accessible in practical scenarios, which restricts the application of SSDA in real world circumstances. In this paper, we propose a novel task named Semisupervised Source Hypothesis Transfer SSHT, which performs domain adaptation based on source trained model, to generalize well in target domain with a few supervisions. In SSHT, we are facing two challenges 1 The insufficient labeled target data may result in target features near the decision boundary, with the increased risk of misclassification; 2 The data are usually imbalanced in source domain, so the model trained with these data is biased. The biased model is prone to categorize samples of minority categories into majority ones, resulting in low prediction diversity. To tackle the above issues, we propose Consistency and Diversity Learning CDL, a simple but effective framework for SSHT by facilitating prediction consistency between two randomly augmented unlabeled data and maintaining the prediction diversity when adapting model to target domain. Encouraging consistency regularization brings difficulty to memorize the few labeled target data and thus enhances the generalization ability of the learned model. We further integrate Batch Nuclearnorm Maximization into our method to enhance the discriminability and diversity. Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on DomainNet, OfficeHome and Office31 datasets. The code is available at httpsgithub.comWangxd1899SSHT.
Accretion of a Vlasov gas on to a black hole from a sphere of finite radius and the role of angular momentum ; The accretion of a spherically symmetric, collisionless kinetic gas cloud on to a Schwarzschild black hole is analysed. Whereas previous studies have treated this problem by specifying boundary conditions at infinity, here the properties of the gas are given at a sphere of finite radius. The corresponding steady-state solutions are computed using four different models with an increasing level of sophistication, starting with the purely radial infall of Newtonian particles and culminating with a fully general relativistic calculation in which individual particles have angular momentum. The resulting mass accretion rates are analysed and compared with previous models, including the standard Bondi model for a hydrodynamic flow. We apply our models to the supermassive black holes Sgr A* and M87, and we discuss how their low luminosity could be partially explained by a kinetic description involving angular momentum. Furthermore, we get results consistent with previous model-dependent bounds for the accretion rate imposed by rotation measures of the polarised light coming from Sgr A* and with estimations of the accretion rate of M87 from the Event Horizon Telescope collaboration. Our methods and results could serve as a first approximation for more realistic black hole accretion models in various astrophysical scenarios in which the accreted material is expected to be nearly collisionless.
Direct speechtospeech translation with discrete units ; We present a direct speechtospeech translation S2ST model that translates speech from one language to speech in another language without relying on intermediate text generation. We tackle the problem by first applying a selfsupervised discrete speech encoder on the target speech and then training a sequencetosequence speechtounit translation S2UT model to predict the discrete representations of the target speech. When target text transcripts are available, we design a joint speech and text training framework that enables the model to generate dual modality output speech and text simultaneously in the same inference pass. Experiments on the Fisher SpanishEnglish dataset show that the proposed framework yields improvement of 6.7 BLEU compared with a baseline direct S2ST model that predicts spectrogram features. When trained without any text transcripts, our model performance is comparable to models that predict spectrograms and are trained with text supervision, showing the potential of our system for translation between unwritten languages. Audio samples are available at httpsfacebookresearch.github.iospeechtranslationdirects2stunitsindex.html .
Compact and Optimal Deep Learning with Recurrent Parameter Generators ; Deep learning has achieved tremendous success by training increasingly large models, which are then compressed for practical deployment. We propose a drastically different approach to compact and optimal deep learning: We decouple the Degrees of Freedom (DoF) and the actual number of parameters of a model, and optimize a small DoF with predefined random linear constraints for a large model of arbitrary architecture, in one-stage end-to-end learning. Specifically, we create a recurrent parameter generator (RPG), which repeatedly fetches parameters from a ring and unpacks them onto a large model with random permutation and sign flipping to promote parameter decorrelation. We show that gradient descent can automatically find the best model under constraints with faster convergence. Our extensive experimentation reveals a log-linear relationship between model DoF and accuracy. Our RPG demonstrates remarkable DoF reduction and can be further pruned and quantized for additional run-time performance gain. For example, in terms of top-1 accuracy on ImageNet, RPG achieves 96% of ResNet-18's performance with only 18% DoF (the equivalent of one convolutional layer) and 52% of ResNet-34's performance with only 0.25% DoF. Our work shows a significant potential of constrained neural optimization in compact and optimal deep learning.
Facetron: A Multi-speaker Face-to-Speech Model based on Cross-modal Latent Representations ; In this paper, we propose a multi-speaker face-to-speech waveform generation model that also works for unseen speaker conditions. Using a generative adversarial network (GAN) with linguistic and speaker characteristic features as auxiliary conditions, our method directly converts face images into speech waveforms under an end-to-end training framework. The linguistic features are extracted from lip movements using a lip-reading model, and the speaker characteristic features are predicted from face images using cross-modal learning with a pre-trained acoustic model. Since these two features are uncorrelated and controlled independently, we can flexibly synthesize speech waveforms whose speaker characteristics vary depending on the input face images. We show the superiority of our proposed model over conventional methods in terms of objective and subjective evaluation results. Specifically, we evaluate the performances of linguistic features by measuring their accuracy on an automatic speech recognition task. In addition, we estimate speaker and gender similarity for multi-speaker and unseen conditions, respectively. We also evaluate the naturalness of the synthesized speech waveforms using a mean opinion score (MOS) test and non-intrusive objective speech quality assessment (NISQA). The demo samples of the proposed and other models are available at httpssam0927.github.io
An In-Depth Analysis of Stochastic Kronecker Graphs ; Graph analysis is playing an increasingly important role in science and industry. Due to numerous limitations in sharing real-world graphs, models for generating massive graphs are critical for developing better algorithms. In this paper, we analyze the stochastic Kronecker graph model (SKG), which is the foundation of the Graph500 supercomputer benchmark due to its favorable properties and easy parallelization. Our goal is to provide a deeper understanding of the parameters and properties of this model so that its functionality as a benchmark is increased. We develop a rigorous mathematical analysis that shows this model cannot generate a power-law distribution or even a lognormal distribution. However, we formalize an enhanced version of the SKG model that uses random noise for smoothing. We prove both in theory and in practice that this enhancement leads to a lognormal distribution. Additionally, we provide a precise analysis of isolated vertices, showing that the graphs that are produced by SKG might be quite different than intended. For example, between 50% and 75% of the vertices in the Graph500 benchmarks will be isolated. Finally, we show that this model tends to produce extremely small core numbers compared to most social networks and other real graphs for common parameter choices.
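To fix intuition, a single SKG edge is sampled by recursively descending through the quadrants of the adjacency matrix according to a 2x2 initiator. The sketch below also perturbs the initiator at each level, in the spirit of the noise-smoothed variant discussed above; the exact smoothing scheme used in the paper may differ, the perturbation here must stay small enough to keep entries non-negative, and all names are assumptions.

```python
import random

def skg_edge(initiator, levels, noise=0.0):
    """Sample one edge (row, col) of a stochastic Kronecker graph with 2**levels vertices.

    initiator: 2x2 matrix [[a, b], [c, d]] of non-negative weights.
    noise:     per-level perturbation magnitude used for smoothing.
    """
    row = col = 0
    for _ in range(levels):
        a, b = initiator[0]
        c, d = initiator[1]
        mu = random.uniform(-noise, noise)   # perturb, keeping the total weight fixed
        a, d = a + mu, d + mu
        b, c = b - mu, c - mu
        r = random.random() * (a + b + c + d)
        if r < a:
            q = (0, 0)
        elif r < a + b:
            q = (0, 1)
        elif r < a + b + c:
            q = (1, 0)
        else:
            q = (1, 1)
        row = 2 * row + q[0]
        col = 2 * col + q[1]
    return row, col
```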
Luminosities of recycled radio pulsars in globular clusters ; Using Monte Carlo simulations, we model the luminosity distribution of recycled pulsars in globular clusters as the brighter, observable part of an intrinsic distribution and find that the observed luminosities can be reproduced using either lognormal or powerlaw distributions as the underlying luminosity function. For both distributions, a wide range of model parameters provide an acceptable match to the observed sample, with the lognormal function providing statistically better agreement in general than the powerlaw models. Moreover, the powerlaw models predict a parent population size that is a factor of between two and ten times higher than for the lognormal models. We note that the lognormal luminosity distribution found for the normal pulsar population by FaucherGiguere and Kaspi is consistent with the observed luminosities of globular cluster pulsars. For Terzan 5, our simulations show that the sample of detectable radio pulsars, and the diffuse radio flux measurement, can be explained using the lognormal luminosity law with a parent population of ~150 pulsars. Measurements of diffuse gammaray fluxes for several clusters can be explained by both powerlaw and lognormal models, with the lognormal distributions again providing a better match in general. In contrast to previous studies, we do not find any strong evidence for a correlation between the number of pulsars inferred in globular clusters and globular cluster parameters including metallicity and stellar encounter rate.
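A toy version of such a Monte Carlo experiment is sketched below; the lognormal parameters, cluster distance and sensitivity limit are purely illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def observable_fraction(n_pulsars, mu=-1.1, sigma=0.9, s_min=0.03,
                        d_kpc=5.9, trials=200, seed=0):
    """Toy Monte Carlo: draw pulsar luminosities from a lognormal law,
    convert to flux at an assumed cluster distance, and count how many lie
    above an assumed survey sensitivity limit (all numbers illustrative)."""
    rng = np.random.default_rng(seed)
    detected = []
    for _ in range(trials):
        L = 10 ** rng.normal(mu, sigma, size=n_pulsars)   # luminosity, mJy kpc^2
        S = L / d_kpc ** 2                                 # flux density, mJy
        detected.append(np.sum(S > s_min))
    return float(np.mean(detected))                        # mean detectable count
```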
Analysis on a NambuJonaLasinio Model of Dynamical Supersymmetry Breaking ; This is a report on our newly proposed model of dynamical supersymmetry breaking, with some details of the analysis involved. The model in its simplest version has only a chiral superfield multiplet, with a strong foursuperfield interaction in the Kahler potential that induces a real twosuperfield composite with a vacuum condensate. The latter has supersymmetry breaking parts, which we show to admit a nontrivial solution following essentially a standard nonperturbative analysis for a NambuJonaLasinio type model in a superfield setting. The real composite superfield has a spin one component but is otherwise quite unconventional. We also discuss the parallel analysis for the effective theory with the composite. Plausible vacuum solutions are illustrated and analyzed. The supersymmetry breaking solutions generate soft masses for the scalar, avoiding the vanishing supertrace condition for the squared masses of the superfield components. We also present some analysis of the resulting low energy effective theory in which components of the composite become dynamical. The determinant of the fermionic modes is shown to be zero, illustrating the presence of the expected Goldstino. The model opens the possibility of constructing a supersymmetric standard model with all supersymmetry breaking masses generated dynamically and directly, without the necessity of complicated hidden or mediating sectors.
Cosmological consequences of an adiabatic matter creation process ; In this paper we investigate the cosmological consequences of a continuous matter creation associated with the production of particles by the gravitational field acting on the quantum vacuum. To illustrate this, three phenomenological models are considered. An equivalent scalar field description is presented for each model. The effects on the cosmic microwave background power spectrum are analyzed for the first time in the context of adiabatic matter creation cosmology. Further, we introduce a model independent treatment, Om, which depends only on the Hubble expansion rate and the cosmological redshift, to distinguish any cosmological model from LambdaCDM by providing a null test for the cosmological constant, meaning that, for any two redshifts z1, z2, Om(z) is the same, i.e. Om(z1) - Om(z2) = 0. Also, this diagnostic can differentiate between several cosmological models by indicating their quintessential or phantom behavior without knowing the accurate value of the matter density and the present value of the Hubble parameter. For our models, we find that the particle production rate is inversely proportional to Om. Finally, the validity of the generalized second law of thermodynamics bounded by the apparent horizon has been examined.
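For reference, the Om diagnostic is commonly defined as follows (the standard form due to Sahni, Shafieloo and Starobinsky is assumed here; the paper may use an equivalent expression):

```latex
Om(z) \;=\; \frac{H^{2}(z)/H_{0}^{2}-1}{(1+z)^{3}-1},
\qquad Om(z)=\Omega_{m0}\ \text{for all } z \text{ in flat } \Lambda\text{CDM},
```

so that Om(z1) - Om(z2) = 0 at any pair of redshifts provides the null test mentioned above, and a redshift dependence of Om signals a departure from a cosmological constant.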
Interacting 3form dark energy models distinguishing interactions and avoiding the Little Sibling of the Big Rip ; In this paper we consider 3form dark energy DE models with interactions in the dark sector. We aim to distinguish the phenomenological interactions that are defined through the dark matter DM and the DE energy densities. We do our analysis mainly in two stages. In the first stage, we identify the noninteracting 3form DE model which generically leads to an abrupt latetime cosmological event known as the little sibling of the Big Rip LSBR. We classify the interactions which can possibly avoid this latetime abrupt event. We also study the parameter space of the model that is consistent with the interaction between DM and DE energy densities at present as indicated by recent studies based on BAO and SDSS data. In the later stage, we observationally distinguish those interactions using the statefinder hierarchy pairs {S_3^(1), S_4^(1)} and {S_3^(1), S_5^(1)}. We also compute the growth factor parameter epsilon(z) for the various interactions we consider herein and use the composite null diagnostic CND {S_3^(1), epsilon(z)} as a tool to characterise those interactions by measuring their departures from the concordance model. In addition, we make a preliminary analysis of our model in light of the recently released data by SDSSIII on the measurement of the linear growth rate of structure.
Starobinsky cosmological model in Palatini formalism ; We classify singularities in FRW cosmologies whose dynamics can be reduced to a dynamical system of the Newtonian type. This classification is performed in terms of the geometry of a potential function, in particular whether it has poles. At the sewn singularity, which is of a finite scale factor type, the singularity in the past meets the singularity in the future. We show that such singularities appear in the Starobinsky model with f(\hat{R}) = \hat{R} + \gamma \hat{R}^2 in the Palatini formalism, when the dynamics is determined by the corresponding piecewise smooth dynamical system. As an effect we obtain a degenerate singularity. Analytical calculations are given for the cosmological model with matter and the cosmological constant. The dynamics of the model is also studied using dynamical system methods. From the phase portraits we find generic evolutionary scenarios of the evolution of the Universe. For this model, the best fit value of \Omega_\gamma = 3\gamma H_0^2 is equal to 9.70 \times 10^{-11}. We consider the model in both the Jordan and Einstein frames. We show that after transition to the Einstein frame we obtain both the form of the potential of the scalar field and the decaying Lambda term.
Pseudospectral Maxwell solvers for an accurate modeling of Doppler harmonic generation on plasma mirrors with ParticleInCell codes ; With the advent of PW class lasers, the very large laser intensities attainable ontarget should enable the production of intense high order Doppler harmonics from relativistic laserplasma mirror interactions. At present, the modeling of these harmonics with ParticleInCell PIC codes is extremely challenging as it implies an accurate description of tens of harmonic orders on a broad range of angles. In particular, we show here that standard Finite Difference Time Domain FDTD Maxwell solvers used in most PIC codes partly fail to model Doppler harmonic generation because they induce numerical dispersion of electromagnetic waves in vacuum, which is responsible for a spurious angular deviation of harmonic beams. This effect was extensively studied and a simple toy model based on the SnellDescartes law was developed that allows us to finely predict the angular deviation of harmonics depending on the spatiotemporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the mitigation of this numerical artifact with FDTD solvers mandates very high spatiotemporal resolution, preventing realistic 3D simulations. We finally show that nondispersive pseudospectral analytical time domain solvers can considerably reduce the spatiotemporal resolution required to mitigate this spurious deviation and should enable, in the near future, accurate 3D modeling on supercomputers with a realistic timetosolution.
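The angular deviation discussed here originates from the anisotropic numerical dispersion of the Yee/FDTD scheme. A minimal sketch of that dispersion relation in 2D (the generic textbook form, not the paper's solvers) is:

```python
import numpy as np

def yee_phase_velocity(ppw, theta_deg, courant=0.7):
    """Numerical phase velocity (in units of c) of a 2D Yee/FDTD scheme for a
    plane wave sampled with `ppw` points per wavelength and propagating at
    angle `theta_deg`, with dx = dy and Courant number c*dt/dx = courant."""
    theta = np.deg2rad(theta_deg)
    k_dx = 2 * np.pi / ppw                               # k * dx
    kx, ky = k_dx * np.cos(theta), k_dx * np.sin(theta)
    # dispersion relation: sin^2(w dt/2)/(c dt)^2 = sin^2(kx dx/2)/dx^2 + sin^2(ky dy/2)/dy^2
    rhs = np.sqrt(np.sin(kx / 2) ** 2 + np.sin(ky / 2) ** 2)
    w_dt = 2 * np.arcsin(np.clip(courant * rhs, 0.0, 1.0))
    return (w_dt / courant) / k_dx                       # v_phase / c
```

Sweeping theta_deg shows that the numerical phase velocity depends on the propagation angle, which is the mechanism behind the spurious deviation of the harmonic beams; nondispersive pseudospectral solvers have v_phase = c for all resolved angles.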
INLA goes extreme Bayesian tail regression for the estimation of high spatiotemporal quantiles ; This work has been motivated by the challenge of the 2017 conference on ExtremeValue Analysis EVA2017, with the goal of predicting daily precipitation quantiles at the 99.8 level for each month at observed and unobserved locations. We here develop a Bayesian generalized additive modeling framework tailored to estimate complex trends in marginal extremes observed over space and time. Our approach is based on a set of regression equations linked to the exceedance probability above a high threshold and to the size of the excess, the latter being modeled using the generalized Pareto GP distribution suggested by ExtremeValue Theory. Latent random effects are modeled additively and semiparametrically using Gaussian process priors, which provides high flexibility and interpretability. Fast and accurate estimation of posterior distributions may be performed thanks to the Integrated Nested Laplace approximation INLA, efficiently implemented in the RINLA software, which we also use for determining a nonstationary threshold based on a model for the body of the distribution. We show that the GP distribution meets the theoretical requirements of INLA, and we then develop a penalized complexity prior specification for the tail index, which is a crucial parameter for extrapolating tail event probabilities. This prior concentrates mass close to a light exponential tail while allowing heavier tails by penalizing the distance to the exponential distribution. We illustrate this methodology through the modeling of spatial and seasonal trends in daily precipitation data provided by the EVA2017 challenge. Capitalizing on RINLA's fast computation capacities and large distributed computing resources, we conduct an extensive crossvalidation study to select model parameters governing the smoothness of trends. Our results outperform simple benchmarks and are comparable to the bestscoring approach.
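As background for the exceedance model above, a plain (non-Bayesian, non-spatial) peaks-over-threshold fit of the generalized Pareto distribution to excesses, followed by the standard high-quantile formula, could look like the following sketch (scipy-based; not the R-INLA model of the paper):

```python
import numpy as np
from scipy import stats

def fit_tail(x, prob_threshold=0.95, p=0.998):
    """Minimal POT sketch: x is a 1D numpy array of observations. Pick a high
    empirical quantile as threshold, fit a GP distribution to the excesses,
    and return the quantile at level p via the usual POT formula."""
    u = np.quantile(x, prob_threshold)
    excess = x[x > u] - u
    shape, _, scale = stats.genpareto.fit(excess, floc=0)   # location fixed at 0
    zeta_u = excess.size / x.size                            # exceedance rate
    # P(X > q) = zeta_u * (1 - GP_cdf(q - u))  =>  invert for the p-quantile
    q = u + stats.genpareto.ppf(1 - (1 - p) / zeta_u, shape, loc=0, scale=scale)
    return u, shape, scale, q
```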
The MeanField Approximation Information Inequalities, Algorithms, and Complexity ; The mean field approximation to the Ising model is a canonical variational tool that is used for analysis and inference in Ising models. We provide a simple and optimal bound for the KL error of the mean field approximation for Ising models on general graphs, and extend it to higher order Markov random fields. Our bound improves on previous bounds obtained in work in the graph limit literature by Borgs, Chayes, Lovász, Sós, and Vesztergombi and another recent work by Basak and Mukherjee. Our bound is tight up to lower order terms. Building on the methods used to prove the bound, along with techniques from combinatorics and optimization, we study the algorithmic problem of estimating the variational free energy for Ising models and general Markov random fields. For a graph G on n vertices and interaction matrix J with Frobenius norm ‖J‖_F, we provide algorithms that approximate the free energy within an additive error of ε n ‖J‖_F in time exp(poly(1/ε)). We also show that approximation within n ‖J‖_F^(1-δ) is NP-hard for every δ > 0. Finally, we provide more efficient approximation algorithms, which find the optimal mean field approximation, for ferromagnetic Ising models and for Ising models satisfying Dobrushin's condition.
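As a reminder of the object being bounded, the naive mean-field approximation for an Ising model can be computed by a simple fixed-point iteration; the sketch below is the generic textbook scheme, not one of the paper's algorithms:

```python
import numpy as np

def mean_field_ising(J, h, beta=1.0, n_iter=200):
    """Naive mean-field fixed point for an Ising model with symmetric
    couplings J (zero diagonal) and fields h; returns the magnetizations and
    the mean-field variational free energy (times beta, up to constants)."""
    n = len(h)
    m = np.zeros(n)
    for _ in range(n_iter):
        m = np.tanh(beta * (J @ m + h))              # m_i = tanh(beta(sum_j J_ij m_j + h_i))
    eps = 1e-12
    p = (1 + m) / 2                                   # P(spin_i = +1) under the product measure
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps)).sum()
    energy = -0.5 * m @ J @ m - h @ m                 # expected energy under the product measure
    return m, beta * energy - entropy                 # beta * F_mean-field
```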
Learning to Play with IntrinsicallyMotivated SelfAware Agents ; Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiositydriven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a worldmodel network that learns to predict the dynamic consequences of the agent's actions. Simultaneously, we train a separate explicit selfmodel that allows the agent to track the error map of its own worldmodel, and then uses the selfmodel to adversarially challenge the developing worldmodel. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including egomotion prediction, object attention, and object gathering. Moreover, the worldmodel that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that selfsupervise in complex novel physical environments.
Shadows of spherically symmetric black holes and naked singularities ; We compare shadows cast by Schwarzschild black holes with those produced by two classes of naked singularities that result from gravitational collapse of spherically symmetric matter. The latter models consist of an interior naked singularity spacetime restricted to radii r ≤ Rb, matched to Schwarzschild spacetime outside the boundary radius Rb. While a black hole always has a photon sphere and always casts a shadow, we find that the naked singularity models have photon spheres only if a certain parameter M0 that characterizes these models satisfies M0 ≥ 2/3, or equivalently, if Rb ≤ 3M, where M is the total mass of the object. Such models do produce shadows. However, models with M0 < 2/3 or Rb > 3M have no photon sphere and do not produce a shadow. Instead, they produce an interesting 'full-moon' image. These results imply that the presence of a shadow does not by itself prove that a compact object is necessarily a black hole. The object could be a naked singularity with M0 ≥ 2/3, and we will need other observational clues to distinguish the two possibilities. On the other hand, the presence of a full-moon image would certainly rule out a black hole and might suggest a naked singularity with M0 < 2/3. It would be worthwhile to generalize the present study, which is restricted to spherically symmetric models, to rotating black holes and naked singularities.
Solving Inverse Computational Imaging Problems using Deep Pixellevel Prior ; Signal reconstruction is a challenging aspect of computational imaging as it often involves solving illposed inverse problems. Recently, deep feedforward neural networks have led to stateoftheart results in solving various inverse imaging problems. However, being task specific, these networks have to be learned for each inverse problem. On the other hand, a more flexible approach would be to learn a deep generative model once and then use it as a signal prior for solving various inverse problems. We show that among the various state of the art deep generative models, autoregressive models are especially suitable for our purpose for the following reasons. First, they explicitly model the pixel level dependencies and hence are capable of reconstructing lowlevel details such as texture patterns and edges better. Second, they provide an explicit expression for the image prior which can then be used for MAP based inference along with the forward model. Third, they can model long range dependencies in images which make them ideal for handling global multiplexing as encountered in various compressive imaging systems. We demonstrate the efficacy of our proposed approach in solving three computational imaging problems Single Pixel Camera SPC, LiSens and FlatCam. For both real and simulated cases, we obtain better reconstructions than the stateoftheart methods in terms of perceptual and quantitative metrics.
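The MAP inference step described above can be summarized generically as follows, where `log_prior_grad` is a hypothetical placeholder for the gradient of the learned autoregressive log-prior and `A` stands in for the linear measurement operator of systems such as SPC, LiSens or FlatCam:

```python
import numpy as np

def map_reconstruct(y, A, log_prior_grad, n_steps=500, lr=1e-3, lam=0.1):
    """Generic MAP sketch for a linear measurement model y = A x + noise with
    a learned prior: gradient ascent on  -0.5 * ||y - A x||^2 + lam * log p(x).
    `log_prior_grad(x)` must return the gradient of the prior's log-likelihood."""
    x = A.T @ y                                   # simple back-projection initialization
    for _ in range(n_steps):
        grad_data = A.T @ (y - A @ x)             # gradient of -0.5 * ||y - A x||^2
        x = x + lr * (grad_data + lam * log_prior_grad(x))
    return x
```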
An InformationPercolation Bound for Spin Synchronization on General Graphs ; This paper considers the problem of reconstructing n independent uniform spins X_1, ..., X_n living on the vertices of an nvertex graph G, by observing their interactions on the edges of the graph. This captures instances of models such as i broadcasting on trees, ii block models, iii synchronization on grids, iv spiked Wigner models. The paper gives an upperbound on the mutual information between two vertices in terms of a bond percolation estimate. Namely, the information between two vertices' spins is bounded by the probability that these vertices are connected in a bond percolation model, where edges are opened with a probability that emulates the edgeinformation. Both the information and the openprobability are based on the Chisquared mutual information. The main results allow us to rederive known results for informationtheoretic nonreconstruction in models i-iv, with more direct or improved bounds in some cases, and to obtain new results, such as for a spiked Wigner model on grids. The main result also implies a new subadditivity property for the Chisquared mutual information for symmetric channels and general graphs, extending the subadditivity property obtained by EvansKenyonPeresSchulman EKPS00 for trees.
Nonintrusive Subdomain PODTPWL Algorithm for Reservoir History Matching ; This paper presents a nonintrusive subdomain PODTPWL SD PODTPWL algorithm for reservoir data assimilation through integrating domain decomposition DD, radial basis function RBF interpolation and the trajectory piecewise linearization TPWL. It is an efficient approach for model reduction and linearization of general nonlinear timedependent dynamical systems without intruding into the legacy source code. In the subdomain PODTPWL algorithm, firstly, a sequence of snapshots over the entire computational domain is saved and then partitioned into subdomains. From the local sequence of snapshots over each subdomain, a number of local basis vectors is formed using POD, and the RBF interpolation is then used to estimate the derivative matrices for each subdomain. Finally, those derivative matrices are substituted into a PODTPWL algorithm to form a reducedorder linear model in each subdomain. This reducedorder linear model makes the implementation of the adjoint easy, resulting in an efficient adjointbased parameter estimation procedure. The performance of the new adjointbased parameter estimation algorithm has been assessed through several synthetic cases. Comparisons with classic finitedifference based history matching show that our proposed subdomain PODTPWL approach obtains comparable results. The number of fullorder model simulations required is roughly 2-3 times the number of uncertain parameters. Using different background parameter realizations, our approach efficiently generates an ensemble of calibrated models without additional fullorder model simulations.
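The POD stage used per subdomain amounts to a truncated SVD of the local snapshot matrix; a minimal sketch (omitting the RBF interpolation and TPWL stages described above) is:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Sketch of the per-subdomain POD step: `snapshots` has one state vector
    per column; keep the leading left singular vectors that capture the
    requested fraction of the snapshot energy."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)   # center the snapshots
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1               # number of retained modes
    return U[:, :r]    # reduced state = U_r.T @ (x - mean), full state ~ mean + U_r @ z
```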
CageNet Fracton Models ; We introduce a class of gapped threedimensional models, dubbed cagenet fracton models, which host immobile fracton excitations in addition to nonAbelian particles with restricted mobility. Starting from layers of twodimensional stringnet models, whose spectrum includes nonAbelian anyons, we condense extended onedimensional fluxstrings built out of pointlike excitations. Fluxstring condensation generalizes the concept of anyon condensation familiar from conventional topological order and allows us to establish properties of the fracton ordered or, equivalently, fluxstring condensed phase, such as its ground state wave function and spectrum of excitations. Through the examples of doubled Ising and SU(2)_k cagenet models, we demonstrate the existence of strictly immobile Abelian fractons and of nonAbelian particles restricted to move only along one dimension. In the doubled Ising cagenet model, we show that these restrictedmobility nonAbelian excitations are a fundamentally threedimensional phenomenon, as they cannot be understood as bound states amongst twodimensional nonAbelian anyons and Abelian particles. We further show that the ground state wave function of such phases can be understood as a fluctuating network of extended objects cages and strings, which we dub a cagenet condensate. Besides having implications for topological quantum computation in three dimensions, our work may also point the way towards more general insights into quantum phases of matter with fracton order.
Using Clinical Narratives and Structured Data to Identify Distant Recurrences in Breast Cancer ; Accurately identifying distant recurrences in breast cancer from the Electronic Health Records EHR is important for both clinical care and secondary analysis. Although multiple applications have been developed for computational phenotyping in breast cancer, distant recurrence identification still relies heavily on manual chart review. In this study, we aim to develop a model that identifies distant recurrences in breast cancer using clinical narratives and structured data from EHR. We apply MetaMap to extract features from clinical narratives and also retrieve structured clinical data from EHR. Using these features, we train a support vector machine model to identify distant recurrences in breast cancer patients. We train the model using 1,396 doubleannotated subjects and validate the model using 599 doubleannotated subjects. In addition, we validate the model on a set of 4,904 singleannotated subjects as a generalization test. We obtained a high area under curve AUC score of 0.92 (SD 0.01) in the crossvalidation using the training dataset, then obtained AUC scores of 0.95 and 0.93 in the heldout test and generalization test using 599 and 4,904 samples respectively. Our model can accurately and efficiently identify distant recurrences in breast cancer by combining features extracted from unstructured clinical narratives and structured clinical data.
Selection of axial dipole from a seed magnetic field in rapidly rotating dynamo models ; In this study, we investigate the preference for a dipolar magnetic structure arising from a seed magnetic field in rapidly rotating spherical shell dynamo models. We set up a realistic model to show the effect of the Lorentz force on the polarity selection. An important result of our study is that the magnetic field acts on the flow well before saturation. Our study suggests that the growth of the magnetic field is not a kinematic effect, as one might think, but rather a dynamic effect. This dynamic effect grows as the field is generated with time and finally brings the dynamo action to saturation. Previous studies show that the Lorentz force affects the flow when the Elsasser number is of order 1, and those studies focused on the saturation by looking at time-averaged quantities. However, in this study, we show a clear effect of the Lorentz force even at an Elsasser number of 0.3-0.4. To show the effect of the Lorentz force, we perform two different simulations, one with a nonlinear model and another with a kinematic model, and show how the magnetic field can change the flow structure and thereby the generated field, a behavior that is not observed in kinematic dynamo models. This study also shows a scale dependent behaviour of the kinetic helicity at two different spectral ranges.
BaRC Backward Reachability Curriculum for Robotic Reinforcement Learning ; Modelfree Reinforcement Learning RL offers an attractive approach to learn control policies for highdimensional systems, but its relatively poor sample complexity often forces training in simulated environments. Even in simulation, goaldirected tasks whose natural reward function is sparse remain intractable for stateoftheart modelfree algorithms for continuous control. The bottleneck in these tasks is the prohibitive amount of exploration required to obtain a learning signal from the initial state of the system. In this work, we leverage physical priors in the form of an approximate system dynamics model to design a curriculum scheme for a modelfree policy optimization algorithm. Our Backward Reachability Curriculum BaRC begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamicallyconsistent manner once the policy optimization algorithm demonstrates sufficient performance. BaRC is general, in that it can accelerate training of any modelfree RL algorithm on a broad class of goaldirected continuous control MDPs. Its curriculum strategy is physically intuitive, easytotune, and allows incorporating physical priors to accelerate training without hindering the performance, flexibility, and applicability of the modelfree RL algorithm. We evaluate our approach on two representative dynamic robotic learning problems and find substantial performance improvement relative to previous curriculum generation techniques and naive exploration strategies.
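At a high level, the curriculum loop can be sketched as below; `train_policy` and `approx_backward_step` are hypothetical placeholders standing in for, respectively, a few iterations of any model-free RL algorithm and an approximate backward-reachability expansion of the start-state set through the simplified dynamics model:

```python
def barc_training(policy, train_policy, approx_backward_step, goal_states,
                  n_rounds=20, success_threshold=0.8):
    """High-level sketch of a backward-reachability curriculum: start training
    from states close to the goal, and enlarge the start-state set backwards
    through an approximate dynamics model once the policy succeeds often
    enough from the current set."""
    starts = list(goal_states)                       # begin with states near the goal
    for _ in range(n_rounds):
        success = train_policy(policy, starts)       # returns a success rate in [0, 1]
        if success >= success_threshold:
            # expand the curriculum backwards in a dynamically consistent way
            starts = starts + [approx_backward_step(s) for s in starts]
    return policy
```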
Two Different Methods for Modelling the Likely Upper Economic Limit of the Future United Kingdom Wind Fleet ; Methods for predicting the likely upper economic limit for the wind fleet in the United Kingdom should be simple to use whilst being able to cope with evolving technologies, costs and grid management strategies. This paper presents two such models, both of which use data on historical wind patterns but apply different approaches to estimating the extent of wind shedding as a function of the size of the wind fleet. It is clear from the models that as the wind fleet increases in size, wind shedding will progressively increase, and as a result the overall economic efficiency of the wind fleet will be reduced. The models provide almost identical predictions of the efficiency loss and suggest that the future upper economic limit of the wind fleet will be mainly determined by the wind fleet Headroom, a concept described in some detail in the paper. The results, which should have general applicability, are presented in graphical form, and should obviate the need for further modelling using the primary data. The paper also discusses the effectiveness of the wind fleet in decarbonising the grid, and the growing competition between wind and solar fleets as sources of electrical energy for the United Kingdom.
Gradient Sensing via Cell Communication ; Experimental evidence lends support to the conjecture that the ability of chains of cells to sense the gradient of an external chemical concentration could rely on celltocell communication. This is the basis for the gradient sensing nature of a specific type of model built on the Local Excitation, Global Inhibition LEGI principle, wherein the strength of the external chemical field is sensed through a comparison between a local exciting species and a global inhibitor that is shared via intracellular reactions in the cell chain. In this study we generalize the nearest neighbor communication mechanism in the abovementioned LEGI model in order to explore how the chemical sensing characteristics depend on the parameterization of the communication itself, cell size, and the radius of influence of neighboring cells. It was found that the radius of influence was less important than the approximating model for communication. Higher order approximations to the communication mechanism were better able to sense an external gradient. However, an analysis of the signal to noise ratio established that higher order models for communication were more prone to noise and thus have a lower signal to noise ratio. The generalization, as well as the tools used in the analysis of the dynamics, can be extended to more heterogeneous networks and can thus prove useful for combining models and observations in the process of understanding chemical gradient sensing via LEGI models with a communication component.
Logsumexp neural networks and posynomial models for convex and loglogconvex data ; We show in this paper that a onelayer feedforward neural network with exponential activation functions in the inner layer and logarithmic activation in the output neuron is a universal approximator of convex functions. Such a network represents a family of scaled logsumexp functions, here named LSET. Under a suitable exponential transformation, the class of LSET functions maps to a family of generalized posynomials GPOST, which we similarly show to be universal approximators for loglogconvex functions. A key feature of an LSET network is that, once it is trained on data, the resulting model is convex in the variables, which makes it readily amenable to efficient design based on convex optimization. Similarly, once a GPOST model is trained on data, it yields a posynomial model that can be efficiently optimized with respect to its variables by using geometric programming GP. The proposed methodology is illustrated by two numerical examples in which models are first constructed from simulation data of two physical processes, namely the level of vibration in a vehicle suspension system and the peak power generated by the combustion of propane, and optimizationbased design is then performed on these models.
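For concreteness, the LSET function class discussed above has the form f(x) = log sum_k exp(w_k . x + b_k); a minimal numpy sketch (parameters assumed already fitted, not the paper's code) is:

```python
import numpy as np

def lset(x, W, b):
    """One-layer log-sum-exp network: exponential hidden units and a
    logarithmic output, f(x) = log sum_k exp(w_k . x + b_k). The result is
    convex in x for any W, b. x has shape (n_samples, d), W shape (K, d)."""
    z = x @ W.T + b                                    # (n_samples, K) affine maps
    zmax = z.max(axis=1, keepdims=True)                # numerically stable log-sum-exp
    return (zmax + np.log(np.exp(z - zmax).sum(axis=1, keepdims=True))).ravel()
```

Substituting x = log u componentwise turns each term exp(w_k . log u + b_k) into a monomial in u, so exp(f(log u)) is a sum of monomials, i.e., a posynomial; this is the LSET-to-GPOST correspondence mentioned above.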
Robust Tracking with Model Mismatch for Fast and Safe Planning an SOS Optimization Approach ; In the pursuit of realtime motion planning, a commonly adopted practice is to compute a trajectory by running a planning algorithm on a simplified, lowdimensional dynamical model, and then employ a feedback tracking controller that tracks such a trajectory by accounting for the full, highdimensional system dynamics. While this strategy of planning with model mismatch generally yields fast computation times, there are no guarantees of dynamic feasibility, which hampers application to safetycritical systems. Building upon recent work that addressed this problem through the lens of HamiltonJacobi HJ reachability, we devise an algorithmic framework whereby one computes, offline, for a pair of planner i.e., lowdimensional and tracking i.e., highdimensional models, a feedback tracking controller and associated tracking bound. This bound is then used as a safety margin when generating motion plans via the lowdimensional model. Specifically, we harness the computational tool of sumofsquares SOS programming to design a bilinear optimization algorithm for the computation of the feedback tracking controller and associated tracking bound. The algorithm is demonstrated via numerical experiments, with an emphasis on investigating the tradeoff between the increased computational scalability afforded by SOS and its intrinsic conservativeness. Collectively, our results enable scaling the appealing strategy of planning with model mismatch to systems that are beyond the reach of HJ analysis, while maintaining safety guarantees.
Wilson loop of the heterotic sigma model and the svmap ; The singlevalued projection sv is a relation between scattering amplitudes of gauge bosons in heterotic and open superstring theories. Recently we have studied sv from the aspect of nonlinear sigma models 1, where the gauge physics of the open string sigma model is described by the Wilson loop representation, while the gauge physics of the heterotic string sigma model is described in the fermionic representation, since a Wilson loop representation has been absent in the heterotic case. There we showed that the sv comes from a sum of six radial orderings of heterotic vertices on the complex plane. In this paper, we propose a Wilson loop representation for the heterotic case and use it to show that sv comes from a sum of two oppositely directed contours of the heterotic sigma model. We first prove that the Wilson loop is the exact propagator of the fermion field that carries the gauge physics of the heterotic string in the fermionic representation. We then construct the action of the heterotic string sigma model in terms of the Wilson loop, by exploring the geometry of the Wilson loop and by generalizing the nonabelian Stokes theorem 2, 3, 4 to the fermionic case. After that, we compute some threeloop and fourloop diagrams as examples, to show how the sv for zeta(2) and zeta(3) arises from a sum of the contours of the Wilson loop. Finally we conjecture that this sum of contours of the Wilson loop is the mechanism behind the sv for general cases.
Bias Reduced Peaks over Threshold Tail Estimation ; In recent years several attempts have been made to extend tail modelling towards the modal part of the data. Frigessi et al. 2002 introduced dynamic mixtures of two components with a weight function pi = pi(x) smoothly connecting the bulk and the tail of the distribution. Recently, Naveau et al. 2016 reviewed this topic, and, continuing on the work by Papastathopoulos and Tawn 2013, proposed a statistical model which is in compliance with extreme value theory and allows for a smooth transition between the modal and tail part. Incorporating second order rates of convergence for distributions of peaks over thresholds POT, Beirlant et al. 2002, 2009 constructed models that can be viewed as special cases from both approaches discussed above. When fitting such second order models it turns out that the bias of the resulting extreme value estimators is significantly reduced compared to the classical tail fits using only the first order tail component based on the Pareto or generalized Pareto fits to peaks over threshold distributions. In this paper we provide novel bias reduced tail fitting techniques, improving upon the classical generalized Pareto GP approximation for POTs using the flexible semiparametric GP modelling introduced in Tencaliec et al. 2018. We also revisit and extend the secondorder refined POT approach started in Beirlant et al. 2009 to all maxdomains of attraction using flexible semiparametric modelling of the second order component. In this way we relax the classical second order regular variation assumptions.
Convex Clustering Model, Theoretical Guarantee and Efficient Algorithm ; Clustering is a fundamental problem in unsupervised learning. Popular methods like Kmeans may suffer from poor performance as they are prone to getting stuck in local minima. Recently, the sumofnorms SON model, also known as the clustering path, was proposed in Pelckmans et al. 2005, Lindsten et al. 2011 and Hocking et al. 2011. The perfect recovery properties of the convex clustering model with uniformly weighted allpairwisedifferences regularization have been proved by Zhu et al. 2014 and Panahi et al. 2017. However, no theoretical guarantee has been established for the general weighted convex clustering model, where better empirical results have been observed. On the numerical optimization side, although algorithms like the alternating direction method of multipliers ADMM and the alternating minimization algorithm AMA have been proposed to solve the convex clustering model Chi and Lange, 2015, it remains very challenging to solve largescale problems. In this paper, we establish sufficient conditions for the perfect recovery guarantee of the general weighted convex clustering model, which include and improve existing theoretical results as special cases. In addition, we develop a semismooth Newton based augmented Lagrangian method for solving largescale convex clustering problems. Extensive numerical experiments on both simulated and real data demonstrate that our algorithm is highly efficient and robust for solving largescale problems. Moreover, the numerical results also show the superior performance and scalability of our algorithm compared to existing firstorder methods. In particular, our algorithm is able to solve a convex clustering problem with 200,000 points in R^3 in about 6 minutes.
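For reference, the weighted sum-of-norms model referred to above is usually written as follows (notation assumed, not copied from the paper):

```latex
\min_{x_1,\dots,x_n}\;\; \frac{1}{2}\sum_{i=1}^{n}\lVert x_i-a_i\rVert^{2}
\;+\;\lambda\sum_{i<j} w_{ij}\,\lVert x_i-x_j\rVert_{p},
```

where the a_i are the observed points, the x_i their cluster representatives, and w_ij ≥ 0 are user-chosen weights; points whose representatives coincide at the optimum are assigned to the same cluster, and uniform weights w_ij = 1 recover the uniformly weighted model with the previously proven recovery guarantees.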
TheoreticalExperimental failure analysis of the cAl0.66Ti0.33NM2 steel system using instrumented nanoindentation and finite element analysis ; A theoreticalexperimental methodology for failure analysis of the cAl0.66Ti0.33N Interface M2 steel coating system is proposed here. The cAl0.66Ti0.33N coating was deposited by the arcPVD technique. For coating modeling, the tractionseparation law and the extended finite element method XFEM were applied, the cohesive zone model was used for interface modeling, and the RambergOsgood law for substrate modeling. Experimental values obtained using the instrumented nanoindentation technique, the scratch test and the tensile stress test were introduced into the model. By means of nanoindentation, the elastic modulus of the coating, the fracture energy release rate and the nanohardness were obtained. Normal and shear stress values of the interface were obtained with the scratch test at the adhesive and cohesive critical loads. Vickers indentation was used to generate cracking patterns in the cAl0.66Ti0.33N Interface M2 steel coating system. Radial and lateral cracks were generated and analyzed after transversal FIB cuts of the fracture zones. A finite element analysis was carried out to understand the relationship between the loaddisplacement curve and mechanical failure in the system, associating the popin with nucleation, crack growth and the cracking pattern. This work presents a theoreticalexperimental methodology for failure analysis of hard coating and monolithic body systems, allowing the fracture toughness of the coating material to be calculated and cracking patterns caused by contact mechanics to be modeled.
HyperProcess Model A ZeroShot Learning algorithm for Regression Problems based on Shape Analysis ; Zeroshot learning ZSL can be defined as correctly solving a task for which no training data are available, based on previously acquired knowledge from different, but related, tasks. So far, this area has mostly drawn the attention of the computer vision community, where a new unseen image needs to be correctly classified, assuming the target class was not used in the training procedure. Apart from image classification, only a couple of generic methods have been proposed that are applicable to both classification and regression. These learn the relation among model coefficients so new ones can be predicted according to provided conditions. To our knowledge, no methods exist that are applicable specifically to regression and take advantage of such a setting. Therefore, the present work proposes a novel algorithm for regression problems that uses data drawn from trained models, instead of model coefficients. In this case, a shape analysis of the data is performed to create a statistical shape model and generate new shapes to train new models. The proposed algorithm is tested in a theoretical setting using the beta distribution, where the main problem is to estimate a function that predicts curves, based on already learned different, but related, ones.
Kepler Data Validation II Transit Model Fitting and MultiplePlanet Search ; This paper discusses the transit model fitting and multipleplanet search algorithms and performance of the Kepler Science Data Processing Pipeline, developed by the Kepler Science Operations Center SOC. Threshold Crossing Events TCEs, which are transit candidate events, are generated by the Transiting Planet Search TPS component of the pipeline and subsequently processed in the Data Validation DV component. The transit model is used in DV to fit TCEs in order to characterize planetary candidates and to derive parameters that are used in various diagnostic tests to classify them. After the signature associated with the TCE is removed from the light curve of the target star, the residual light curve goes through TPS again to search for additional TCEs. The iterative process of transit model fitting and multipleplanet search continues until no TCE is generated from the residual light curve or an upper limit is reached. The transit model fitting and multipleplanet search performance of the final release 9.3, January 2016 of the pipeline is demonstrated with the results of the processing of 4 years 17 quarters of flight data from the primary Kepler Mission. The transit model fitting results are accessible from the NASA Exoplanet Archive. The final version of the SOC codebase is available through GitHub.