Combining Heterogeneous Spatial Datasets with Process-based Spatial Fusion Models: A Unifying Framework ; In modern spatial statistics, the structure of data that is collected has become more heterogeneous. Depending on the type of spatial data, different modeling strategies are used: for example, a kriging approach for geostatistical data; a Gaussian Markov random field model for lattice data; or a log-Gaussian Cox process for point-pattern data. Despite these different modeling choices, the nature of the underlying scientific data-generating latent processes is often the same and can be represented by continuous spatial surfaces. In this paper, we introduce a unifying framework for process-based multivariate spatial fusion models. The framework can jointly analyze all three aforementioned types of spatial data, or any combination thereof. Moreover, the framework accommodates different conditional distributions for geostatistical and lattice data. We show that some established approaches, such as linear models of coregionalization, can be viewed as special cases of our proposed framework. We offer flexible and scalable implementations in R using Stan and INLA. Simulation studies confirm that the predictive performance of latent processes improves as we move from univariate spatial models to multivariate spatial fusion models. The introduced framework is illustrated using a cross-sectional study linked with a national cohort dataset in Switzerland, in which we examine differences in underlying spatial risk patterns between respiratory disease and lung cancer.
Telephonetic: Making Neural Language Models Robust to ASR and Semantic Noise ; Speech processing systems rely on robust feature extraction to handle phonetic and semantic variations found in natural language. While techniques exist for desensitizing features to common noise patterns produced by Speech-to-Text (STT) and Text-to-Speech (TTS) systems, the question remains how to best leverage state-of-the-art language models (which capture rich semantic features, but are trained only on written text) on inputs with ASR errors. In this paper, we present Telephonetic, a data augmentation framework that helps robustify language model features to ASR-corrupted inputs. To capture phonetic alterations, we employ a character-level language model trained using probabilistic masking. Phonetic augmentations are generated in two stages: a TTS encoder (Tacotron 2, WaveGlow) and an STT decoder (DeepSpeech). Similarly, semantic perturbations are produced by sampling from nearby words in an embedding space, which is computed using the BERT language model. Words are selected for augmentation according to a hierarchical grammar sampling strategy. Telephonetic is evaluated on the Penn Treebank (PTB) corpus, and demonstrates its effectiveness as a bootstrapping technique for transferring neural language models to the speech domain. Notably, our language model achieves a test perplexity of 37.49 on PTB, which to our knowledge is state-of-the-art among models trained only on PTB.
Extending Attack Graphs to Represent Cyber-Attacks in Communication Protocols and Modern IT Networks ; An attack graph is a method used to enumerate the possible paths that an attacker can execute in the organization's network. MulVAL is a known open-source framework used to automatically generate attack graphs. MulVAL's default modeling has two main shortcomings. First, it lacks the representation of network protocol vulnerabilities, and thus it cannot be used to model common network attacks such as ARP poisoning, DNS spoofing, and SYN flooding. Second, it does not support advanced types of communication such as wireless and bus communication, and thus it cannot be used to model cyber-attacks on networks that include IoT devices or industrial components. In this paper, we present an extended network security model for MulVAL that (1) considers the physical network topology, (2) supports short-range communication protocols (e.g., Bluetooth), (3) models vulnerabilities in the design of network protocols, and (4) models specific industrial communication architectures. Using the proposed extensions, we were able to model multiple attack techniques including spoofing, man-in-the-middle, and denial of service, as well as attacks on advanced types of communication. We demonstrate the proposed model on a testbed implementing a simplified network architecture comprising both IT and industrial components.
Modeling Multi-Vehicle Interaction Scenarios Using Gaussian Random Field ; Autonomous vehicles are expected to navigate in complex traffic scenarios with multiple surrounding vehicles. The correlations between road users vary over time, and their degree could, in theory, be infinitely large, thus posing a great challenge in modeling and predicting the driving environment. In this paper, we propose a method to model multi-vehicle interactions using a stochastic vector field model and apply nonparametric Bayesian learning to extract the underlying motion patterns from a large quantity of naturalistic traffic data. We then use this model to reproduce the high-dimensional driving scenarios in a finitely tractable form. We use a Gaussian process to model multi-vehicle motion, and a Dirichlet process to assign each observation to a specific scenario. We verify the effectiveness of the proposed method on highway and intersection datasets from the NGSIM project, in which complex multi-vehicle interactions are prevalent. The results show that the proposed method can capture motion patterns from both settings without imposing heroic priors, and hence demonstrate the potential application for a wide array of traffic situations. The proposed modeling method could enable simulation platforms and other testing methods designed for autonomous vehicle evaluation to easily model and generate traffic scenarios emulating large-scale driving data.
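As a rough illustration of the nonparametric assignment step described above, the sketch below clusters hypothetical interaction features with a Dirichlet-process mixture; it is a generic stand-in (using scikit-learn's BayesianGaussianMixture) rather than the authors' implementation, and the feature construction is an assumption.

```python
# Illustrative sketch (not the authors' code): cluster multi-vehicle interaction
# snippets into motion patterns with a Dirichlet-process mixture prior.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical features: each row is a flattened velocity-field descriptor
# for one observed interaction snippet.
snippets = rng.normal(size=(500, 16))

dpgmm = BayesianGaussianMixture(
    n_components=20,                      # truncation level for the DP
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
)
labels = dpgmm.fit_predict(snippets)      # scenario assignment per snippet
print("patterns actually used:", np.unique(labels).size)
```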
Data-Driven Distributionally Robust Appointment Scheduling over Wasserstein Balls ; We study a single-server appointment scheduling problem with a fixed sequence of appointments, for which we must determine the arrival time for each appointment. We specifically examine two stochastic models. In the first model, we assume that all appointees show up at the scheduled arrival times yet their service durations are random. In the second model, we assume that appointees have random no-show behaviors and their service durations are random given that they show up at the appointments. In both models, we assume that the probability distribution of the uncertain parameters is unknown but can be partially observed via a set of historical data, which we view as independent samples drawn from the unknown distribution. In view of the distributional ambiguity, we propose a data-driven distributionally robust optimization (DRO) approach to determine an appointment schedule such that the worst-case (i.e., maximum) expectation of the system total cost is minimized. A key feature of this approach is that the optimal value and the set of optimal schedules thus obtained provably converge to those of the true model, i.e., the stochastic appointment scheduling model with regard to the true probability distribution of the uncertain parameters. While our DRO models are computationally intractable in general, we reformulate them as copositive programs, which are amenable to tractable semidefinite programming approximations of high quality. Furthermore, under some mild conditions, we recast these models as polynomial-sized linear programs. Through an extensive numerical study, we demonstrate that our approach yields better out-of-sample performance than two state-of-the-art methods.
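For readers unfamiliar with the setup, the generic form of a data-driven Wasserstein DRO objective is sketched below; the symbols (schedule x, empirical distribution, radius epsilon, cost c) are generic, and the paper's exact cost function and ambiguity set may differ.

```latex
% Generic data-driven Wasserstein DRO objective (not the paper's exact model):
% x collects the scheduled arrival times, \hat{P}_N is the empirical distribution
% of the N historical samples, and c(x,\xi) the total system cost under realization \xi.
\min_{x \in \mathcal{X}} \;\; \sup_{Q \,:\, W(Q,\, \hat{P}_N) \le \epsilon} \;
\mathbb{E}_{\xi \sim Q}\big[\, c(x, \xi) \,\big]
```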
Brans-Dicke Scalar Field Cosmological Model in Lyra's Geometry ; In this paper, we have developed a new cosmological model in Einstein's modified gravity theory using two types of modification: (i) a geometrical modification, in which we have used Lyra's geometry on the left-hand side of the Einstein field equations (EFE), and (ii) a modification of the gravity energy-momentum tensor on the right-hand side of the EFE, as per the Brans-Dicke (BD) model. With these two modifications, we have investigated a spatially homogeneous and anisotropic Bianchi type-I cosmological model of Einstein's Brans-Dicke theory of gravitation in Lyra geometry. The model represents a universe that is accelerating at present and was decelerating in the past, and is considered to be dominated by dark energy. The gauge function beta and the BD scalar field phi are considered as candidates for the dark energy and are responsible for the present acceleration. The derived model agrees well with the recent supernovae (SN Ia) observations. We have set the BD coupling constant omega to be greater than 40,000, in view of solar system tests and evidence. We have discussed the various physical and geometrical properties of the models and have compared them with the corresponding relativistic models.
Real-time cosmology with SKA ; In this work, we investigate what role the redshift drift data of the Square Kilometre Array (SKA) will play in cosmological parameter estimation in the future. To test the constraint capability of the redshift drift data of SKA alone, the ΛCDM model is chosen as a reference model. We find that using the SKA1 mock data, the ΛCDM model can be loosely constrained, while the model can be well constrained when the SKA2 mock data are used. When the mock data of SKA are combined with the data of the European Extremely Large Telescope (E-ELT), the constraints can be significantly improved, almost as good as the data combination of the type Ia supernovae observation (SN), the cosmic microwave background observation (CMB), and the baryon acoustic oscillations observation (BAO). Furthermore, we explore the impact of the redshift drift data of SKA on the basis of SN+CMB+BAO+E-ELT in the ΛCDM model, the wCDM model, the CPL model, and the HDE model. We find that the redshift drift measurement of SKA could help to significantly improve the constraints on dark energy and could break the degeneracy between the cosmological parameters. Therefore, we conclude that the redshift-drift observation of SKA would provide a good improvement in cosmological parameter estimation in the future and has the enormous potential to be one of the most competitive cosmological probes in constraining dark energy.
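As background on the observable involved, the standard redshift-drift (Sandage-Loeb) relation for a source at redshift z is reproduced below; this is the textbook expression, not a result specific to this paper.

```latex
% Standard redshift-drift (Sandage-Loeb) signal accumulated over an observer
% time span \Delta t_o, and the corresponding spectroscopic velocity shift:
\Delta z \simeq \left[ (1+z)\,H_0 - H(z) \right] \Delta t_o ,
\qquad
\frac{\Delta v}{c} = \frac{\Delta z}{1+z} .
```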
Fusing Physics-based and Deep Learning Models for Prognostics ; Physics-based and data-driven models for remaining useful lifetime (RUL) prediction typically suffer from two major challenges that limit their applicability to complex real-world domains: (1) the incompleteness of physics-based models and (2) the limited representativeness of the training dataset for data-driven models. Combining the advantages of these two directions while overcoming some of their limitations, we propose a novel hybrid framework for fusing the information from physics-based performance models with deep learning algorithms for prognostics of complex safety-critical systems under real-world scenarios. In the proposed framework, we use physics-based performance models to infer unobservable model parameters related to a system's component health by solving a calibration problem. These parameters are subsequently combined with sensor readings and used as input to a deep neural network to generate a data-driven prognostics model with physics-augmented features. The performance of the hybrid framework is evaluated on an extensive case study comprising run-to-failure degradation trajectories from a fleet of nine turbofan engines under real flight conditions. The experimental results show that the hybrid framework outperforms purely data-driven approaches by extending the prediction horizon by nearly 127%. Furthermore, it requires less training data and is less sensitive to the limited representativeness of the dataset compared to purely data-driven approaches.
Segmentation of Satellite Imagery using U-Net Models for Land Cover Classification ; The focus of this paper is using a convolutional machine learning model with a modified U-Net structure for creating land cover classification mappings based on satellite imagery. The aim of the research is to train and test convolutional models for automatic land cover mapping and to assess their usability in increasing land cover mapping accuracy and change detection. To solve these tasks, the authors prepared a dataset and trained machine learning models for land cover classification and semantic segmentation from satellite images. The results were analysed on three different land classification levels. The BigEarthNet satellite image archive was selected for the research as one of two main datasets. This novel and recent dataset was published in 2019 and includes Sentinel-2 satellite photos from 10 European countries made in 2017 and 2018. As a second dataset, the authors composed an original set containing a Sentinel-2 image and a CORINE land cover map of Estonia. The developed classification model shows a high overall F1 score of 0.749 on multi-class land cover classification with 43 possible image labels. The model also highlights noisy data in the BigEarthNet dataset, where images seem to have incorrect labels. The segmentation models offer a solution for generating automatic land cover mappings based on Sentinel-2 satellite images and show a high IoU score for land cover classes such as forests, inland waters and arable land. The models show potential for increasing the accuracy of existing land classification maps and for land cover change detection.
Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology ; Machine learning is an established and frequently used technique in industry and academia, but a standard process model to improve the success and efficiency of machine learning applications is still missing. Project organizations and machine learning practitioners need guidance throughout the life cycle of a machine learning application to meet business expectations. We therefore propose a process model for the development of machine learning applications that covers six phases, from defining the scope to maintaining the deployed machine learning application. The first phase combines business and data understanding, as data availability oftentimes affects the feasibility of the project. The sixth phase covers state-of-the-art approaches for monitoring and maintenance of a machine learning application, as the risk of model degradation in a changing environment is imminent. With each task of the process, we propose a quality assurance methodology that is suitable to address challenges in machine learning development that we identify in the form of risks. The methodology is drawn from practical experience and scientific literature, and has proven to be general and stable. The process model expands on CRISP-DM, a data mining process model that enjoys strong industry support but fails to address machine learning specific tasks. Our work proposes an industry- and application-neutral process model tailored for machine learning applications with a focus on technical tasks for quality assurance.
BayesFlow: Learning complex stochastic models with invertible neural networks ; Estimating the parameters of mathematical models is a common problem in almost all branches of science. However, this problem can prove notably difficult when processes and model descriptions become increasingly complex and an explicit likelihood function is not available. With this work, we propose a novel method for globally amortized Bayesian inference based on invertible neural networks, which we call BayesFlow. The method uses simulation to learn a global estimator for the probabilistic mapping from observed data to underlying model parameters. A neural network pre-trained in this way can then, without additional training or optimization, infer full posteriors on arbitrarily many real datasets involving the same model family. In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics. Learning summary statistics from data makes the method applicable to modeling scenarios where standard inference techniques with hand-crafted summary statistics fail. We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science and ecology. We argue that BayesFlow provides a general framework for building amortized Bayesian parameter estimation machines for any forward model from which data can be simulated.
Superdeterministic hidden-variables models I: nonequilibrium and signalling ; This is the first of two papers which attempt to comprehensively analyse superdeterministic hidden-variables models of Bell correlations. We first give an overview of superdeterminism and discuss various criticisms of it raised in the literature. We argue that the most common criticism, the violation of 'free will', is incorrect. We take up Bell's intuitive criticism that these models are 'conspiratorial'. To develop this further, we introduce nonequilibrium extensions of superdeterministic models. We show that the measurement statistics of these extended models depend on the physical system used to determine the measurement settings. This suggests a fine-tuning in order to eliminate this dependence from experimental observation. We also study the signalling properties of these extended models. We show that, although they generally violate the formal no-signalling constraints, this violation cannot be equated to an actual signal. We therefore suggest that the so-called no-signalling constraints be more appropriately named the marginal-independence constraints. We discuss the mechanism by which marginal-independence is violated in superdeterministic models. Lastly, we consider a hypothetical scenario where two experimenters use the apparent signalling of a superdeterministic model to communicate with each other. This scenario suggests another conspiratorial feature peculiar to superdeterminism. These suggestions are quantitatively developed in the second paper.
Removing Backdoor-Based Watermarks in Neural Networks with Limited Data ; Deep neural networks have been widely applied and achieved great success in various fields. As training deep models usually consumes massive data and computational resources, trading trained deep models is highly demanded and lucrative nowadays. Unfortunately, naive trading schemes typically involve potential risks related to copyright and trustworthiness issues; e.g., a sold model can be illegally resold to others without further authorization to reap huge profits. To tackle this problem, various watermarking techniques have been proposed to protect model intellectual property, amongst which backdoor-based watermarking is the most commonly used one. However, the robustness of these watermarking approaches is not well evaluated under realistic settings, such as limited in-distribution data availability and agnosticism of watermarking patterns. In this paper, we benchmark the robustness of watermarking, and propose a novel backdoor-based watermark removal framework using limited data, dubbed WILD. The proposed WILD removes the watermarks of deep models with only a small portion of training data, and the output model can perform the same as models trained from scratch without watermarks injected. In particular, a novel data augmentation method is utilized to mimic the behavior of watermark triggers. Combined with distribution alignment between the normal and perturbed (e.g., occluded) data in the feature space, our approach generalizes well to all typical types of trigger contents. The experimental results demonstrate that our approach can effectively remove the watermarks without compromising the deep model's performance on the original task with only limited access to the training data.
Universal Battery Performance and Degradation Model for Electric Aircraft ; Development of Urban Air Mobility (UAM) concepts has been primarily focused on electric vertical takeoff and landing aircraft (eVTOLs): small aircraft which can land and take off vertically, and which are powered by rechargeable (typically lithium-ion) batteries. Design, analysis, and operation of eVTOLs requires fast and accurate prediction of Li-ion battery performance throughout the lifetime of the battery. eVTOL battery performance modeling must be particularly accurate at high discharge rates to ensure accurate simulation of the high-power takeoff and landing portions of the flight. In this work, we generate a battery performance and thermal behavior dataset specific to eVTOL duty cycles. We use this dataset to develop a battery performance and degradation model, Cellfit, which employs physics-informed machine learning in the form of Universal Ordinary Differential Equations (UODEs) combined with an electrochemical cell model and degradation models which include solid electrolyte interphase (SEI) growth, lithium plating, and charge loss. We show that Cellfit with UODEs is better able to predict battery degradation than a mechanistic battery degradation model. We show that the improved accuracy of the degradation model improves the accuracy of the performance model. We believe that Cellfit will prove to be a valuable tool for eVTOL designers.
Improving the Accuracy of Global Forecasting Models using Time Series Data Augmentation ; Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFM), have recently shown promising results in forecasting competitions and real-world applications, outperforming many state-of-the-art univariate forecasting techniques. In most cases, GFMs are implemented using deep neural networks, and in particular Recurrent Neural Networks (RNN), which require a sufficient amount of time series to estimate their numerous model parameters. However, many time series databases have only a limited number of time series. In this study, we propose a novel, data-augmentation-based forecasting framework that is capable of improving the baseline accuracy of GFM models in less data-abundant settings. We use three time series augmentation techniques, GRATIS, moving block bootstrap (MBB), and dynamic time warping barycentric averaging (DBA), to synthetically generate a collection of time series. The knowledge acquired from these augmented time series is then transferred to the original dataset using two different approaches: the pooled approach and the transfer learning approach. When building GFMs, in the pooled approach we train a model on the augmented time series alongside the original time series dataset, whereas in the transfer learning approach we adapt a pre-trained model to the new dataset. In our evaluation on competition and real-world time series datasets, our proposed variants can significantly improve the baseline accuracy of GFM models and outperform state-of-the-art univariate forecasting methods.
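To make the augmentation step concrete, here is a minimal sketch of a moving block bootstrap, one of the three techniques listed above; it is a generic illustration (applied directly to a toy series rather than to decomposed residuals) and not the paper's implementation.

```python
# Minimal moving block bootstrap (MBB) sketch for time series augmentation.
import numpy as np

def moving_block_bootstrap(series, block_size, rng=None):
    """Generate one synthetic series by resampling overlapping blocks."""
    rng = rng or np.random.default_rng()
    n = len(series)
    blocks = [series[i:i + block_size] for i in range(n - block_size + 1)]
    n_blocks = int(np.ceil(n / block_size))
    picks = rng.integers(0, len(blocks), size=n_blocks)
    return np.concatenate([blocks[i] for i in picks])[:n]

rng = np.random.default_rng(42)
original = np.cumsum(rng.normal(size=120))           # a toy non-stationary series
augmented = [moving_block_bootstrap(original, block_size=12, rng=rng)
             for _ in range(10)]                     # synthetic pool for GFM training
```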
Adaptable Multi-Domain Language Model for Transformer ASR ; We propose an adapter-based multi-domain Transformer-based language model (LM) for Transformer ASR. The model consists of a big-size common LM and small-size adapters. The model can perform multi-domain adaptation with only the small-size adapters and their related layers. The proposed model can reuse the fully fine-tuned LM, which is fine-tuned using all layers of an original model. The proposed LM can be expanded to new domains by adding about 2% of parameters for a first domain and 13% of parameters from the second domain onward. The proposed model is also effective in reducing the model maintenance cost because it is possible to omit the costly and time-consuming common LM pre-training process. Using the proposed adapter-based approach, we observed that a general LM with an adapter can outperform a dedicated music-domain LM in terms of word error rate (WER).
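The sketch below shows a generic bottleneck-adapter layer of the kind typically inserted into Transformer blocks for this sort of domain adaptation; the layer sizes, activation, and placement are assumptions for illustration, not the paper's exact architecture.

```python
# Generic bottleneck adapter with a residual connection (illustrative only).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)   # project down
        self.up = nn.Linear(bottleneck, d_model)     # project back up
        self.act = nn.ReLU()

    def forward(self, x):
        # Residual connection keeps the frozen common LM's representation intact.
        return x + self.up(self.act(self.down(x)))

hidden = torch.randn(2, 10, 512)        # (batch, tokens, d_model) from the common LM
adapted = Adapter(d_model=512)(hidden)  # only adapter weights would be trained per domain
```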
Restructuring, Pruning, and Adjustment of Deep Models for Parallel Distributed Inference ; Using multiple nodes and parallel computing algorithms has become a principal tool to improve the training and execution times of deep neural networks, as well as the effective collective intelligence of sensor networks. In this paper, we consider the parallel implementation of an already-trained deep model on multiple processing nodes (a.k.a. workers), where the deep model is divided into several parallel sub-models, each of which is executed by a worker. Since latency due to synchronization and data transfer among workers negatively impacts the performance of the parallel implementation, it is desirable to have minimum interdependency among parallel sub-models. To achieve this goal, we propose to rearrange the neurons in the neural network and partition them (without changing the general topology of the neural network), such that the interdependency among sub-models is minimized under the computation and communication constraints of the workers. We propose RePurpose, a layer-wise model restructuring and pruning technique that guarantees the performance of the overall parallelized model. To efficiently apply RePurpose, we propose an approach based on ℓ0 optimization and the Munkres assignment algorithm. We show that, compared to the existing methods, RePurpose significantly improves the efficiency of distributed inference via parallel implementation, both in terms of communication and computational complexity.
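For readers unfamiliar with the assignment step mentioned above, the toy sketch below uses the Munkres (Hungarian) algorithm, via SciPy, to place neurons so that a hypothetical cross-worker dependency cost is minimized; the cost matrix is invented for illustration, and this is not the actual RePurpose procedure.

```python
# Munkres / Hungarian assignment as an illustrative neuron-placement step.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
# cost[i, j]: assumed communication/computation cost of placing neuron i in slot j
cost = rng.random((8, 8))
rows, cols = linear_sum_assignment(cost)
print("assignment:", list(zip(rows, cols)))
print("total cost:", cost[rows, cols].sum())
```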
DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models ; With machine learning models being increasingly applied to various decision-making scenarios, people have spent growing efforts to make machine learning models more transparent and explainable. Among various explanation techniques, counterfactual explanations have the advantages of being human-friendly and actionable: a counterfactual explanation tells the user how to gain the desired prediction with minimal changes to the input. Besides, counterfactual explanations can also serve as efficient probes of the models' decisions. In this work, we exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models. We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets, supporting users ranging from decision subjects to model developers. DECE supports exploratory analysis of model decisions by combining the strengths of counterfactual explanations at the instance and subgroup levels. We also introduce a set of interactions that enable users to customize the generation of counterfactual explanations to find more actionable ones that suit their needs. Through three use cases and an expert interview, we demonstrate the effectiveness of DECE in supporting decision exploration tasks and instance explanations.
Adversarial Eigen Attack on Black-Box Models ; Black-box adversarial attack has attracted a lot of research interest for its practical use in AI safety. Compared with the white-box attack, a black-box setting is more difficult because less information related to the attacked model is available, and there is an additional constraint on the query budget. A general way to improve the attack efficiency is to draw support from a pre-trained transferable white-box model. In this paper, we propose a novel setting of transferable black-box attack: attackers may use external information from a pre-trained model with available network parameters; however, different from previous studies, no additional training data is permitted to further change or tune the pre-trained model. To this end, we further propose a new algorithm, EigenBA, to tackle this problem. Our method aims to explore more gradient information of the black-box model and promote the attack efficiency, while keeping the perturbation to the original attacked image small, by leveraging the Jacobian matrix of the pre-trained white-box model. We show the optimal perturbations are closely related to the right singular vectors of the Jacobian matrix. Further experiments on ImageNet and CIFAR-10 show that even an unlearnable pre-trained white-box model can significantly boost the efficiency of the black-box attack, and our proposed method can further improve the attack efficiency.
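As a toy illustration of the linear-algebra observation above, the snippet below takes the SVD of an assumed white-box feature Jacobian and uses the leading right singular vector as the perturbation direction; it is a conceptual sketch, not the full EigenBA query procedure.

```python
# The leading right singular vector of the feature Jacobian is the input
# direction amplified most in feature space (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(64, 3 * 32 * 32))      # assumed Jacobian: d(features)/d(input)
U, S, Vt = np.linalg.svd(J, full_matrices=False)
direction = Vt[0]                            # top right singular vector

x = rng.random(3 * 32 * 32)                  # toy flattened image in [0, 1]
epsilon = 0.01
x_adv = np.clip(x + epsilon * direction / np.linalg.norm(direction), 0.0, 1.0)
```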
Head-to-nerve analysis of electromechanical impairments of diffuse axonal injury ; The aim was to investigate the mechanical and functional failure of diffuse axonal injury (DAI) in nerve bundles following frontal head impacts, by finite element simulations. Anatomical changes following traumatic brain injury are simulated at the macroscale by using a 3D head model. Frontal head impacts at speeds of 2.5-7.5 m/s induce mild-to-moderate DAI in the white matter of the brain. Investigation of the changes in induced electromechanical responses at the cellular level is carried out in two scaled nerve bundle models, one with myelinated nerve fibres, the other with unmyelinated nerve fibres. DAI occurrence is simulated by using a real-time fully coupled electromechanical framework, which combines a modulated threshold for spiking activation and independent alteration of the electrical properties for each three-layer fibre in the nerve bundle models. The magnitudes of simulated strains in the white matter of the brain model are used to determine the displacement boundary conditions in elongation simulations using the 3D nerve bundle models. At high impact speed, mechanical failure occurs at lower strain values in large unmyelinated bundles than in myelinated bundles or small unmyelinated bundles; signal propagation continues in large myelinated bundles during and after loading, although there is a large shift in baseline voltage during loading; a linear relationship is observed between the generated plastic strain in the nerve bundle models and the impact speed and nominal strains of the head model. The myelin layer protects the fibre from mechanical damage, preserving its functionalities.
Classification of Diabetic Retinopathy Using Unlabeled Data and Knowledge Distillation ; Knowledge distillation allows transferring knowledge from a pre-trained model to another. However, it suffers from limitations and constraints related to the two models needing to be architecturally similar. Knowledge distillation addresses some of the shortcomings associated with transfer learning by generalizing a complex model to a lighter model. However, some parts of the knowledge may not be distilled by knowledge distillation sufficiently. In this paper, a novel knowledge distillation approach using transfer learning is proposed. The proposed method transfers the entire knowledge of a model to a new smaller one. To accomplish this, unlabeled data are used in an unsupervised manner to transfer the maximum amount of knowledge to the new slimmer model. The proposed method can be beneficial in medical image analysis, where labeled data are typically scarce. The proposed approach is evaluated in the context of classification of images for diagnosing Diabetic Retinopathy on two publicly available datasets, Messidor and EyePACS. Simulation results demonstrate that the approach is effective in transferring knowledge from a complex model to a lighter one. Furthermore, experimental results illustrate that the performance of different small models is improved significantly using unlabeled data and knowledge distillation.
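A minimal sketch of the kind of distillation objective that can be computed on unlabeled images is given below: the student matches the teacher's softened predictions, so no ground-truth labels are required. The models, temperature, and class count are placeholders, not the paper's configuration.

```python
# Distillation loss on unlabeled data: KL divergence between softened
# teacher and student predictions (illustrative sketch).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    t = temperature
    soft_targets = F.softmax(teacher_logits / t, dim=1)
    log_probs = F.log_softmax(student_logits / t, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (t * t)

teacher_logits = torch.randn(16, 5)                       # e.g., 5 severity grades
student_logits = torch.randn(16, 5, requires_grad=True)   # stand-in for student output
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```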
Benchmarking off-the-shelf statistical shape modeling tools in clinical applications ; Statistical shape modeling (SSM) is widely used in biology and medicine as a new generation of morphometric approaches for the quantitative analysis of anatomical shapes. Technological advancements in in vivo imaging have led to the development of open-source computational tools that automate the modeling of anatomical shapes and their population-level variability. However, little work has been done on the evaluation and validation of such tools in clinical applications that rely on morphometric quantifications (e.g., implant design and lesion screening). Here, we systematically assess the outcome of widely used, state-of-the-art SSM tools, namely ShapeWorks, Deformetrica, and SPHARM-PDM. We use both quantitative and qualitative metrics to evaluate shape models from different tools. We propose validation frameworks for anatomical landmark/measurement inference and lesion screening. We also present a lesion screening method to objectively characterize subtle abnormal shape changes with respect to learned population-level statistics of controls. Results demonstrate that SSM tools display different levels of consistency, where ShapeWorks and Deformetrica models are more consistent compared to models from SPHARM-PDM, due to the groupwise approach of estimating surface correspondences. Furthermore, ShapeWorks and Deformetrica shape models are found to capture clinically relevant population-level variability compared to SPHARM-PDM models.
Machine Intelligence for Outcome Predictions of Trauma Patients During Emergency Department Care ; Trauma mortality results from a multitude of nonlinear, dependent risk factors including patient demographics, injury characteristics, medical care provided, and characteristics of medical facilities; yet traditional approaches have attempted to capture these relationships using rigid regression models. We hypothesized that a transfer-learning-based machine learning algorithm could deeply understand a trauma patient's condition and accurately identify individuals at high risk for mortality without relying on restrictive regression model criteria. Anonymous patient visit data were obtained from years 2007-2014 of the National Trauma Data Bank. Patients with incomplete vitals, unknown outcome, or missing demographics data were excluded. All patient visits occurred in U.S. hospitals, and of the 2,007,485 encounters that were retrospectively examined, 8,198 resulted in mortality (0.4%). The machine intelligence model was evaluated on its sensitivity, specificity, positive and negative predictive value, and Matthews Correlation Coefficient. Our model achieved similar performance to age-specific comparison models and generalized well when applied to all ages simultaneously. While testing for confounding factors, we discovered that excluding fall-related injuries boosted performance for adult trauma patients; however, it reduced performance for children. The machine intelligence model described here demonstrates similar performance to contemporary machine intelligence models without requiring restrictive regression model criteria or extensive medical expertise.
Sketch2CAD: Sequential CAD Modeling by Sketching in Context ; We present a sketch-based CAD modeling system, where users create objects incrementally by sketching the desired shape edits, which our system automatically translates to CAD operations. Our approach is motivated by the close similarities between the steps industrial designers follow to draw 3D shapes and the operations CAD modeling systems offer to create similar shapes. To overcome the strong ambiguity of parsing 2D sketches, we observe that in a sketching sequence, each step makes sense and can be interpreted in the context of what has been drawn before. In our system, this context corresponds to a partial CAD model, inferred in the previous steps, which we feed along with the input sketch to a deep neural network in charge of interpreting how the model should be modified by that sketch. Our deep network architecture then recognizes the intended CAD operation and segments the sketch accordingly, such that a subsequent optimization estimates the parameters of the operation that best fit the segmented sketch strokes. Since there exist no datasets of paired sketching and CAD modeling sequences, we train our system by generating synthetic sequences of CAD operations that we render as line drawings. We present a proof-of-concept realization of our algorithm supporting four frequently used CAD operations. Using our system, participants are able to quickly model a large and diverse set of objects, demonstrating Sketch2CAD to be an alternative way of interacting with current CAD modeling systems.
Necessary and sufficient condition for hysteresis in the mathematical model of the cell type regulation of Bacillus subtilis ; The key to a robust life system is to ensure that each cell population is maintained in an appropriate state. In this work, a mathematical model was used to investigate the control of the switching between the migrating and non-migrating states of the Bacillus subtilis cell population. In this case, motile cells and matrix producers were the predominant cell types in the migrating and non-migrating states, respectively, and could be suitably controlled according to the environmental conditions and cell density information. A minimal smooth model consisting of four ordinary differential equations was used as the mathematical model to control the B. subtilis cell types. Furthermore, the necessary and sufficient conditions for hysteresis, which pertains to the change in the pheromone concentration, were clarified. In general, the hysteretic control of the cell state enables stable switching between the migrating and growth states of the B. subtilis cell population, thereby facilitating the biofilm life cycle. The results of corresponding culture experiments were examined, and the obtained corollaries were used to develop a model that takes environmental conditions, especially the external pH, as input. On this basis, the environmental conditions were incorporated in a simulation model for the cell type control. In combination with a mathematical model of the cell population dynamics, a prediction model for colony growth involving multiple cell states, including concentric circular colonies of B. subtilis, could be established.
Data-driven value-at-risk forecasting using a SVR-GARCH-KDE hybrid ; Appropriate risk management is crucial to ensure the competitiveness of financial institutions and the stability of the economy. One widely used financial risk measure is Value-at-Risk (VaR). VaR estimates based on linear and parametric models can lead to biased results or even underestimation of risk due to time-varying volatility, skewness and leptokurtosis of financial return series. The paper proposes a nonlinear and nonparametric framework to forecast VaR that is motivated by overcoming the disadvantages of parametric models with a purely data-driven approach. Mean and volatility are modeled via support vector regression (SVR), where the volatility model is motivated by the standard generalized autoregressive conditional heteroscedasticity (GARCH) formulation. Based on this, VaR is derived by applying kernel density estimation (KDE). This approach allows for flexible tail shapes of the profit and loss distribution, adapts to a wide class of tail events and is able to capture complex structures regarding mean and volatility. The SVR-GARCH-KDE hybrid is compared to standard, exponential and threshold GARCH models coupled with different error distributions. To examine the performance in different markets, one-day-ahead and ten-days-ahead forecasts are produced for different financial indices. Model evaluation using a likelihood-ratio-based test framework for interval forecasts and a test for superior predictive ability indicates that the SVR-GARCH-KDE hybrid performs competitively with benchmark models and significantly reduces potential losses, especially for ten-days-ahead forecasts. Especially models that are coupled with a normal distribution are systematically outperformed.
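A rough sketch of the modeling pipeline on toy data is given below: an SVR maps lagged squared returns to a volatility proxy (GARCH-style inputs), and a KDE of the standardized residuals supplies the VaR quantile. The features, hyperparameters, and the absolute-return volatility proxy are simplifying assumptions, not the paper's exact specification.

```python
# Toy SVR-GARCH-KDE-style VaR sketch (illustrative, not the paper's model).
import numpy as np
from sklearn.svm import SVR
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=1000) * 0.01       # heavy-tailed toy returns

lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] ** 2 for i in range(lags)])
y = np.abs(returns[lags:])                             # crude volatility proxy

svr = SVR(kernel="rbf", C=1.0, epsilon=0.001).fit(X, y)
sigma_hat = np.maximum(svr.predict(X), 1e-6)           # fitted conditional scale
z = returns[lags:] / sigma_hat                         # standardized residuals

kde = gaussian_kde(z)
grid = np.linspace(z.min() - 1, z.max() + 1, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
q01 = grid[np.searchsorted(cdf, 0.01)]                 # 1% quantile of residuals

var_next = sigma_hat[-1] * q01                         # toy one-day-ahead 1% VaR
```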
Parameter inference for a stochastic kinetic model of expanded polyglutamine proteins ; The presence of protein aggregates in cells is a known feature of many human age-related diseases, such as Huntington's disease. Simulations using fixed parameter values in a model of the dynamic evolution of expanded polyglutamine (PolyQ) proteins in cells have been used to gain a better understanding of the biological system, how to focus drug development and how to construct more efficient designs of future laboratory-based in vitro experiments. However, there is considerable uncertainty about the values of some of the parameters governing the system. Currently, appropriate values are chosen by ad hoc attempts to tune the parameters so that the model output matches experimental data. The problem is further complicated by the fact that the data only offer a partial insight into the underlying biological process: the data consist only of the proportions of cell death and of cells with inclusion bodies at a few time points, corrupted by measurement error. Developing inference procedures to estimate the model parameters in this scenario is a significant task. The model probabilities corresponding to the observed proportions cannot be evaluated exactly, and so they are estimated within the inference algorithm by repeatedly simulating realisations from the model. In general such an approach is computationally very expensive, and we therefore construct Gaussian process emulators for the key quantities and reformulate our algorithm around these fast stochastic approximations. We conclude by examining the fit of our model and highlight appropriate values of the model parameters, leading to new insights into the underlying biological processes such as the kinetics of aggregation.
Supervised Learning with Projected Entangled Pair States ; Tensor networks, a class of models that originated in quantum physics, have been gradually generalized into efficient machine learning models in recent years. However, in order to achieve exact contraction, only tree-like tensor networks such as matrix product states and tree tensor networks have been considered, even for modeling two-dimensional data such as images. In this work, we construct supervised learning models for images using projected entangled pair states (PEPS), a two-dimensional tensor network with a structure prior similar to that of natural images. Our approach first performs a feature map, which transforms the image data to a product state on a grid, then contracts the product state with a PEPS with trainable parameters to predict image labels. The tensor elements of the PEPS are trained by minimizing the differences between training labels and predicted labels. The proposed model is evaluated on image classification using the MNIST and Fashion-MNIST datasets. We show that our model is significantly superior to existing models based on tree-like tensor networks. Moreover, using the same input features, our method performs as well as a multi-layer perceptron classifier, but with far fewer parameters and greater stability. Our results shed light on potential applications of two-dimensional tensor network models in machine learning.
Subgroup identification in individual patient data meta-analysis using model-based recursive partitioning ; Model-based recursive partitioning (MOB) can be used to identify subgroups with differing treatment effects. The detection rate of treatment-by-covariate interactions and the accuracy of identified subgroups using MOB depend strongly on the sample size. Using data from multiple randomized controlled clinical trials can overcome the problem of too small samples. However, naively pooling data from multiple trials may result in the identification of spurious subgroups, as differences in study design, subject selection and other sources of between-trial heterogeneity are ignored. In order to account for between-trial heterogeneity in individual participant data (IPD) meta-analysis, random-effects models are frequently used. Commonly, heterogeneity in the treatment effect is modelled using random effects, whereas heterogeneity in the baseline risks is modelled by either fixed effects or random effects. In this article, we propose metaMOB, a procedure using the generalized mixed-effects model tree (GLMM tree) algorithm for subgroup identification in IPD meta-analysis. Although the application of metaMOB is potentially wider, e.g. randomized experiments with participants in social sciences or preclinical experiments in life sciences, we focus on randomized controlled clinical trials. In a simulation study, metaMOB outperformed GLMM trees assuming a random intercept only and model-based recursive partitioning (MOB), whose algorithm is the basis for GLMM trees, with respect to false discovery rates, accuracy of identified subgroups and accuracy of estimated treatment effects. The most robust, and therefore most promising, method is metaMOB with fixed effects for modelling the between-trial heterogeneity in the baseline risks.
Bidirectional Representation Learning from Transformers using Multimodal Electronic Health Record Data to Predict Depression ; Advancements in machine learning algorithms have had a beneficial impact on representation learning, classification, and prediction models built using electronic health record (EHR) data. Effort has been put both into increasing models' overall performance and into improving their interpretability, particularly regarding the decision-making process. In this study, we present a temporal deep learning model that performs bidirectional representation learning on EHR sequences with a transformer architecture to predict future diagnosis of depression. This model is able to aggregate five heterogeneous and high-dimensional data sources from the EHR and process them in a temporal manner for chronic disease prediction at various prediction windows. We applied the current trend of pre-training and fine-tuning on EHR data to outperform the current state of the art in chronic disease prediction, and to demonstrate the underlying relation between EHR codes in the sequence. The model generated the highest increase in precision-recall area under the curve (PR-AUC), from 0.70 to 0.76, in depression prediction compared to the best baseline model. Furthermore, the self-attention weights in each sequence quantitatively demonstrated the inner relationships between various codes, which improved the model's interpretability. These results demonstrate the model's ability to utilize heterogeneous EHR data to predict depression while achieving high accuracy and interpretability, which may facilitate constructing clinical decision support systems in the future for chronic disease screening and early detection.
Robust Hypothesis Testing and Model Selection for Parametric Proportional Hazard Regression Models ; The semi-parametric Cox proportional hazards regression model has been widely used for many years in several applied sciences. However, a fully parametric proportional hazards model, if appropriately assumed, can often lead to more efficient inference. To tackle the extreme non-robustness of the traditional maximum likelihood estimator in the presence of outliers in the data under such fully parametric proportional hazards models, a robust estimation procedure has recently been proposed, extending the concept of the minimum density power divergence estimator (MDPDE) to this setup. In this paper, we consider the problem of statistical inference under the parametric proportional hazards model and develop robust Wald-type hypothesis testing and model selection procedures using the MDPDEs. We have also derived the necessary asymptotic results, which are used to construct the testing procedure for general composite hypotheses and study its asymptotic power. The claimed robustness properties are studied theoretically via appropriate influence function analyses. We have studied the finite-sample level and power of the proposed MDPDE-based Wald-type test through extensive simulations, where comparisons are also made with the existing semi-parametric methods. The important issue of the selection of an appropriate robustness tuning parameter is also discussed. The practical usefulness of the proposed robust testing and model selection procedures is finally illustrated through three interesting real data examples.
Modeling Financial Products and their Supply Chains ; The objective of this paper is to explore how financial big data and machine learning methods can be applied to model and understand financial products. We focus on residential mortgage-backed securities (resMBS), which were at the heart of the 2008 US financial crisis. These securities are contained within a prospectus and have a complex waterfall payoff structure. Multiple financial institutions form a supply chain to create prospectuses. To model this supply chain, we use unsupervised probabilistic methods, particularly dynamic topic models (DTM), to extract a set of features (topics) reflecting community formation and temporal evolution along the chain. We then provide insight into the performance of the resMBS securities and the impact of the supply chain through a series of increasingly comprehensive models. First, models at the security level directly identify salient features of resMBS securities that impact their performance. We then extend the model to include prospectus-level features and demonstrate that the composition of the prospectus is significant. Our model also shows that communities along the supply chain that are associated with the generation of the prospectuses and securities have an impact on performance. We are the first to show that toxic communities that are closely linked to financial institutions that played a key role in the subprime crisis can increase the risk of failure of resMBS securities.
Keep it Simple: Data-efficient Learning for Controlling Complex Systems with Simple Models ; When manipulating a novel object with complex dynamics, a state representation is not always available, for example for deformable objects. Learning both a representation and dynamics from observations requires large amounts of data. We propose Learned Visual Similarity Predictive Control (LVSPC), a novel method for data-efficient learning to control systems with complex dynamics and high-dimensional state spaces from images. LVSPC leverages a given simple-model approximation from which image observations can be generated. We use these images to train a perception model that estimates the simple-model state from observations of the complex system online. We then use data from the complex system to fit the parameters of the simple model and learn where this model is inaccurate, also online. Finally, we use Model Predictive Control and bias the controller away from regions where the simple model is inaccurate and thus where the controller is less reliable. We evaluate LVSPC on two tasks: manipulating a tethered mass and a rope. We find that our method performs comparably to state-of-the-art reinforcement learning methods with an order of magnitude less data. LVSPC also completes the rope manipulation task on a real robot with an 80% success rate after only 10 trials, despite using a perception system trained only on images from simulation.
Digital twins based on bidirectional LSTM and GAN for modelling the COVID-19 pandemic ; The outbreak of the coronavirus disease 2019 (COVID-19) has now spread throughout the globe, infecting over 150 million people and causing the death of over 3.2 million people. Thus, there is an urgent need to study the dynamics of epidemiological models to gain a better understanding of how such diseases spread. While epidemiological models can be computationally expensive, recent advances in machine learning techniques have given rise to neural networks with the ability to learn and predict complex dynamics at reduced computational costs. Here we introduce two digital twins of a SEIRS model applied to an idealised town. The SEIRS model has been modified to take account of spatial variation and, where possible, the model parameters are based on official virus spreading data from the UK. We compare predictions from a data-corrected Bidirectional Long Short-Term Memory network and a predictive Generative Adversarial Network. The predictions given by these two frameworks are accurate when compared to the original SEIRS model data. Additionally, these frameworks are data-agnostic and could be applied to towns, idealised or real, in the UK or in other countries. Also, more compartments could be included in the SEIRS model in order to study more realistic epidemiological behaviour.
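For context, a plain SEIRS compartment model of the sort such digital twins emulate is sketched below (without the spatial variation mentioned above); the rate constants and population size are arbitrary placeholders rather than the paper's UK-calibrated values.

```python
# Baseline SEIRS compartment model (illustrative parameters only).
import numpy as np
from scipy.integrate import solve_ivp

def seirs(t, y, beta=0.35, sigma=1/5.0, gamma=1/7.0, omega=1/180.0):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N + omega * R      # waning immunity returns R to S
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I - omega * R
    return [dS, dE, dI, dR]

y0 = [9990.0, 0.0, 10.0, 0.0]               # idealised town of 10,000 people
sol = solve_ivp(seirs, (0, 365), y0, t_eval=np.linspace(0, 365, 366))
# sol.y would serve as training data for an LSTM/GAN surrogate in this setting.
```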
Masking Primal and Dual Models for Data Privacy in Network Revenue Management ; We study a collaborative revenue management problem where multiple decentralized parties agree to share some of their capacities. This collaboration is performed by constructing a large mathematical programming model available to all parties. The parties then use the solution of this model in their own capacity control systems. In this setting, however, the major concern for the parties is the privacy of their input data along with their individual optimal solutions. We first reformulate a general linear programming model that can be used for a wide range of network revenue management problems. Then, we address the data-privacy concern of the reformulated model and propose an approach based on solving an equivalent data-private model constructed with input masking via random transformations. Our main result shows that after solving the data-private model, each party can safely access only its own optimal capacity control decisions. We also discuss the security of the transformed problem in the considered multi-party setting. We conduct simulation experiments to support our results and evaluate the computational efficiency of the proposed data-private model. Our work provides an analytical approach and insights on how to manage shared resources in a network problem while ensuring data privacy. Constructing and solving the collaborative network problem requires information exchange between parties, which may not be possible in practice. Including data privacy in decentralized collaborative network revenue management problems with capacity sharing is new to the literature and relevant to practice.
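To illustrate the basic idea of input masking via a random transformation, the toy snippet below masks a small linear program with an invertible matrix Q, solves the masked problem, and recovers the original solution; the LP data are invented, and the sketch omits the paper's multi-party protocol and dual-side considerations.

```python
# Mask an LP via x = Q y: the solver sees only c'Q, AQ, and b.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([-3.0, -5.0])                       # private objective (maximize 3x1 + 5x2)
A = np.array([[-1.0, 0.0],                       # x1 >= 0
              [0.0, -1.0],                       # x2 >= 0
              [1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([0.0, 0.0, 4.0, 12.0, 18.0])

Q = rng.normal(size=(2, 2))
while abs(np.linalg.det(Q)) < 1e-3:              # ensure Q is invertible
    Q = rng.normal(size=(2, 2))

masked = linprog(c @ Q, A_ub=A @ Q, b_ub=b,
                 bounds=[(None, None)] * 2, method="highs")
x_recovered = Q @ masked.x                       # only the data owner can unmask
print("recovered solution:", np.round(x_recovered, 4))   # expected (2, 6)
```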
Efficient solvers for shallow-water Saint-Venant equations and debris transportation-deposition models ; This research is aimed at achieving an efficient digital infrastructure for evaluating risks and damages caused by tsunami flooding. It is mainly focused on the suitable modeling of debris dynamics for a simple but accurate enough assessment of damages. For different reasons, including computational performance and Big Data management issues, we focus our research on Eulerian debris flow modeling. Rather than using complex multiphase debris models, we use an empirical transportation and deposition model that takes into account the interaction with the main water flow, friction/contact with the ground, but also debris interaction. In particular, for debris interaction, we have used ideas coming from vehicular traffic flow modeling. We introduce a velocity regularization term similar to the so-called 'anticipation term' in traffic flow modeling that takes into account the local flow between neighboring debris and makes the problem mathematically well-posed. It prevents the generation of 'Dirac measures of debris' at shock waves. As a result, the model is able to capture emerging phenomena like debris aggregation and accumulation, and can possibly react on the main flow by creating hills of debris that make the main stream deviate. We also discuss the way to derive quantities of interest (QoI), especially 'damage functions', from the debris density and momentum fields. We believe that this original, unexplored debris approach can lead to a valuable analysis of tsunami flooding damage assessment with physics-based damage functions. Numerical experiments show the nice behaviour of the numerical solvers, including the solution of Saint-Venant's shallow water equations and the debris dynamics equations.
Learning to Fairly Classify the Quality of Wireless Links ; Machine learning (ML) has been used to develop increasingly accurate link quality estimators for wireless networks. However, more in-depth questions regarding the most suitable class of models, the most suitable metrics, and model performance on imbalanced datasets remain open. In this paper, we propose a new tree-based link quality classifier that achieves high performance, fairly classifies the minority class and, at the same time, incurs low training cost. We compare the tree-based model to a multilayer perceptron (MLP) non-linear model and two linear models, namely logistic regression (LR) and SVM, on a selected imbalanced dataset, and evaluate their results using five different performance metrics. Our study shows that (1) non-linear models perform slightly better than linear models in general, (2) the proposed non-linear tree-based model yields the best performance trade-off considering F1, training time and fairness, (3) single-metric aggregated evaluations based only on accuracy can hide poor, unfair performance, especially on minority classes, and (4) it is possible to improve the performance on minority classes by over 40% through feature selection and by over 20% through resampling, therefore leading to fairer classification results.
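The snippet below sketches the kind of evaluation this points to: a class-weighted tree ensemble trained on an imbalanced, synthetic link-quality dataset, reported with per-class and macro F1 so that poor minority-class performance is not hidden by overall accuracy. The features, class ratios, and model choice are illustrative assumptions, not the paper's dataset or exact classifier.

```python
# Class-weighted tree ensemble on an imbalanced toy dataset, with per-class F1.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 6))                               # e.g., RSSI/LQI/PRR features
y = rng.choice([0, 1, 2], size=n, p=[0.8, 0.15, 0.05])    # imbalanced link classes
X[y == 2] += 1.5                                          # make the minority separable-ish

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(classification_report(y_te, pred))
print("macro F1:", f1_score(y_te, pred, average="macro"))
```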
Testing the $\Lambda$CDM paradigm with growth rate data and machine learning ; The cosmological constant $\Lambda$ and cold dark matter CDM model $\Lambda$CDM is one of the pillars of modern cosmology and is widely used as the de facto theoretical model by current and forthcoming surveys. As the nature of dark energy is very elusive, in order to avoid the problem of model bias, here we present a novel null test at the perturbation level that uses the growth of matter perturbation data in order to assess the concordance model. We analyze how accurately this null test can be reconstructed by using data from forthcoming surveys creating mock catalogs based on $\Lambda$CDM and three models that display a different evolution of the matter perturbations, namely a dark energy model with constant equation of state $w$ (wCDM), the Hu-Sawicki and designer $f(R)$ models, and we reconstruct them with a machine learning technique known as the Genetic Algorithms. We show that with future LSSTlike mock data our consistency test will be able to rule out these viable cosmological models at more than $5\sigma$, help to check for tensions in the data and alleviate the existing tension in the amplitude of matter fluctuations $S_8 = \sigma_8\left(\Omega_m/0.3\right)^{0.5}$.
Gaussian Process Subspace Regression for Model Reduction ; Subspacevalued functions arise in a wide range of problems, including parametric reduced order modeling PROM. In PROM, each parameter point can be associated with a subspace, which is used for PetrovGalerkin projections of large system matrices. Previous efforts to approximate such functions use interpolations on manifolds, which can be inaccurate and slow. To tackle this, we propose a novel Bayesian nonparametric model for subspace prediction the Gaussian Process Subspace regression GPS model. This method is extrinsic and intrinsic at the same time with multivariate Gaussian distributions on the Euclidean space, it induces a joint probability model on the Grassmann manifold, the set of fixeddimensional subspaces. The GPS adopts a simple yet general correlation structure, and a principled approach for model selection. Its predictive distribution admits an analytical form, which allows for efficient subspace prediction over the parameter space. For PROM, the GPS provides a probabilistic prediction at a new parameter point that retains the accuracy of local reduced models, at a computational complexity that does not depend on system dimension, and thus is suitable for online computation. We give four numerical examples to compare our method to subspace interpolation, as well as two methods that interpolate local reduced models. Overall, GPS is the most data efficient, more computationally efficient than subspace interpolation, and gives smooth predictions with uncertainty quantification.
Modelling Neuronal Behaviour with Time Series Regression Recurrent Neural Networks on C. Elegans Data ; Given the inner complexity of the human nervous system, insight into the dynamics of brain activity can be gained from understanding smaller and simpler organisms, such as the nematode C. Elegans. The behavioural and structural biology of these organisms is wellknown, making them prime candidates for benchmarking modelling and simulation techniques. In these complex neuronal collections, classical, whitebox modelling techniques based on intrinsic structural or behavioural information are either unable to capture the profound nonlinearities of the neuronal response to different stimuli or generate extremely complex models, which are computationally intractable. In this paper we show how the nervous system of C. Elegans can be modelled and simulated with datadriven models using different neural network architectures. Specifically, we target the use of state of the art recurrent neural network architectures such as LSTMs and GRUs and compare these architectures in terms of their properties and their accuracy as well as the complexity of the resulting models. We show that GRU models with a hidden layer size of 4 units are able to reproduce with high accuracy the system's response to very different stimuli.
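A minimal sketch of a GRU-based time series regressor with a hidden layer of 4 units, trained on an invented stimulus/response pair rather than the C. Elegans recordings (shapes, signals and training settings are placeholders):

```python
# Minimal sketch (illustrative signals, not the C. Elegans dataset): a GRU with a
# hidden size of 4 trained to map a stimulus sequence to a response sequence.
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    def __init__(self, n_in=1, n_hidden=4, n_out=1):
        super().__init__()
        self.gru = nn.GRU(n_in, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out)

    def forward(self, x):               # x: (batch, time, features)
        h, _ = self.gru(x)
        return self.head(h)             # per-timestep prediction

# Toy data: the "response" is a lagged, saturating function of the stimulus.
torch.manual_seed(0)
t = torch.linspace(0, 20, 200)
stimulus = torch.sin(t).repeat(32, 1).unsqueeze(-1) + 0.1 * torch.randn(32, 200, 1)
response = torch.tanh(torch.roll(stimulus, shifts=5, dims=1))

model = GRURegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(stimulus), response)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```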
Sceneadaptive Knowledge Distillation for Sequential Recommendation via Differentiable Architecture Search ; Sequential recommender systems SRS have become a research hotspot due to their power in modeling user dynamic interests and sequential behavioral patterns. To maximize model expressive ability, a default choice is to apply a larger and deeper network architecture, which, however, often brings high network latency when generating online recommendations. Naturally, we argue that compressing the heavy recommendation models into middle or light weight neural networks is of great importance for practical production systems. To realize such a goal, we propose AdaRec, a knowledge distillation KD framework which compresses knowledge of a teacher model into a student model adaptively according to its recommendation scene by using differentiable Neural Architecture Search NAS. Specifically, we introduce a targetoriented distillation loss to guide the structure search process for finding the student network architecture, and a costsensitive loss as constraints for model size, which achieves a superior tradeoff between recommendation effectiveness and efficiency. In addition, we leverage Earth Mover's Distance EMD to realize manytomany layer mapping during knowledge distillation, which enables each intermediate student layer to learn from other intermediate teacher layers adaptively. Extensive experiments on realworld recommendation datasets demonstrate that our model achieves competitive or better accuracy with notable inference speedup compared to strong counterparts, while discovering diverse neural architectures for sequential recommender models under different recommendation scenes.
MultiChannel AutoEncoders and a Novel Dataset for Learning Domain Invariant Representations of Histopathology Images ; Domain shift is a problem commonly encountered when developing automated histopathology pipelines. The performance of machine learning models such as convolutional neural networks within automated histopathology pipelines is often diminished when applying them to novel data domains due to factors arising from differing staining and scanning protocols. The DualChannel AutoEncoder DCAE model was previously shown to produce feature representations that are less sensitive to appearance variation introduced by different digital slide scanners. In this work, the MultiChannel AutoEncoder MCAE model is presented as an extension to DCAE which learns from more than two domains of data. Additionally, a synthetic dataset is generated using CycleGANs that contains aligned tissue images that have had their appearance synthetically modified. Experimental results show that the MCAE model produces feature representations that are less sensitive to interdomain variations than the comparative StaNoSA method when tested on the novel synthetic data. Additionally, the MCAE and StaNoSA models are tested on a novel tissue classification task. The results of this experiment show that the MCAE model outperforms the StaNoSA model by 5 percentage points in the F1 score. These results show that the MCAE model is able to generalise better to novel data and tasks than existing approaches by actively learning normalised feature representations.
Parameter identification for a damage model using a physics informed neural network ; This work applies concepts of artificial neural networks to identify the parameters of a mathematical model based on phase fields for damage and fracture. Damage mechanics is the part of continuum mechanics that models the effects of microdefect formation using state variables at the macroscopic level. The equations that define the model are derived from fundamental laws of physics and provide important relationships between state variables. Simulations using the model considered in this work produce good qualitative and quantitative results, but many parameters must be adjusted to reproduce a certain material behavior. The identification of model parameters is considered by solving an inverse problem that uses pseudoexperimental data to find the values that produce the best fit to the data. We apply a physicsinformed neural network and combine some classical estimation methods to identify the material parameters that appear in the damage equation of the model. Our strategy consists of a neural network that acts as an approximating function of the damage evolution with its output regularized using the residual of the differential equation. Three stages of optimization seek the best possible values for the neural network and the material parameters. The training alternates between fitting only the pseudoexperimental data and minimizing the total loss that includes the regularizing terms. We test the robustness of the method to noisy data and its generalization capabilities using a simple physical case for the damage model. This procedure deals better with noisy data in comparison with a PDEconstrained optimization method, and it also provides good approximations of the material parameters and the evolution of damage.
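The damage law itself is not reproduced in the abstract, so the following sketch only illustrates the general strategy on a stand-in first-order ODE: a network approximates the damage evolution, the ODE residual regularizes its output, and the unknown material parameter is optimized jointly with the network (the equation, parameter values and training schedule are all assumptions for illustration, not the paper's model):

```python
# Hedged sketch of the strategy (the true damage law is not reproduced here): a
# network d_theta(t) approximates the damage evolution, its output is regularized
# by the residual of a stand-in ODE  d'(t) = k * (1 - d(t)),  and the material
# parameter k is a trainable variable fitted jointly with the network.
import torch
import torch.nn as nn

torch.manual_seed(0)
k_true = 1.5
t_data = torch.linspace(0, 2, 40).reshape(-1, 1)
d_data = 1 - torch.exp(-k_true * t_data) + 0.01 * torch.randn_like(t_data)  # pseudo-experimental data

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
k = torch.tensor(0.5, requires_grad=True)                 # unknown material parameter
opt = torch.optim.Adam(list(net.parameters()) + [k], lr=1e-2)

t_col = torch.linspace(0, 2, 100).reshape(-1, 1).requires_grad_(True)  # collocation points

for step in range(3000):
    opt.zero_grad()
    data_loss = ((net(t_data) - d_data) ** 2).mean()
    d_col = net(t_col)
    d_dot = torch.autograd.grad(d_col.sum(), t_col, create_graph=True)[0]
    residual_loss = ((d_dot - k * (1 - d_col)) ** 2).mean()   # ODE residual regularizer
    loss = data_loss + residual_loss
    loss.backward()
    opt.step()

print("identified k:", k.item(), "(value used to generate the pseudo-data: 1.5)")
```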
Modelling vortex ring growth in the wake of a translating cone ; Vortex rings have the ability to transport fluid over long distances. They are usually produced by ejecting a volume of fluid through a circular orifice or nozzle. When the volume and velocity of the ejected fluid are known, the vortex's circulation, impulse, and energy can be estimated by the slug flow model. Vortex rings also form in the wake of accelerating axisymmetric bodies. In this configuration, the volume and velocity of the fluid that is injected into the vortex are not known a priori. Here, we present two models to predict the growth of the vortex behind disks or cones. The first model uses conformal mapping and assumes that all vorticity generated ends up in the vortex. The vortex circulation is determined by imposing the Kutta condition at the tip of the disk. The position of the vortex is integrated from an approximation of its velocity, given by Fraenkel. The model predicts well the maximum circulation of the vortex, but does not predict the tail shedding observed experimentally. A second model is based on an axisymmetric version of the discrete vortex method. The shear layer formed at the tip of the cone is discretised by point vortices, which roll up into a coherent vortex ring. The model accurately captures the temporal evolution of the circulation and the nondimensional energy. The detrainment of vorticity from the vortex, through the process of tail shedding, is qualitatively captured by the model but remains quantitatively sensitive to the numerical parameters.
Direction Dependent Corrections in Polarimetric Radio Imaging III AtoZ Solver Modeling the full Jones antenna aperture illumination pattern ; In this third paper of a series describing direction dependent corrections for polarimetric radio imaging, we present the AtoZ solver methodology to model the full Jones antenna aperture illumination pattern AIP with Zernike polynomials. In order to achieve thermal noise limited imaging with modern radio interferometers, it is necessary to correct for the instrumental effects of the antenna primary beam PB as a function of time, frequency, and polarization. The wideband AW projection algorithm enables those corrections provided an accurate model of the AIP is available. We present the AtoZ solver as a more versatile algorithm for the modeling of the AIP. It employs the orthonormal circular Zernike polynomial basis to model the measured full Jones AIP. These full Jones models are then used to reconstruct the full Mueller AIP response of an antenna, in principle accounting for all the offaxis leakage effects of the primary beam. The AtoZ solver is general enough to accommodate any interferometer for which holographic measurements exist; we have successfully modelled the AIP of VLA, MeerKAT and ALMA as a demonstration of its versatility. We show that our models capture the PB morphology to high accuracy within the first 12 sidelobes, and show the viability of full Mueller gridding and deconvolution for any telescope given high quality holographic measurements.
Inner spike and slab Bayesian nonparametric models ; Discrete Bayesian nonparametric models whose expectation is a convex linear combination of a point mass at some point of the support and a diffuse probability distribution allow one to incorporate strong prior information, while still being extremely flexible. Recent contributions in the statistical literature have successfully implemented such a modelling strategy in a variety of applications, including density estimation, nonparametric regression and modelbased clustering. We provide a thorough study of a large class of nonparametric models we call inner spike and slab hNRMI models, which are obtained by considering homogeneous normalized random measures with independent increments hNRMI with base measure given by a convex linear combination of a point mass and a diffuse probability distribution. In this paper we investigate the distributional properties of these models and our results include i the exchangeable partition probability function they induce, ii the distribution of the number of distinct values in an exchangeable sample, iii the posterior predictive distribution, and iv the distribution of the number of elements that coincide with the only point of the support with positive probability. Our findings are the main building block for an actual implementation of Bayesian inner spike and slab hNRMI models by means of a generalized Pólya urn scheme.
Model Selection for Offline Reinforcement Learning Practical Considerations for Healthcare Settings ; Reinforcement learning RL can be used to learn treatment policies and aid decision making in healthcare. However, given the need for generalization over complex stateaction spaces, the incorporation of function approximators e.g., deep neural networks requires model selection to reduce overfitting and improve policy performance at deployment. Yet a standard validation pipeline for model selection requires running a learned policy in the actual environment, which is often infeasible in a healthcare setting. In this work, we investigate a model selection pipeline for offline RL that relies on offpolicy evaluation OPE as a proxy for validation performance. We present an indepth analysis of popular OPE methods, highlighting the additional hyperparameters and computational requirements (fitting and inference of auxiliary models) when used to rank a set of candidate policies. We compare the utility of different OPE methods as part of the model selection pipeline in the context of learning to treat patients with sepsis. Among all the OPE methods we considered, fitted Q evaluation FQE consistently leads to the best validation ranking, but at a high computational cost. To balance this tradeoff between accuracy of ranking and computational efficiency, we propose a simple twostage approach to accelerate model selection by avoiding potentially unnecessary computation. Our work serves as a practical guide for offline RL model selection and can help RL practitioners select policies using realworld datasets. To facilitate reproducibility and future extensions, the code accompanying this paper is available online at https://github.com/MLD3/OfflineRLModelSelection.
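A simplified sketch of fitted Q evaluation on a toy offline dataset (not the released implementation; the logged data, the evaluation policy and the choice of regressor are invented for illustration):

```python
# Simplified fitted Q evaluation (FQE) sketch on a toy offline dataset. We repeatedly
# regress Q(s, a) onto r + gamma * Q(s', pi(s')) for a fixed evaluation policy pi,
# then score the policy by the estimated Q at a set of start states.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
gamma, n, n_actions = 0.95, 2000, 2

# Toy logged transitions (s, a, r, s'): reward favours action 1 when s[0] > 0.
S = rng.normal(size=(n, 3))
A = rng.integers(0, n_actions, size=n)
R = np.where((S[:, 0] > 0) == (A == 1), 1.0, 0.0)
S_next = S + 0.1 * rng.normal(size=S.shape)

def pi(states):                       # candidate evaluation policy to be ranked
    return (states[:, 0] > 0).astype(int)

def featurize(states, actions):
    return np.column_stack([states, actions])

Q = None
for _ in range(50):                   # FQE iterations
    if Q is None:
        target = R
    else:
        target = R + gamma * Q.predict(featurize(S_next, pi(S_next)))
    Q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(featurize(S, A), target)

start_states = rng.normal(size=(500, 3))
value_estimate = Q.predict(featurize(start_states, pi(start_states))).mean()
print("estimated policy value:", round(value_estimate, 3))
```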
Adaptation of Tacotron2based TextToSpeech for ArticulatorytoAcoustic Mapping using Ultrasound Tongue Imaging ; For articulatorytoacoustic mapping, typically only limited parallel training data is available, making it impossible to apply fully endtoend solutions like Tacotron2. In this paper, we experimented with transfer learning and adaptation of a Tacotron2 texttospeech model to improve the final synthesis quality of ultrasoundbased articulatorytoacoustic mapping with a limited database. We use a multispeaker pretrained Tacotron2 TTS model and a pretrained WaveGlow neural vocoder. The articulatorytoacoustic conversion contains three steps: 1 from a sequence of ultrasound tongue image recordings, a 3D convolutional neural network predicts the inputs of the pretrained Tacotron2 model, 2 the Tacotron2 model converts this intermediate representation to an 80dimensional melspectrogram, and 3 the WaveGlow model is applied for final inference. This generated speech contains the timing of the original articulatory data from the ultrasound recording, but the F0 contour and the spectral information are predicted by the Tacotron2 model. The F0 values are independent of the original ultrasound images, but represent the target speaker, as they are inferred from the pretrained Tacotron2 model. In our experiments, we demonstrated that the synthesized speech quality is more natural with the proposed solutions than with our earlier model.
A onedimensional continuous model for carbon nanotubes ; The continuous twodimensional 2D elastic model for singlewalled carbon nanotubes SWNTs provided by Tu and Ou-Yang in Phys. Rev. B 65, 235411 (2003) is reduced strictly to a onedimensional 1D curvature elastic model. This model is in accordance with the isotropic Kirchhoff elastic rod theory. Neglecting the inplane strain energy in this model, it is suitable for investigating the features of carbon nanotubes CNTs with large deformations and can reduce to the string model in Phys. Rev. Lett. 76, 4055 (1997) when the deformation is small enough. For straight chiral shapes, this general model indicates that the difference of the chiral angle between two equilibrium states is about $\pi/6$, which is consistent with the lattice model. It also reveals that the helical shape has lower energy per atom than the straight shape in the same condition. By solving the corresponding equilibrium shape equations, the helical tube solution is in good agreement with the experimental result, and super helical shapes are obtained and we hope they can be found in future experiments.
Twodimensional YangMills theory, Painleve equations and the sixvertex model ; We show that the chiral partition function of twodimensional YangMills theory on the sphere can be mapped to the partition function of the homogeneous sixvertex model with domain wall boundary conditions in the ferroelectric phase. A discrete matrix model description in both cases is given by the Meixner ensemble, leading to a representation in terms of a stochastic growth model. We show that the partition function is a particular case of the zmeasure on the set of Young diagrams, yielding a unitary matrix model for chiral YangMills theory on the sphere and the identification of the partition function as a taufunction of the Painleve V equation. We describe the role played by generalized nonchiral YangMills theory on the sphere in relating the Meixner matrix model to the Toda chain hierarchy encompassing the integrability of the sixvertex model. We also argue that the thermodynamic behaviour of the sixvertex model in the disordered and antiferroelectric phases are captured by particular qdeformations of twodimensional YangMills theory on the sphere.
Bayesian nonparametric estimation and consistency of mixed multinomial logit choice models ; This paper develops nonparametric estimation for discrete choice models based on the mixed multinomial logit MMNL model. It has been shown that MMNL models encompass all discrete choice models derived under the assumption of random utility maximization, subject to the identification of an unknown distribution G. Noting the mixture model description of the MMNL, we employ a Bayesian nonparametric approach, using nonparametric priors on the unknown mixing distribution G, to estimate choice probabilities. We provide an important theoretical support for the use of the proposed methodology by investigating consistency of the posterior distribution for a general nonparametric prior on the mixing distribution. Consistency is defined according to an L1type distance on the space of choice probabilities and is achieved by extending to a regression model framework a recent approach to strong consistency based on the summability of square roots of prior probabilities. Moving to estimation, slightly different techniques for nonpanel and panel data models are discussed. For practical implementation, we describe efficient and relatively easytouse blocked Gibbs sampling procedures. These procedures are based on approximations of the random probability measure by classes of finite stickbreaking processes. A simulation study is also performed to investigate the performance of the proposed methods.
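As a rough illustration of the finite stick-breaking approximations mentioned above (not the paper's blocked Gibbs sampler or any hNRMI beyond the Dirichlet-process special case; the spike location, weights and truncation level are arbitrary), one can draw a discrete random probability measure whose atoms come from a spike-and-slab base measure:

```python
# Illustrative sketch: a truncated stick-breaking draw from a Dirichlet-process-like
# prior whose base measure is a convex combination of a point mass at x0 (the "spike")
# and a diffuse Gaussian ("slab"). This mirrors the finite approximations used for
# practical implementation, not the generalized Polya urn scheme of the paper.
import numpy as np

rng = np.random.default_rng(1)

def truncated_stick_breaking_draw(alpha=2.0, w_spike=0.3, x0=0.0, truncation=100):
    v = rng.beta(1.0, alpha, size=truncation)                 # stick-breaking proportions
    weights = v * np.concatenate([[1.0], np.cumprod(1 - v)[:-1]])
    weights[-1] = 1.0 - weights[:-1].sum()                    # close the stick at the truncation
    spike = rng.random(truncation) < w_spike                  # atom drawn from spike or slab?
    atoms = np.where(spike, x0, rng.normal(loc=2.0, scale=1.0, size=truncation))
    return atoms, weights

atoms, weights = truncated_stick_breaking_draw()
print("random mass placed on the spike x0 = 0:", round(weights[atoms == 0.0].sum(), 3))
```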
Observational constraint on the interacting dark energy models including the SandageLoeb test ; Two types of interacting dark energy models are investigated using the type Ia supernova SNIa, observational $H(z)$ data OHD, cosmic microwave background CMB shift parameter and the secular SandageLoeb SL test. We find that the inclusion of the SL test can obviously provide a more stringent constraint on the parameters in both models. For the constant coupling model, the interaction term including the SL test is estimated at $\delta = -0.01 \pm 0.01$ ($1\sigma$) $\pm 0.02$ ($2\sigma$), meaning that the corresponding errors are reduced to about half of their original size. Comparing with the combination of SNIa and OHD, we find that the inclusion of the SL test directly reduces the bestfit of the interaction from $-0.39$ to $-0.10$, which indicates that higherredshift observations including the SL test are necessary to track the evolution of the interaction. For the varying coupling model, we reconstruct the interaction $\delta(z)$, and find that the interaction is also negative, similar to the constant coupling model. However, at high redshift the interaction generally vanishes at infinity. The constraint result also shows that the $\Lambda$CDM model is still a good fit to the observational data, and the coincidence problem remains quite severe. However, the phantomlike dark energy with $w_X < -1$ is slightly favored over the $\Lambda$CDM model.
Comparative Study of MHD Modeling of the Background Solar Wind ; Knowledge about the background solar wind plays a crucial role in the framework of space weather forecasting. Insitu measurements of the background solar wind are only available for a few points in the heliosphere where spacecraft are located, therefore we have to rely on heliospheric models to derive the distribution of solar wind parameters in interplanetary space. We test the performance of different solar wind models, namely Magnetohydrodynamic Algorithm outside a SphereENLIL MASENLIL, WangSheeleyArgeENLIL WSAENLIL, and MASMAS, by comparing model results with insitu measurements from spacecraft located at 1 AU distance to the Sun ACE, Wind. To exclude the influence of interplanetary coronal mass ejections ICMEs, we chose the year 2007 as a time period with low solar activity for our comparison. We found that the general structure of the background solar wind is well reproduced by all models. The best model results were obtained for the parameter solar wind speed. However, the predicted arrival times of highspeed solar wind streams have typical uncertainties of the order of about one day. Comparison of model runs with synoptic magnetic maps from different observatories revealed that the choice of the synoptic map significantly affects the model performance.
Halo modelling in chameleon theories ; We analyse modelling techniques for the largescale structure formed in scalartensor theories of constant BransDicke parameter which match the concordance model background expansion history and produce a chameleon suppression of the gravitational modification in highdensity regions. Thereby, we use a mass and environment dependent chameleon spherical collapse model, the ShethTormen halo mass function and linear halo bias, the NavarroFrenkWhite halo density profile, and the halo model. Furthermore, using the spherical collapse model, we extrapolate a chameleon massconcentration scaling relation from a LCDM prescription calibrated to Nbody simulations. We also provide constraints on the model parameters to ensure viability on local scales. We test our description of the halo mass function and nonlinear matter power spectrum against the respective observables extracted from largevolume and highresolution Nbody simulations in the limiting case of fR gravity, corresponding to a vanishing BransDicke parameter. We find good agreement between the two; the halo model provides a good qualitative description of the shape of the relative enhancement of the fR matter power spectrum with respect to LCDM caused by the extra attractive gravitational force but fails to recover the correct amplitude. Introducing an effective linear power spectrum in the computation of the twohalo term to account for an underestimation of the chameleon suppression at intermediate scales in our approach, we accurately reproduce the measurements from the Nbody simulations.
Distinguishing models of reionization using future radio observations of 21cm 1point statistics ; We explore the impact of reionization topology on 21cm statistics. Four reionization models are presented which emulate large ionized bubbles around overdense regions 21CMFAST globalinsideout, small ionized bubbles in overdense regions localinsideout, large ionized bubbles around underdense regions globaloutsidein and small ionized bubbles around underdense regions localoutsidein. We show that firstgeneration instruments might struggle to distinguish global models using the shape of the power spectrum alone. All instruments considered are capable of breaking this degeneracy with the variance, which is higher in outsidein models. Global models can also be distinguished at small scales from a boost in the power spectrum from a positive correlation between the density and neutralfraction fields in outsidein models. Negative skewness is found to be unique to insideout models and we find that preSKA instruments could detect this feature in maps smoothed to reduce noise errors. The early, mid and late phases of reionization imprint signatures in the brightnesstemperature moments; we examine their model dependence and find preSKA instruments capable of exploiting these timing constraints in smoothed maps. The dimensional skewness is introduced and is shown to have stronger signatures of the early and midphase timing if the insideout scenario is correct.
Modeling Epidermis Homeostasis and Psoriasis Pathogenesis ; We present a computational model to study the spatiotemporal dynamics of the epidermis homeostasis under normal and pathological conditions. The model consists of a population kinetics model of the central transition pathway of keratinocyte proliferation, differentiation and loss, and an agentbased model that propagates cell movements and generates the stratified epidermis. The model recapitulates the observed homeostatic cell density distribution, the epidermal turnover time and the multilayered tissue structure. We extend the model to study the onset, recurrence and phototherapyinduced remission of psoriasis. The model considers psoriasis as a parallel homeostasis of normal and psoriatic keratinocytes originating from a shared stemcell niche environment and predicts two homeostatic modes of psoriasis: a disease mode and a quiescent mode. Interconversion between the two modes can be controlled by interactions between psoriatic stem cells and the immune system and by the normal and psoriatic stem cells competing for growth niches. The prediction of a quiescent state potentially explains the efficacy of the multiepisode UVB irradiation therapy and the recurrence of psoriasis plaques, which can further guide designs of therapeutics that specifically target the immune system andor the keratinocytes.
The Gammacount distribution in the analysis of experimental underdispersed data ; Event counts are response variables with nonnegative integer values representing the number of times that an event occurs within a fixed domain such as a time interval, a geographical area or a cell of a contingency table. Analysis of counts by Gaussian regression models ignores the discreteness, asymmetry and heteroscedasticity and is inefficient, providing unrealistic standard errors or possibly negative predictions of the expected number of events. The Poisson regression is the standard model for count data with underlying assumptions on the generating process which may be implausible in many applications. Statisticians have long recognized the limitation of imposing equidispersion under the Poisson regression model. A typical situation is when the conditional variance exceeds the conditional mean, in which case models allowing for overdispersion are routinely used. Less reported is the case of underdispersion, with fewer modelling alternatives and assessments available in the literature. One such alternative, the Gammacount model, is adopted here in the analysis of an agronomic experiment designed to investigate the effect of levels of defoliation on different phenological states upon the number of cotton bolls. Results show improvements over the Poisson model and the semiparametric quasiPoisson model in capturing the observed variability in the data. Estimating rather than assuming the underlying variance process leads to important insights into the process.
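A sketch of how a Gamma-count regression can be fitted by maximum likelihood (synthetic data, not the cotton-boll experiment; the pmf used is the standard Gamma-count form, with a dispersion parameter alpha > 1 corresponding to underdispersion relative to the Poisson model):

```python
# Maximum-likelihood fit of a Gamma-count regression on toy data. The pmf is
# P(Y = y) = G(alpha*y, alpha*mu) - G(alpha*(y+1), alpha*mu), where G is the
# regularized lower incomplete gamma function and G(0, .) = 1.
import numpy as np
from scipy.special import gammainc
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def gamma_count_logpmf(y, mu, alpha):
    upper = gammainc(alpha * (y + 1), alpha * mu)
    lower = np.where(y > 0, gammainc(np.maximum(alpha * y, 1e-12), alpha * mu), 1.0)
    return np.log(np.clip(lower - upper, 1e-12, None))

# Toy covariate and counts standing in for the experimental data.
x = rng.uniform(0, 1, size=300)
y = rng.poisson(np.exp(0.5 + 1.0 * x))

def negloglik(params):
    b0, b1, log_alpha = params
    mu = np.exp(b0 + b1 * x)                       # log-linear mean, as in Poisson regression
    return -gamma_count_logpmf(y, mu, np.exp(log_alpha)).sum()

fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
b0, b1, log_alpha = fit.x
print(f"beta0={b0:.3f}, beta1={b1:.3f}, alpha={np.exp(log_alpha):.3f} (alpha > 1 => underdispersion)")
```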
Freeenergy bounds for hierarchical spin models ; In this paper we study two nonmeanfield spin models built on a hierarchical lattice: the hierarchical EdwardsAnderson model HEA of a spin glass, and Dyson's hierarchical model DHM of a ferromagnet. For the HEA, we prove the existence of the thermodynamic limit of the free energy and the replicasymmetrybreaking RSB freeenergy bounds previously derived for the SherringtonKirkpatrick model of a spin glass. These RSB meanfield bounds are exact only if the orderparameter fluctuations OPF vanish. Given that such fluctuations are not negligible in nonmeanfield models, we develop a novel strategy to tackle part of OPF in hierarchical models. The method is based on absorbing part of OPF of a block of spins into an effective Hamiltonian of the underlying spin blocks. We illustrate this method for DHM and show that, compared to the meanfield bound for the free energy, it provides a tighter nonmeanfield bound, with a critical temperature closer to the exact one. To extend this method to the HEA model, a suitable generalization of Griffiths' correlation inequalities for Ising ferromagnets is needed. Since correlation inequalities for spin glasses are still an open topic, we leave the extension of this method to hierarchical spin glasses as a future perspective.
The Best Inflationary Models After Planck ; We compute the Bayesian evidence and complexity of 193 slowroll singlefield models of inflation using the Planck 2013 Cosmic Microwave Background data, with the aim of establishing which models are favoured from a Bayesian perspective. Our calculations employ a new numerical pipeline interfacing an inflationary effective likelihood with the slowroll library ASPIC and the nested sampling algorithm MULTINEST. The models considered represent a complete and systematic scan of the entire landscape of inflationary scenarios proposed so far. Our analysis singles out the most probable models from an Occam's razor point of view that are compatible with Planck data, while ruling out with very strong evidence 34% of the models considered. We identify 26% of the models that are favoured by the Bayesian evidence, corresponding to 15 different potential shapes. If the Bayesian complexity is included in the analysis, only 9% of the models are preferred, corresponding to only 9 different potential shapes. These shapes are all of the plateau type.
A class of transient acceleration models consistent with Big Bang Cosmology ; Is it possible that the current cosmic accelerating expansion will turn into a decelerating one? Can this transition be realized by some viable theoretical model that is consistent with the standard Big Bang cosmology? We study a class of phenomenological models of a transient acceleration, based on a dynamical dark energy with a very general form of equation of state $p_{de} = \alpha\rho_{de} - \beta\rho_{de}^{m}$. It mimics the cosmological constant $\rho_{de} \rightarrow$ const for small scale factor $a$, and behaves as a barotropic gas with $\rho_{de} \rightarrow a^{-3(\alpha+1)}$ with $\alpha \ge 0$ for large $a$. The cosmic evolution of four models in the class has been examined in detail, and all yield a smooth transient acceleration. Depending on the specific model, the future universe may be dominated either by the dark energy or by the matter. In two models, the dynamical dark energy can be explicitly realized by a scalar field with an analytical potential $V(\phi)$. Moreover, the statistical analysis shows that the models can be as robust as $\Lambda$CDM in confronting the observational data of SN Ia, CMB, and BAO. As improvements over the previous studies, our models overcome the overabundance problem of dark energy during early eras, and satisfy the constraints on the dark energy from WMAP observations of CMB.
Constraints on composite quark partners from Higgs searches ; In composite Higgs models, the generation of quark masses requires the standard modellike quarks to be partially or fully composite states which are accompanied by composite quark partners. The composite quark partners decay into a standard modellike quark and an electroweak gauge boson or Higgs boson, which can be searched for at the LHC. In this article, we study the phenomenological implications of composite quarks in the minimal composite Higgs model based on the coset SO(5)/SO(4). We focus on light quark partners which are embedded in the SO(4) singlet representation. In this case, a dominant decay mode of the partner quark is into a Higgs boson and a jet, for which no experimental bounds have been established so far. The presence of SO(4) singlet partners leads to an enhancement of the diHiggs production cross section at the LHC. This will be an interesting experimental signature in the near future, but, unfortunately, there are no direct bounds available yet from the experimental analyses. However, we find that the currently available standard modellike Higgs searches can be used in order to obtain the first constraints on partially and fully composite quark models with light quark partners in the SO(4) singlet. We obtain a flavor and composition parameter independent bound on the quark partner mass of $M_{U_h} > 310$ GeV for partially composite quark models and $M_{U_h} > 212$ GeV for fully composite quark models.
Robust EM algorithm for modelbased curve clustering ; Modelbased clustering approaches concern the paradigm of exploratory data analysis relying on the finite mixture model to automatically find a latent structure governing observed data. They are one of the most popular and successful approaches in cluster analysis. The mixture density estimation is generally performed by maximizing the observeddata loglikelihood by using the expectationmaximization EM algorithm. However, it is wellknown that the initialization of the EM algorithm is crucial. In addition, the standard EM algorithm requires the number of clusters to be known a priori. Some solutions have been provided in [31, 12] for modelbased clustering with Gaussian mixture models for multivariate data. In this paper we focus on modelbased curve clustering approaches, when the data are curves rather than vectorial data, based on regression mixtures. We propose a new robust EM algorithm for clustering curves. We extend the modelbased clustering approach presented in [31] for Gaussian mixture models to the case of curve clustering by regression mixtures, including polynomial regression mixtures as well as spline or Bspline regression mixtures. Our approach handles both the problem of initialization and that of choosing the optimal number of clusters as the EM learning proceeds, rather than in a twofold scheme. This is achieved by optimizing a penalized loglikelihood criterion. A simulation study confirms the potential benefit of the proposed algorithm in terms of robustness regarding initialization and finding the actual number of clusters.
Quantum Pushdown Automata with a Garbage Tape ; Several kinds of quantum pushdown automaton models have been proposed, and their computational power is investigated intensively. However, for some quantum pushdown automaton models, it is not known whether quantum models are at least as powerful as classical counterparts or not. This is due to the reversibility restriction. In this paper, we introduce a new quantum pushdown automaton model that has a garbage tape. This model can overcome the reversibility restriction by exploiting the garbage tape to store popped symbols. We show that the proposed model can simulate any quantum pushdown automaton with a classical stack as well as any probabilistic pushdown automaton. We also show that our model can solve a certain promise problem exactly while deterministic pushdown automata cannot. These results imply that our model is strictly more powerful than classical counterparts in the setting of exact, onesided error and nondeterministic computation.
Novel Dynamical Phenomena in Magnetic systems ; Dynamics of Ising models is a much studied phenomenon and has emerged as a rich field of presentday research. An important dynamical feature commonly studied is the quenching phenomenon below the critical temperature. In this thesis we have studied the zero temperature quenching dynamics of different Ising spin systems. First we have studied the zero temperature quenching dynamics of a two dimensional Ising spin system with competing interactions. Then we have studied the effect of randomness or disorder on the quenching dynamics of the Ising spin system. We have studied the effect of the nature of randomness on the zero temperature quenching dynamics of the one dimensional Ising model on two types of complex networks. A model for opinion dynamics has also been proposed in this thesis, in which the binary opinions of the individuals are determined according to the size of their neighboring domains. This model can be equivalently defined in terms of Ising spin variables and the various quantities studied have a one to one correspondence with magnetic systems. Introducing disorder in this model through a parameter called the rigidity parameter $\rho$ (the probability that people are completely rigid and never change their opinion), the transition to a heterogeneous society at $\rho = 0$ is obtained. The model (Model I) has been generalized by introducing a parameter named the size sensitivity parameter to modify the dynamics of the proposed model, and a macroscopic crossover in time is observed for intermediate values of this parameter.
Prospects of probing quintessence with HI 21cm intensity mapping survey ; We investigate the prospect of constraining scalar field dark energy models using HI 21cm intensity mapping surveys. We consider a wide class of coupled scalar field dark energy models whose predictions about the background cosmological evolution differ from the $\Lambda$CDM predictions by a few percent. We find that these models can be statistically distinguished from $\Lambda$CDM through their imprint on the 21cm angular power spectrum. At the fiducial $z = 1.5$, corresponding to a radio interferometric observation of the postreionization HI 21 cm signal at frequency 568 MHz, these models can in fact be distinguished from the $\Lambda$CDM model at the ${\rm SNR} \geq 3\sigma$ level using a 10,000 hr radio observation distributed over 40 pointings of a SKA1mid like radio telescope. We also show that tracker models are more likely to be ruled out in comparison with $\Lambda$CDM than the thawer models. Future radio observations can be instrumental in obtaining tighter constraints on the parameter space of dark energy models and supplement the bounds obtained from background studies.
Sick, the spectroscopic inference crank ; There exists an inordinate amount of spectral data in both public and private astronomical archives which remain severely underutilised. The lack of reliable opensource tools for analysing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this Article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick can be used to provide a nearestneighbour estimate of model parameters, a numerically optimised point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalise on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical and nuisance quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multidimensional interpolation, or a Cannonbased model. Additional phenomena that transform the data e.g., redshift, rotational broadening, continuum, spectral resolution are incorporated as free parameters and can be marginalised away. Outlier pixels e.g., cosmic rays or poorly modelled regimes can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalarjustified, quantitative model permits precise inferences with credible uncertainties on noisy data. Using a forward model on lowresolution, high SN spectra of M67 stars reveals atomic diffusion processes on the order of 0.05 dex, previously only measurable with differential analysis techniques in highresolution spectra. abridged
Infinitedimensional Bayesian approach for inverse scattering problems of a fractional Helmholtz equation ; In this paper, we focus on a new wave equation describing wave propagation in an attenuating medium. In the first part of this paper, based on the timedomain space fractional wave equation, we formulate the frequencydomain equation, named the fractional Helmholtz equation. According to the physical interpretations, this new model can be divided into two separate models: the lossdominated model and the dispersiondominated model. For the lossdominated model (an integer and fractionalorder mixed elliptic equation), a wellposedness theory has been established, and the Lipschitz continuity of the scattering field with respect to the scatterer has also been established. Because of the complexity of the dispersiondominated model (an integer and fractionalorder mixed elliptic system), we only provide a wellposedness result for sufficiently small wavenumbers. In the second part of this paper, we generalize the Bayesian inverse theory in infinite dimensions to allow part of the noise to depend on the target function (the function to be estimated). Then, we prove that the estimated function tends to the true function if both the model reduction error and the white noise vanish. Finally, our theory is applied to the lossdominated model with an absorbing boundary condition.
Search for Higgs portal DM at the ILC ; Higgs portal dark matter DM models are simple, interesting and viable DM models. There are three types of models depending on the DM spin: scalar, fermion and vector DM models. In this paper, we consider renormalizable, unitary and gauge invariant Higgs portal DM models, and study how large parameter regions can be surveyed at the International Linear Collider ILC experiment at $\sqrt{s} = 500$ GeV. For the Higgs portal singlet fermion and vector DM cases, the force mediator involves two scalar propagators, the SMlike Higgs boson and the dark Higgs boson. We show that their interference generates interesting and important patterns in the monoZ plus missing $E_T$ signatures at the ILC, and the results are completely different from those obtained from the Higgs portal DM models within the effective field theories. In addition, we show that it would be possible to distinguish the spin of DM in the Higgs portal scenarios, if the shape of the recoilmass distribution is observed. We emphasize that the interplay between these collider observations and those in the direct detection experiments has to be performed in the model with renormalizability and unitarity to combine the model analyses at different scales.
Towards the Emulation of the Cardiac Conduction System for Pacemaker Testing ; The heart is a vital organ that relies on the orchestrated propagation of electrical stimuli to coordinate each heart beat. Abnormalities in the heart's electrical behaviour can be managed with a cardiac pacemaker. Recently, the closedloop testing of pacemakers with an emulation realtime simulation of the heart has been proposed. An emulated heart would provide realistic reactions to the pacemaker as if it were a real heart. This enables developers to interrogate their pacemaker design without having to engage in costly or lengthy clinical trials. Many highfidelity heart models have been developed, but are too computationally intensive to be simulated in realtime. Heart models, designed specifically for the closedloop testing of pacemakers, are too abstract to be useful in the testing of physical pacemakers. In the context of pacemaker testing, this paper presents a more computationally efficient heart model that generates realistic continuoustime electrical signals. The heart model is composed of cardiac cells that are connected by paths. Significant improvements were made to an existing cardiac cell model to stabilise its activation behaviour and to an existing path model to capture the behaviour of continuous electrical propagation. We provide simulation results that show our ability to faithfully model complex reentrant circuits that cause arrhythmia that existing heart models can not.
Switching Economics for Physics and the Carbon Price Inflation Problems in Integrated Assessment Models and their Implications ; Integrated Assessment Models IAMs are mainstay tools for assessing the longterm interactions between climate and the economy and for deriving optimal policy responses in the form of carbon prices. IAMs have been criticized for controversial discount rate assumptions, arbitrary climate damage functions, and the inadequate handling of potentially catastrophic climate outcomes. We review these external shortcomings for prominent IAMs before turning our focus on an internal modeling fallacy the widespread misapplication of the Constant Elasticity of Substitution CES function for the technology transitions modeled by IAMs. Applying CES, an economic modeling approach, on technical factor inputs over long periods where an entire factor the greenhouse gas emitting fossil fuel inputs must be substituted creates artifacts that fail to match the Scurve patterns observed historically. A policy critical result, the monotonically increasing cost of carbon, a universal feature of IAMs, is called into question by showing that it is unrealistic as it is an artifact of the modeling approach and not representative of the technical substitutability potential nor of the expected cost of the technologies. We demonstrate this first through a simple but representative example of CES application on the energy system and with a sectoral discussion of the actual fossil substitution costs. We propose a methodological modification using dynamically varying elasticity of substitution as a plausible alternative to model the energy transition in line with the historical observations and technical realities within the existing modeling systems. Nevertheless, a fundamentally different approach based on physical energy principles would be more appropriate.
Bisous model detecting filamentary patterns in point processes ; The cosmic web is a highly complex geometrical pattern, with galaxy clusters at the intersection of filaments and filaments at the intersection of walls. Identifying and describing the filamentary network is not a trivial task due to the overwhelming complexity of the structure, its connectivity and its intrinsic hierarchical nature. To detect and quantify galactic filaments we use the Bisous model, which is a marked point process built to model multidimensional patterns. The Bisous filament finder works directly with the galaxy distribution data and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability to find a filament at a given point) together with the filament orientation field. Using these two fields, we can extract filament spines from the data. Together with this paper we publish the computer code for the Bisous model, which is made available on GitHub. The Bisous filament finder has been successfully used in several cosmological applications and further development of the model will allow the filamentary network to be detected also in photometric redshift surveys, using the full redshift posterior. We also want to encourage the astrostatistical community to use the model and to connect it with all other existing methods for filamentary pattern detection and characterisation.
Fundamental properties of cooperative contagion processes ; We investigate the effects of cooperativity between contagion processes that spread and persist in a host population. We propose and analyze a dynamical model in which individuals that are affected by one transmissible agent A exhibit a higher than baseline propensity of being affected by a second agent B and vice versa. The model is a natural extension of the traditional SIS SusceptibleInfectedSusceptible model used for modeling single contagion processes. We show that cooperativity changes the dynamics of the system considerably when cooperativity is strong. The system exhibits discontinuous phase transitions not observed in single agent contagion, multistability, a separation of the traditional epidemic threshold into different thresholds for inception and extinction as well as hysteresis. These properties are robust and are corroborated by stochastic simulations on lattices and generic network topologies. Finally, we investigate wave propagation and transients in a spatially extended version of the model and show that especially for intermediate values of baseline reproduction ratios the system is characterized by various types of wavefront speeds. The system can exhibit spatially heterogeneous stationary states for some parameters and negative front speeds receding wave fronts. The two agent model can be employed as a starting point for more complex contagion processes, involving several interacting agents, a model framework particularly suitable for modeling the spread and dynamics of microbiological ecosystems in host populations.
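One common mean-field formulation of such a cooperative two-agent SIS model can be integrated directly; the compartments, rates and parameter values below are illustrative choices rather than the paper's exact equations:

```python
# Mean-field sketch of a two-agent cooperative SIS model (one common formulation,
# with illustrative parameters): a host already carrying one agent is infected by
# the other at a boosted rate sigma > alpha. Compartments: S, A, B, AB.
import numpy as np
from scipy.integrate import solve_ivp

alpha, sigma, beta = 1.2, 3.0, 1.0      # baseline rate, cooperative rate, recovery rate

def rhs(t, y):
    S, A, B, AB = y
    a, b = A + AB, B + AB               # total prevalence of agent A and agent B
    dS  = -alpha * (a + b) * S + beta * (A + B)
    dA  = alpha * a * S - sigma * b * A - beta * A + beta * AB
    dB  = alpha * b * S - sigma * a * B - beta * B + beta * AB
    dAB = sigma * b * A + sigma * a * B - 2 * beta * AB
    return [dS, dA, dB, dAB]

y0 = [0.98, 0.01, 0.01, 0.0]            # mostly susceptible population, small seeds
sol = solve_ivp(rhs, (0, 200), y0)
S, A, B, AB = sol.y[:, -1]
print(f"stationary state: S={S:.3f}, A={A:.3f}, B={B:.3f}, AB={AB:.3f}")
```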
Decoupled Neural Interfaces using Synthetic Gradients ; Training directed neural networks typically requires forwardpropagating data through a computation graph, followed by backpropagating error signal, to produce weight updates. All layers, or more generally, modules, of the network are therefore locked, in the sense that they must wait for the remainder of the network to execute forwards and propagate error backwards before they can be updated. In this work we break this constraint by decoupling modules by introducing a model of the future computation of the network graph. These models predict what the result of the modelled subgraph will produce using only local information. In particular we focus on modelling error gradients by using the modelled synthetic gradient in place of true backpropagated error gradients we decouple subgraphs, and can update them independently and asynchronously i.e. we realise decoupled neural interfaces. We show results for feedforward models, where every layer is trained asynchronously, recurrent neural networks RNNs where predicting one's future gradient extends the time over which the RNN can effectively model, and also a hierarchical RNN system with ticking at different timescales. Finally, we demonstrate that in addition to predicting gradients, the same framework can be used to predict inputs, resulting in models which are decoupled in both the forward and backwards pass amounting to independent networks which colearn such that they can be composed into a single functioning corporation.
Learning Deep Convolutional Networks for Demosaicing ; This paper presents a comprehensive study of applying the convolutional neural network CNN to solving the demosaicing problem. The paper presents two CNN models that learn endtoend mappings between the mosaic samples and the original image patches with full information. In the case the Bayer color filter array CFA is used, an evaluation with ten competitive methods on popular benchmarks confirms that the datadriven, automatically learned features by the CNN models are very effective. Experiments show that the proposed CNN models can perform equally well in both the sRGB space and the linear space. It is also demonstrated that the CNN model can perform joint denoising and demosaicing. The CNN model is very flexible and can be easily adopted for demosaicing with any CFA design. We train CNN models for demosaicing with three different CFAs and obtain better results than existing methods. With the great flexibility to be coupled with any CFA, we present the first datadriven joint optimization of the CFA design and the demosaicing method using CNN. Experiments show that the combination of the automatically discovered CFA pattern and the automatically devised demosaicing method significantly outperforms the current best demosaicing results. Visual comparisons confirm that the proposed methods reduce more visual artifacts than existing methods. Finally, we show that the CNN model is also effective for the more general demosaicing problem with spatially varying exposure and color and can be used for taking images of higher dynamic ranges with a single shot. The proposed models and the thorough experiments together demonstrate that CNN is an effective and versatile tool for solving the demosaicing problem.
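A toy sketch of the end-to-end idea, a small CNN mapping a Bayer mosaic to an RGB patch trained on random patches (it is not the paper's architecture, CFA handling or training setup):

```python
# Tiny illustrative sketch: a CNN learns the mapping from a Bayer mosaic (packed
# into a single channel) to a full RGB patch, trained end-to-end on synthetic data.
import torch
import torch.nn as nn

def bayer_mosaic(rgb):                   # rgb: (N, 3, H, W) -> (N, 1, H, W), RGGB pattern
    mosaic = torch.zeros_like(rgb[:, :1])
    mosaic[:, 0, 0::2, 0::2] = rgb[:, 0, 0::2, 0::2]   # R
    mosaic[:, 0, 0::2, 1::2] = rgb[:, 1, 0::2, 1::2]   # G
    mosaic[:, 0, 1::2, 0::2] = rgb[:, 1, 1::2, 0::2]   # G
    mosaic[:, 0, 1::2, 1::2] = rgb[:, 2, 1::2, 1::2]   # B
    return mosaic

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

torch.manual_seed(0)
rgb = torch.rand(64, 3, 32, 32)          # stand-in image patches
mosaic = bayer_mosaic(rgb)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(mosaic), rgb)
    loss.backward()
    opt.step()
print("reconstruction MSE:", loss.item())
```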
Combining Linear NonGaussian Acyclic Model with Logistic Regression Model for Estimating Causal Structure from Mixed Continuous and Discrete Data ; Estimating causal models from observational data is a crucial task in data analysis. For continuousvalued data, Shimizu et al. have proposed a linear acyclic nonGaussian model to understand the data generating process, and have shown that their model is identifiable when the sample size is sufficiently large. However, situations in which continuous and discrete variables coexist in the same problem are common in practice. Most existing causal discovery methods either ignore the discrete data and apply a continuousvalued algorithm or discretize all the continuous data and then apply a discrete Bayesian network approach. These methods may lose important information when we ignore discrete data or introduce approximation errors due to discretization. In this paper, we define a novel hybrid causal model which consists of both continuous and discrete variables. The model assumes 1 the value of a continuous variable is a linear function of its parent variables plus a nonGaussian noise, and 2 each discrete variable is a logistic variable whose distribution parameters depend on the values of its parent variables. In addition, we derive the BIC scoring function for model selection. The new discovery algorithm can learn causal structures from mixed continuous and discrete data without discretization. We empirically demonstrate the power of our method through thorough simulations.
Statistical Link Label Modeling for Sign Prediction Smoothing Sparsity by Joining Local and Global Information ; One of the major issues in signed networks is to use network structure to predict the missing sign of an edge. In this paper, we introduce a novel probabilistic approach for the sign prediction problem. The main characteristic of the proposed models is their ability to adapt to the sparsity level of an input network. The sparsity of networks is one of the major reasons for the poor performance of many link prediction algorithms, in general, and sign prediction algorithms, in particular. Building a model that has an ability to adapt to the sparsity of the data has not yet been considered in the previous related works. We suggest that there exists a dilemma between local and global structures and attempt to build sparsity adaptive models by resolving this dilemma. To this end, we propose probabilistic prediction models based on local and global structures and integrate them based on the concept of smoothing. The model relies more on the global structures when the sparsity increases, whereas it gives more weights to the information obtained from local structures for low levels of the sparsity. The proposed model is assessed on three realworld signed networks, and the experiments reveal its consistent superiority over the state of the art methods. As compared to the previous methods, the proposed model not only better handles the sparsity problem, but also has lower computational complexity and can be updated using realtime data streams.
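A hedged sketch of the local/global smoothing idea (the estimator, the structural-balance assumption for local evidence and the smoothing constant are illustrative choices, not the paper's exact model): the local probability gets more weight when more common-neighbour evidence is available, and the prediction falls back to the global sign statistics when the network is sparse around the edge.

```python
# Illustrative sketch: predict the sign of an edge by combining a local probability
# (from signed common-neighbour paths, assuming balance: the product of the two signs)
# with a global probability (overall fraction of positive edges). The local term gets
# more weight when more common neighbours -- more local evidence -- are available.
import networkx as nx

def predict_sign(G, u, v, smoothing=3.0):
    p_global = sum(d["sign"] > 0 for _, _, d in G.edges(data=True)) / G.number_of_edges()
    common = list(nx.common_neighbors(G, u, v))
    positive_paths = sum(G[u][w]["sign"] * G[w][v]["sign"] > 0 for w in common)
    n_local = len(common)
    p_local = positive_paths / n_local if n_local else 0.0
    weight = n_local / (n_local + smoothing)        # sparse evidence -> rely on the global term
    p_positive = weight * p_local + (1 - weight) * p_global
    return (1 if p_positive >= 0.5 else -1), p_positive

G = nx.Graph()
G.add_edge("a", "b", sign=1); G.add_edge("b", "c", sign=1)
G.add_edge("a", "d", sign=-1); G.add_edge("d", "c", sign=-1)
G.add_edge("a", "e", sign=1)
print(predict_sign(G, "a", "c"))
```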
Frequencydomain gravitational waveform models for inspiraling binary neutron stars ; We develop a model for frequencydomain gravitational waveforms from inspiraling binary neutron stars. Our waveform model is calibrated by comparison with hybrid waveforms constructed from our latest highprecision numericalrelativity waveforms and the SEOBNRv2T waveforms in the frequency range of 10-1000 Hz. We show that the phase difference between our waveform model and the hybrid waveforms is always smaller than 0.1 rad for the binary tidal deformability, $\tilde{\Lambda}$, in the range $300 \lesssim \tilde{\Lambda} \lesssim 1900$ and for the mass ratio between 0.73 and 1. We show that, for 10-1000 Hz, the distinguishability for the signaltonoise ratio $\lesssim 50$ and the mismatch between our waveform model and the hybrid waveforms are always smaller than 0.25 and $1.1\times 10^{-5}$, respectively. The systematic error of our waveform model in the measurement of $\tilde{\Lambda}$ is always smaller than 20% with respect to the hybrid waveforms for $300 \lesssim \tilde{\Lambda} \lesssim 1900$. The statistical error in the measurement of binary parameters is computed employing our waveform model, and we obtain results consistent with previous studies. We show that the systematic error of our waveform model is always smaller than 20% (typically smaller than 10%) of the statistical error for events with a signaltonoise ratio of 50.
Modeling noise and error correction for Majoranabased quantum computing ; Majoranabased quantum computing seeks to use the nonlocal nature of Majorana zero modes to store and manipulate quantum information in a topologically protected way. While noise is anticipated to be significantly suppressed in such systems, finite temperature and system size result in residual errors. In this work, we connect the underlying physical error processes in Majoranabased systems to the noise models used in a fault tolerance analysis. Standard qubitbased noise models built from Pauli operators do not capture leading order noise processes arising from quasiparticle poisoning events, thus it is not obvious a priori that such noise models can be usefully applied to a Majoranabased system. We develop stochastic Majorana noise models that are generalizations of the standard qubitbased models and connect the error probabilities defining these models to parameters of the physical system. Using these models, we compute pseudothresholds for the $d=5$ BaconShor subsystem code. Our results emphasize the importance of correlated errors induced in multiqubit measurements. Moreover, we find that for sufficiently fast quasiparticle relaxation the errors are well described by Pauli operators. This work bridges the divide between physical errors in Majoranabased quantum computing architectures and the significance of these errors in a quantum error correcting code.
fMRI Semantic Category Decoding using Linguistic Encoding of Word Embeddings ; How the human brain represents conceptual knowledge has been debated in many scientific fields. Brain imaging studies have shown that the spatial patterns of neural activation in the brain are correlated with thinking about different semantic categories of words (for example, tools, animals, and buildings) or with viewing the related pictures. In this paper, we present a computational model that learns to predict the neural activation captured in functional magnetic resonance imaging fMRI data for test words. Unlike the models with handcrafted features that have been used in the literature, we propose a novel approach wherein decoding models are built with features extracted from popular linguistic encodings of Word2Vec, GloVe, and MetaEmbeddings, in conjunction with the empirical fMRI data associated with viewing several dozen concrete nouns. We compare these models with several other models that use word features extracted from FastText, randomly generated features, and Mitchell's 25 features [1]. The experimental results show that the fMRI images predicted using MetaEmbeddings match stateoftheart performance. Although models with features from GloVe and Word2Vec predict fMRI images similarly to the stateoftheart model, the model with features from MetaEmbeddings predicts significantly better. The proposed scheme that uses popular linguistic encodings offers a simple and easy approach for semantic decoding from fMRI experiments.
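A minimal sketch of the kind of encoding/decoding pipeline the abstract describes: a ridge regression map from word-embedding features to voxel activations, evaluated with leave-one-word-out prediction correlation. The embedding source, dimensionalities, synthetic data, and regularization strength below are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not the authors' code): predicting fMRI voxel activations from
# word-embedding features with ridge regression. Dimensions and data are stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_words, emb_dim, n_voxels = 60, 300, 500      # e.g., 60 concrete nouns (illustrative)
X = rng.normal(size=(n_words, emb_dim))        # stand-in for Word2Vec/GloVe/MetaEmbedding vectors
Y = X @ rng.normal(size=(emb_dim, n_voxels)) * 0.1 + rng.normal(size=(n_words, n_voxels))

# Leave-one-word-out evaluation: fit a ridge map from embeddings to voxels,
# then score the held-out word by correlating predicted and observed images.
scores = []
for train, test in LeaveOneOut().split(X):
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    pred = model.predict(X[test])
    r = np.corrcoef(pred.ravel(), Y[test].ravel())[0, 1]
    scores.append(r)
print(f"mean prediction correlation: {np.mean(scores):.3f}")
```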
GroupReduce BlockWise LowRank Approximation for Neural Language Model Shrinking ; Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require realtime responses. As a case study, a stateoftheart neural language model usually consists of one or more recurrent layers sandwiched between an embedding layer used for representing input tokens and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves stateoftheart performance on the OneBillionWord OBW dataset with a vocabulary of around 800k, and its word embedding and softmax matrices use more than 6 GB of space and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary partitioning, blockwise lowrank matrix approximation, and the inherent frequency distribution of tokens (the powerlaw distribution of words). The experimental results show that our method can significantly outperform traditional compression methods such as lowrank approximation and pruning. On the OBW dataset, our method achieves a 6.6 times compression rate for the embedding and softmax matrices, and when combined with quantization, it can achieve a 26 times compression rate, which translates to a factor of 12.8 times compression for the entire model with very little degradation in perplexity.
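A minimal sketch of the core idea the abstract describes: sort the vocabulary by token frequency, split it into blocks, and approximate each block of the embedding matrix with a truncated SVD, giving frequent blocks a higher rank. The block boundaries and per-block rank schedule below are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of frequency-aware block-wise low-rank compression of an
# embedding matrix, in the spirit of GroupReduce. Block count and rank schedule
# are illustrative only.
import numpy as np

def blockwise_lowrank(emb, freqs, n_blocks=4, base_rank=8):
    """Compress emb (V x d) by splitting rows into frequency-sorted blocks and
    approximating each block with a truncated SVD; frequent blocks keep more rank."""
    order = np.argsort(-freqs)                     # most frequent tokens first
    blocks = np.array_split(order, n_blocks)
    approx = np.empty_like(emb)
    total_params = 0
    for i, idx in enumerate(blocks):
        rank = max(1, base_rank * (n_blocks - i))  # higher rank for frequent blocks
        U, s, Vt = np.linalg.svd(emb[idx], full_matrices=False)
        k = min(rank, len(s))
        approx[idx] = (U[:, :k] * s[:k]) @ Vt[:k]
        total_params += U[:, :k].size + k + Vt[:k].size
    return approx, total_params

V, d = 10000, 256
emb = np.random.randn(V, d).astype(np.float32)
freqs = 1.0 / np.arange(1, V + 1)                  # Zipf-like (powerlaw) frequencies
approx, params = blockwise_lowrank(emb, freqs)
print("compression rate:", emb.size / params)
```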
Inferring Metapopulation Propagation Network for Intracity Epidemic Control and Prevention ; Since the start of the 21st century, global outbreaks of infectious diseases such as SARS in 2003, H1N1 in 2009, and H7N9 in 2013 have become a critical threat to public health and a haunting nightmare for governments. Understanding the propagation in largescale metapopulations and predicting future outbreaks thus become crucially important for epidemic control and prevention. In the literature, there has been a large body of work on modeling intracity epidemic propagation, but typically under a single population (homogeneity) assumption. Some recent works on metapopulation propagation, however, focus on finding specific human mobility physical networks to approximate disease transmission networks, whose generality to fit different diseases cannot be guaranteed. In this paper, we argue that intracity epidemic propagation should be modeled on a metapopulation base, and propose a twostep method for this purpose. The first step is to understand the propagation system by inferring the underlying disease infection network. To this end, we propose a novel network inference model called D2PRI, which reduces the individual network into a subpopulation network without information loss, and incorporates a powerlaw distribution prior and a data prior for better performance. The second step is to predict the disease propagation by extending the classic SIR model to a metapopulation SIR model that allows visitor transmission between any two subpopulations. The validity of our model is tested on a reallife clinical report data set on airborne disease in the city of Shenzhen, China. The D2PRI model with the extended SIR model exhibits superior performance in various tasks, including network inference, infection prediction, and outbreak simulation.
Multiobjective Modelbased Policy Search for Dataefficient Learning with Sparse Rewards ; The most dataefficient algorithms for reinforcement learning in robotics are modelbased policy search algorithms, which alternate between learning a dynamical model of the robot and optimizing a policy to maximize the expected return given the model and its uncertainties. However, current algorithms lack an effective exploration strategy to deal with sparse or misleading reward scenarios: if they do not experience any state with a positive reward during the initial random exploration, they are very unlikely to solve the problem. Here, we propose a novel modelbased policy search algorithm, MultiDEX, that leverages a learned dynamical model to efficiently explore the task space and solve tasks with sparse rewards in a few episodes. To achieve this, we frame the policy search problem as a multiobjective, modelbased policy optimization problem with three objectives: 1 generate maximally novel state trajectories, 2 maximize the expected return, and 3 keep the system in statespace regions for which the model is as accurate as possible. We then optimize these objectives using a Paretobased multiobjective optimization algorithm. The experiments show that MultiDEX is able to solve sparse reward scenarios with a simulated robotic arm in much lower interaction time than VIME, TRPO, GEPPG, CMAES and BlackDROPS.
Parameter redundancy in Type III functional response models with consumer interference ; The consumption rate is a process critically important for the stability of consumerresource systems and the persistence, sustainability and biodiversity of complex food webs. Its mathematical description in the form of functional response equations is a key problem for describing all trophic interactions. Because some of the functional response models used in this study present redundancy between their parameters, two methods were used to check for parameter redundancy: the Hessian matrix calculation using automatic differentiation AD, which calculates derivatives numerically but does not use finite differences, and the symbolic method, which calculates a derivative matrix and its rank. In this work, we find that the models that best describe the functional response of a rotifer are consumer dependent even at low consumer densities, but their parameters cannot be estimated simultaneously because of parameter redundancy. This means that fewer parameters, or only combinations of parameters, can be estimated than the original number of parameters in the models. Here, the model parameters that incorporate intraspecific competition by interference in the consumerresource interaction are not identifiable, suggesting that this problem may be more widespread than is generally appreciated in the food web literature. Including knowledge of competitive interactions in current model predictions will be a necessity for ecology in the coming years, as ecological models become more complex and more realistic. Identifiability of biological parameters in nonlinear ecological models will be an issue to consider.
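A minimal sketch of the kind of rank-based redundancy check the abstract mentions, using a numerical derivative (sensitivity) matrix. The Type-III-like functional response with interference, its parameter values, and the data grid below are illustrative; the point is that with consumer density held fixed, two of the parameters enter only through a common combination, so the derivative matrix is rank deficient.

```python
# Minimal sketch (not the paper's code) of a parameter-redundancy check via the
# effective rank of a central-difference derivative matrix.
import numpy as np

def response(params, N, P):
    """Consumption rate: a*N^2 / (1 + a*h*N^2 + c*P) (illustrative Type-III form)."""
    a, h, c = params
    return a * N**2 / (1.0 + a * h * N**2 + c * P)

def derivative_matrix(params, N, P, eps=1e-5):
    """Central-difference derivatives of the model output w.r.t. each parameter."""
    cols = []
    for j in range(len(params)):
        hi = np.array(params, dtype=float); hi[j] += eps
        lo = np.array(params, dtype=float); lo[j] -= eps
        cols.append((response(hi, N, P) - response(lo, N, P)) / (2 * eps))
    return np.column_stack(cols)

N = np.linspace(0.1, 10.0, 50)          # resource densities
P = np.full_like(N, 2.0)                # fixed consumer density
D = derivative_matrix([1.2, 0.3, 0.5], N, P)
s = np.linalg.svd(D, compute_uv=False)
rank = int(np.sum(s > 1e-6 * s[0]))     # tolerance relative to the largest singular value
print("singular values:", s)
print(f"effective rank {rank} of {D.shape[1]} parameters")
# rank < number of parameters signals redundancy: only combinations of the
# redundant parameters can be estimated from these data.
```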
A universal tensor network algorithm for any infinite lattice ; We present a general graphbased Projected EntangledPair State gPEPS algorithm to approximate ground states of nearestneighbor local Hamiltonians on any lattice or graph of infinite size. By introducing the structural matrix, which codifies the details of tensor networks on any graph in any dimension d, we are able to produce a code that can be essentially launched to simulate any lattice. We further introduce an optimized algorithm to compute simple tensor updates as well as expectation values and correlators with meanfieldlike effective environments. Though not variational, this strategy allows one to cope with PEPS of very large bond dimension (e.g., $D=100$), and produces remarkably accurate results in the thermodynamic limit in many situations, especially when the correlation length is small and the connectivity of the lattice is large. We prove the validity of our approach by benchmarking the algorithm against known results for several models, i.e., the antiferromagnetic Heisenberg model on chain, star and cubic lattices, the hardcore BoseHubbard model on the square lattice, the ferromagnetic Heisenberg model in a field on the pyrochlore lattice, as well as the 3state quantum Potts model in a field on the kagome lattice and the spin1 bilinearbiquadratic Heisenberg model on the triangular lattice. We further demonstrate the performance of gPEPS by studying the quantum phase transition of the 2d quantum Ising model in a transverse magnetic field on the square lattice, and the phase diagram of the KitaevHeisenberg model on the hyperhoneycomb lattice. Our results are in excellent agreement with previous studies.
Parameter identifiability of a respiratory mechanics model in an idealized preterm infant ; The complexity of mathematical models describing respiratory mechanics has grown in recent years to integrate with cardiovascular models and incorporate nonlinear dynamics. However, additional model complexity has rarely been studied in the context of patientspecific observable data. This study investigates parameter identification of a previously developed nonlinear respiratory mechanics model (Ellwein Fix, PLoS ONE 2018) tuned to the physiology of a 1 kg preterm infant, using local deterministic sensitivity analysis, subset selection, and gradientbased optimization. The model consists of 4 differential state equations with 31 parameters to predict airflow and dynamic pulmonary volumes and pressures generated under six simulation conditions. The relative sensitivity solutions of the model state equations with respect to each of the parameters were calculated with finite differences, and a sensitivity ranking was created for each parameter and simulation. Subset selection identified a set of independent parameters that could be estimated for all six simulations. The combination of these analyses produced a subset of 6 independent sensitive parameters that could be estimated given idealized clinical data. All optimizations performed using pseudodata with perturbed nominal parameters converged within 40 iterations and estimated parameters within 8% of nominal values on average. This analysis indicates the feasibility of performing parameter estimation on real patientspecific data sets described by a nonlinear respiratory mechanics model for studying dynamics in preterm infants.
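A minimal sketch of the finite-difference relative-sensitivity ranking step described in the abstract, applied to a toy one-compartment lung model rather than the paper's 4-state, 31-parameter model; the model form, nominal parameters, and driving pressure are illustrative assumptions.

```python
# Minimal sketch of finite-difference relative sensitivities and a sensitivity
# ranking for an ODE model (toy stand-in for the respiratory mechanics model).
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, R, C, Pmus):
    """Linear one-compartment lung: R*dV/dt + V/C = Pmus*sin(2*pi*t)."""
    V = y[0]
    P = Pmus * np.sin(2 * np.pi * t)           # simple periodic driving pressure
    return [(P - V / C) / R]

def simulate(params, t_eval):
    sol = solve_ivp(model, (t_eval[0], t_eval[-1]), [0.0], args=tuple(params),
                    t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

names = ["R", "C", "Pmus"]
nominal = np.array([0.05, 0.5, 1.0])           # illustrative nominal values
t = np.linspace(0, 2, 200)
base = simulate(nominal, t)

# Relative sensitivity (p / y) * dy/dp approximated with a 1% forward difference.
rank_scores = {}
for i, name in enumerate(names):
    pert = nominal.copy()
    pert[i] *= 1.01
    dy = (simulate(pert, t) - base) / (0.01 * nominal[i])
    rel = nominal[i] * dy / np.maximum(np.abs(base), 1e-12)
    rank_scores[name] = np.sqrt(np.mean(rel**2))   # 2-norm summary used for ranking

for name, score in sorted(rank_scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: relative sensitivity {score:.3f}")
```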
Impact of perception models on friendship paradox and opinion formation ; Topological heterogeneities of social networks have a strong impact on the individuals embedded in those networks. One of the interesting phenomena driven by such heterogeneities is the friendship paradox FP, stating that the mean degree of one's neighbors is larger than the degree of oneself. Alternatively, one can use the median degree of neighbors as well as the fraction of neighbors having a higher degree than oneself. Each of these reflects how people perceive their neighborhoods, i.e., their perception models, and hence how they feel peer pressure. In our paper, we study the impact of perception models on the FP by comparing three versions of the perception model in networks generated with a given degree distribution and a tunable degreedegree correlation or assortativity. Increasing assortativity is expected to decrease networklevel peer pressure, yet we find nontrivial behavior only for the meanbased perception model. By simulating opinion formation, in which the opinion adoption probability of an individual is given as a function of individual peer pressure, we find that it takes the longest time to reach consensus when individuals adopt the medianbased perception model, compared to the other versions. Our findings suggest that one needs to consider the proper perception model for better modeling of human behaviors and social dynamics.
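A minimal sketch of the three perception measures the abstract compares (mean, median, and fraction of higher-degree neighbors), computed per node. It assumes networkx is available and uses a Barabasi-Albert graph as an illustrative stand-in for the degree-distribution/assortativity-controlled networks studied in the paper.

```python
# Minimal sketch: how often the friendship paradox "holds" for each node under
# three perception models (mean-, median-, and fraction-based).
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(5000, 3, seed=1)   # stand-in network
deg = dict(G.degree())

mean_holds, median_holds, frac_holds = 0, 0, 0
for v in G:
    nbr_deg = np.array([deg[u] for u in G[v]])
    k = deg[v]
    mean_holds += np.mean(nbr_deg) > k           # mean-based perception
    median_holds += np.median(nbr_deg) > k       # median-based perception
    frac_holds += np.mean(nbr_deg > k) > 0.5     # fraction-based perception

n = G.number_of_nodes()
print(f"FP holds (mean-based):     {mean_holds / n:.2f}")
print(f"FP holds (median-based):   {median_holds / n:.2f}")
print(f"FP holds (fraction-based): {frac_holds / n:.2f}")
```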
Observational Viability of an Inflation Model with EModel nonMinimal Derivative Coupling ; Starting with a twofield model in which the fields and their derivatives are nonminimally coupled to gravity, and then using a conformal gauge, we obtain a model in which the derivatives of the canonically normalized field are nonminimally coupled to gravity. By adopting some appropriate functions, we study two cases with constant and Emodel nonminimal derivative coupling, while the potential in both cases is chosen to be the Emodel one. We show that, in contrast to the single field alphaattractor model where there is an attractor point in the large N and small alpha limits, in our setup and for both mentioned cases there is an attractor line in these limits to which the r-n_s trajectories tend. By studying the linear and nonlinear perturbations in this setup and comparing the numerical results with Planck2015 observational data, we obtain some constraints on the free parameter alpha. We show that by considering the Emodel potential and coupling function, the model is observationally viable for all values of M, the mass scale of the model. We use the observational constraints on the tensortoscalar ratio and the consistency relation to obtain some constraints on the sound speed of the perturbations in this model. As a result, we show that in a nonminimal derivative alphaattractor model, it is possible to have a small sound speed and therefore large nonGaussianity.
A versatile lattice Boltzmann model for immiscible ternary fluid flows ; We propose a lattice Boltzmann colorgradient model for immiscible ternary fluid flows, which is applicable to fluids with a full range of interfacial tensions, especially in nearcritical and critical states. An interfacial force for Nphase systems is derived based on the previously developed perturbation operator and is then introduced into the model using a body force scheme, which helps reduce spurious velocities. A generalized recoloring algorithm is applied to produce phase segregation and ensure immiscibility of three different fluids, where a novel form of segregation parameters is proposed by considering the existence of Neumann's triangle and the effect of the equilibrium contact angle at a threephase junction. The proposed model is first validated with three typical examples, namely the interface capturing for two separate static droplets, the YoungLaplace test for a compound droplet, and the spreading of a droplet between two stratified fluids. This model is then used to study the structure and stability of double droplets in a static matrix. Consistent with the theoretical stability diagram, seven possible equilibrium morphologies are successfully reproduced by adjusting two ratios of the interfacial tensions. By simulating Janus droplets in various geometric configurations, the model is shown to be accurate when the three interfacial tensions satisfy a Neumann's triangle. In addition, we also simulate the nearcritical and critical states of double droplets, where the outcomes are very sensitive to model accuracy. Our results show that the present model is advantageous for threephase flow simulations, and allows for accurate simulation of nearcritical and critical states.
Interactive Agent Modeling by Learning to Probe ; The ability to model other agents, such as understanding their intentions and skills, is essential to an agent's interactions with other agents. Conventional agent modeling relies on passive observation from demonstrations. In this work, we propose an interactive agent modeling scheme enabled by encouraging an agent to learn to probe. In particular, the probing agent (i.e., a learner) learns to interact with the environment and with a target agent (i.e., a demonstrator) to maximize the change in the observed behaviors of that agent. Through probing, rich behaviors can be observed and used to enhance the agent modeling, yielding a more accurate mind model of the target agent. Our framework consists of two learning processes: i imitation learning for an approximated agent model and ii pure curiositydriven reinforcement learning for an efficient probing policy to discover new behaviors that otherwise cannot be observed. We have validated our approach in four different tasks. The experimental results suggest that the agent model learned by our approach i generalizes better in novel scenarios than the ones learned by passive observation, random probing, and other curiositydriven approaches do, and ii can be used for enhancing performance in multiple applications including distilling optimal planning to a policy net, collaboration, and competition. A video demo is available at httpswww.dropbox.coms8mz6rd3349tso67ProbingDemo.movdl0
Retrofit Control with Approximate Environment Modeling ; In this paper, we develop a retrofit control method with approximate environment modeling. Retrofit control is a modular control approach for a general stable network system whose subsystems are supposed to be managed by their corresponding subsystem operators. From the standpoint of a single subsystem operator who performs the design of a retrofit controller, the subsystems managed by all other operators can be regarded as an environment, the complete system model of which is assumed not to be available. The proposed retrofit control with approximate environment modeling has the advantage that the stability of the resultant control system is robustly assured regardless of not only the stability of approximate environment models, but also the magnitude of modeling errors, provided that the network system before implementing retrofit control is originally stable. This robustness property is of practical significance for incorporating existing methods for identifying unknown environments, because the accuracy of identified models may be neither reliable nor assurable in reality. Furthermore, we conduct a control performance analysis to show that the resultant performance can be regulated by adjusting the accuracy of approximate environment modeling. The efficiency of the proposed retrofit control is shown by numerical experiments on a network of secondorder oscillators.
Unified Statistical Channel Model for TurbulenceInduced Fading in Underwater Wireless Optical Communication Systems ; A unified statistical model is proposed to characterize turbulenceinduced fading in underwater wireless optical communication UWOC channels in the presence of air bubbles and temperature gradient for fresh and salty waters, based on experimental data. In this model, the channel irradiance fluctuations are characterized by the mixture ExponentialGeneralized Gamma EGG distribution. We use the expectation maximization EM algorithm to obtain the maximum likelihood parameter estimation of the new model. Interestingly, the proposed model is shown to provide a perfect fit with the measured data under all channel conditions for both types of water. The major advantage of the new model is that it has a simple mathematical form making it attractive from a performance analysis point of view. Indeed, we show that the application of the EGG model leads to closedform and analytically tractable expressions for key UWOC system performance metrics such as the outage probability, the average biterror rate, and the ergodic capacity. To the best of our knowledge, this is the firstever comprehensive channel model addressing the statistics of optical beam irradiance fluctuations in underwater wireless optical channels due to both air bubbles and temperature gradient.
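A minimal sketch of the EM fitting step the abstract mentions, for a two-component mixture of an Exponential and a Generalized Gamma distribution fit to irradiance-like samples. The M-step below maximizes the weighted log-likelihood numerically with scipy, which is one reasonable implementation but not necessarily the closed-form updates used in the paper; the synthetic data and initial values are illustrative.

```python
# Minimal EM sketch for a two-component Exponential + Generalized Gamma (EGG)
# mixture fit to synthetic "irradiance" samples.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
x = np.concatenate([rng.exponential(0.3, 400),
                    stats.gengamma.rvs(a=2.0, c=1.5, scale=1.0, size=600, random_state=0)])

w, lam = 0.5, 1.0                     # mixture weight and exponential rate (initial)
gg = np.array([1.5, 1.0, 1.0])        # generalized-gamma (a, c, scale) initial guess

for _ in range(50):
    # E-step: responsibilities of the exponential component.
    p_exp = w * stats.expon.pdf(x, scale=1 / lam)
    p_gg = (1 - w) * stats.gengamma.pdf(x, a=gg[0], c=gg[1], scale=gg[2])
    r = p_exp / (p_exp + p_gg + 1e-300)

    # M-step: weighted MLE for each component.
    w = r.mean()
    lam = r.sum() / (r * x).sum()     # closed form for the exponential rate

    def nll(theta):                   # weighted negative log-likelihood, GG component
        a, c, s = np.exp(theta)       # positivity enforced via log-parameters
        return -np.sum((1 - r) * stats.gengamma.logpdf(x, a=a, c=c, scale=s))
    gg = np.exp(optimize.minimize(nll, np.log(gg), method="Nelder-Mead").x)

print(f"weight={w:.3f}, exp rate={lam:.3f}, gengamma (a, c, scale)={gg.round(3)}")
```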
Higher order selfdual models for spin3 particles in D=2+1 ; In $D=2+1$ dimensions, elementary particles of a given helicity can be described by local Lagrangians (parity singlets). By means of a soldering procedure, two opposite helicities can be joined together and give rise to massive spin-$s$ particles carrying both helicities $\pm s$ (parity doublets); such Lagrangians can also be used in $D=3+1$ to describe massive spin-$s$ particles. From this point of view, the parity singlets (selfdual models) in $D=2+1$ are the building blocks of real massive elementary particles in $D=3+1$. In the three cases $s=1, 3/2, 2$ there are $2s$ selfdual models of order $1, 2, \cdots, 2s$ in derivatives. In the spin3 case the 5th order model is missing in the literature. Here we deduce a 5th order spin3 selfdual model and fill this gap. It is shown to be ghost free by means of a master action which relates it with the top model of 6th order. We believe that our approach can be generalized to arbitrary integer spins in order to obtain the models of order $2s$ and $2s-1$. We also comment on the difficulties in relating the 5th order model with its lower order duals.
A Model Parallel Proximal Stochastic Gradient Algorithm for Partially Asynchronous Systems ; Large models are prevalent in modern machine learning scenarios, including deep learning, recommender systems, etc., and can have millions or even billions of parameters. Parallel algorithms have become an essential solution technique for many largescale machine learning jobs. In this paper, we propose a model parallel proximal stochastic gradient algorithm, AsyBProxSGD, to deal with large models using model parallel blockwise updates while at the same time handling a large amount of training data using proximal stochastic gradient descent ProxSGD. In our algorithm, worker nodes communicate with the parameter servers asynchronously, and each worker performs a proximal stochastic gradient step for only one block of model parameters during each iteration. Our proposed algorithm generalizes ProxSGD to the asynchronous and model parallel setting. We prove that AsyBProxSGD achieves a convergence rate of $O(1/\sqrt{K})$ to stationary points for nonconvex problems under constant minibatch sizes, where $K$ is the total number of block updates. This rate matches the bestknown rates of convergence for a wide range of gradientlike algorithms. Furthermore, we show that when the number of workers is bounded by $O(K^{1/4})$, we can expect AsyBProxSGD to achieve linear speedup as the number of workers increases. We implement the proposed algorithm on MXNet and demonstrate its convergence behavior and nearlinear speedup on a realworld dataset involving both a large model size and large amounts of data.
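A minimal, single-process sketch of the per-block proximal stochastic gradient update the abstract describes, using an L1-regularized least-squares problem as the example objective. The asynchronous parameter-server machinery of AsyBProxSGD is omitted, and the problem, step size, and regularizer below are illustrative assumptions.

```python
# Minimal sketch of block-wise proximal SGD for L1-regularized least squares.
# Each iteration updates one block only, mimicking one worker's update.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (the example regularizer used here)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
n, d, n_blocks = 2000, 200, 4
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[:20] = rng.normal(size=20)   # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=n)

x = np.zeros(d)
blocks = np.array_split(np.arange(d), n_blocks)
step, lam, batch = 1e-3, 0.1, 64

for it in range(5000):
    j = it % n_blocks                              # each "worker" owns one block
    idx = rng.choice(n, batch, replace=False)      # mini-batch of training data
    grad_block = A[idx][:, blocks[j]].T @ (A[idx] @ x - b[idx]) / batch
    x[blocks[j]] = soft_threshold(x[blocks[j]] - step * grad_block, step * lam)

print("recovered support (first entries):", np.nonzero(np.abs(x) > 1e-3)[0][:10])
```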
ScalaronHiggs inflation ; In scalaronHiggs inflation the Standard Model Higgs boson is nonminimally coupled to gravity and the EinsteinHilbert action is supplemented by the quadratic scalar curvature invariant. For the quartic Higgs selfcoupling lambda fixed at the electroweak scale, we find that the resulting inflationary twofield model effectively reduces to a single field model with the same predictions as in Higgs inflation or Starobinsky inflation, including the limit of a vanishing nonminimal coupling. For the same model, but with the scalar field a priori not identified with the Standard Model Higgs boson, we study the inflationary consequences of an extremely small lambda. Depending on the initial conditions for the inflationary background trajectories, we find that the twofield dynamics either again reduces to an effective singlefield model with a larger tensortoscalar ratio than predicted in Higgs inflation and Starobinsky inflation, or involves the full twofield dynamics and leads to oscillatory features in the inflationary power spectrum. Finally, we investigate under which conditions the inflationary scenario with extremely small lambda can be realized dynamically by the Standard Model renormalization group flow and discuss how the scalaronHiggs model can provide a natural way to stabilize the electroweak vacuum.
An HAMBased Analytic Modeling Methodology for Memristor Enabling Fast Convergence ; The memristor has great application prospects in various highperformance electronic systems, such as memory, artificial intelligence, and neural networks, due to its fast speed, nanoscale dimensions, and lowpower consumption. However, traditional nonanalytic models and recently reported analytic models for the memristor suffer from nonconvergence and slow convergence, respectively. These problems pose great obstacles to the analysis, simulation, and design of memristors. To address these problems, a modeling methodology for an analytic approximate solution of the state variable in the memristor is proposed in this work. This methodology solves the governing equation of the memristor by adopting the Homotopy Analysis Method HAM for the first time, and the convergence performance of the methodology is enhanced by an optimized convergencecontrol parameter in the HAM. The simulation results, compared with previously reported analytic models, demonstrate that the HAMbased modeling methodology achieves faster convergence while guaranteeing sufficient accuracy. Based on the methodology, it is simultaneously revealed that high nonlinearity is a potential source of slow convergence in analytic models, which is beneficial for analysis and design guidance. In addition, a SPICE subcircuit is constructed based on the obtained HAM model, and it is then integrated into an oscillator to verify its applicability. Due to the generality of the HAM, this methodology may be easily extended to other memory devices.
The CEO Problem with rth Power of Difference and Logarithmic Distortions ; The CEO problem has received much attention since it was first introduced by Berger et al., but there are limited results on nonGaussian models with nonquadratic distortion measures. In this work, we extend the quadratic Gaussian CEO problem to two nonGaussian settings with a general rth power of difference distortion. Assuming an identical observation channel across agents, we study the asymptotics of the distortion decay as the number of agents and the sum rate, $R_{\rm sum}$, grow without bound, while individual rates vanish. The first setting is a regular sourceobservation model with rth power of difference distortion, which subsumes the quadratic Gaussian CEO problem, and we establish that the distortion decays as $\mathcal{O}(R_{\rm sum}^{-r/2})$ when $r \ge 2$. We use sample median estimation after the BergerTung scheme for achievability. The other setting is a nonregular sourceobservation model, including uniform additive noise models, with rth power of difference distortion, for which estimationtheoretic regularity conditions do not hold. A distortion decay of $\mathcal{O}(R_{\rm sum}^{-r})$ when $r \ge 1$ is obtained for the nonregular model by a midrange estimator following the BergerTung scheme. We also provide converses based on the Shannon lower bound for the regular model and the ChazanZakaiZiv bound for the nonregular model, respectively. Lastly, we provide a sufficient condition for the regular model under which quadratic and logarithmic distortions are asymptotically equivalent, by an entropy power relationship, as the number of agents grows. This proof relies on the Bernsteinvon Mises theorem.
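A minimal numerical sketch of the two estimators invoked in the achievability arguments: the sample median for a regular observation model and the midrange for uniform additive noise (a nonregular model). Rate and quantization constraints are ignored; the noise scales and source value are illustrative, and the point is only the faster error decay with the number of agents in the nonregular case.

```python
# Minimal sketch comparing median and midrange estimation error decay with the
# number of agents L (no quantization/rate constraints modeled here).
import numpy as np

rng = np.random.default_rng(0)
source = 0.7                                      # fixed source realization
for L in [10, 100, 1000, 10000]:                  # number of agents
    trials = 2000
    median_err, midrange_err = 0.0, 0.0
    for _ in range(trials):
        y_reg = source + rng.laplace(scale=0.5, size=L)      # regular model
        y_uni = source + rng.uniform(-0.5, 0.5, size=L)      # nonregular model
        median_err += (np.median(y_reg) - source) ** 2
        mid = 0.5 * (y_uni.min() + y_uni.max())              # midrange estimator
        midrange_err += (mid - source) ** 2
    print(f"L={L:6d}  median MSE ~ {median_err/trials:.2e}  "
          f"midrange MSE ~ {midrange_err/trials:.2e}")
# The median MSE shrinks roughly like 1/L, while the midrange MSE shrinks roughly
# like 1/L^2, illustrating the faster decay available in the nonregular setting.
```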
BiasVariance Tradeoff and Model Selection for Proton Radius Extractions ; Intuitively, a scientist might assume that a more complex regression model will necessarily yield a better predictive model of experimental data. Herein, we disprove this notion in the context of extracting the proton charge radius from charge form factor data. Using a Monte Carlo study, we show that a simpler regression model can in certain cases be the better predictive model. This is especially true with noisy data, where the complex model fits the noise instead of the physical signal. Thus, in order to select the appropriate regression model to employ, a clear technique should be used, such as the Akaike information criterion or the Bayesian information criterion, ideally selected prior to seeing the results. To ensure a reasonable fit, the scientist should also make regression quality plots, such as residual plots, and not just rely on a single criterion such as the reduced $\chi^2$. When we apply these techniques to low fourmomentum transfer cross section data, we find a proton radius that is consistent with the muonic Lamb shift results. While presented for the case of proton radius extraction, these concepts are applicable in general and can be used to illustrate the necessity of balancing bias and variance when building a regression model and validating results, ideas that are at the heart of modern machine learning algorithms.
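A minimal sketch of the information-criterion bookkeeping the abstract recommends, applied to polynomial fits of increasing order on synthetic data. The data, uncertainties, and polynomial family below are illustrative stand-ins for form factor fits; the example only shows how AIC/BIC can prefer a simpler model even when a higher-order fit has a lower residual.

```python
# Minimal sketch of AIC/BIC model selection among polynomial fits of increasing order.
import numpy as np

rng = np.random.default_rng(1)
Q2 = np.linspace(0.01, 0.5, 40)
sigma = 0.01
truth = 1 - 0.7 * Q2 + 0.2 * Q2**2                  # smooth synthetic "form factor"
data = truth + rng.normal(0, sigma, Q2.size)

for order in range(1, 7):
    coeffs = np.polyfit(Q2, data, order)
    resid = data - np.polyval(coeffs, Q2)
    chi2 = np.sum((resid / sigma) ** 2)
    k = order + 1                                    # number of fit parameters
    n = Q2.size
    aic = chi2 + 2 * k                               # up to an additive constant
    bic = chi2 + k * np.log(n)
    print(f"order {order}:  chi2/dof={chi2/(n-k):.2f}  AIC={aic:.1f}  BIC={bic:.1f}")
# The model minimizing AIC/BIC is preferred; a lower reduced chi2 alone does not
# justify adding more parameters.
```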
Surrogate model for an alignedspin effective one body waveform model of binary neutron star inspirals using Gaussian process regression ; Fast and accurate waveform models are necessary for measuring the properties of inspiraling binary neutron star systems such as GW170817. We present a frequencydomain surrogate version of the alignedspin binary neutron star waveform model using the effective one body formalism known as SEOBNRv4T. This model includes the quadrupolar and octupolar adiabatic and dynamical tides. The version presented here is improved by the inclusion of the spininduced quadrupole moment effect, and completed by a prescription for tapering the end of the waveform to qualitatively reproduce numerical relativity simulations. The resulting model has 14 intrinsic parameters. We reduce its dimensionality by using universal relations that approximate all matter effects in terms of the leading quadrupolar tidal parameters. The implementation of the timedomain model can take up to an hour to evaluate using a starting frequency of 20 Hz, and this is too slow for many parameter estimation codes that require $\mathcal{O}(10^7)$ sequential waveform evaluations. We therefore construct a fast and faithful frequencydomain surrogate of this model using Gaussian process regression. The resulting surrogate has a maximum mismatch of $4.5\times 10^{-4}$ for the Advanced LIGO detector, and requires 0.13 s to evaluate for a waveform with a starting frequency of 20 Hz. Finally, we perform an endtoend test of the surrogate with a set of parameter estimation runs, and find that the surrogate accurately recovers the parameters of injected waveforms.
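A minimal sketch of building a Gaussian process regression surrogate for an expensive function of source parameters, the general technique the abstract describes. The cheap analytic target function, two-dimensional parameter space, kernel choice, and training set size below are illustrative assumptions, not the SEOBNRv4T data or the paper's surrogate construction.

```python
# Minimal sketch: GPR surrogate of an "expensive" scalar quantity over a
# two-parameter space, with a held-out accuracy check.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(theta):
    """Stand-in for an expensive waveform quantity (e.g., a phase coefficient)."""
    m, lam = theta[..., 0], theta[..., 1]
    return np.sin(3 * m) + 0.1 * lam * m**2

rng = np.random.default_rng(0)
X_train = rng.uniform([1.0, 0.0], [2.0, 1.0], size=(200, 2))   # training nodes
y_train = expensive_model(X_train)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3])
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_test = rng.uniform([1.0, 0.0], [2.0, 1.0], size=(1000, 2))
y_pred, y_std = gpr.predict(X_test, return_std=True)
err = np.max(np.abs(y_pred - expensive_model(X_test)))
print(f"max surrogate error: {err:.2e} (predicted 1-sigma up to {y_std.max():.2e})")
```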
Structural Supervision Improves Learning of NonLocal Grammatical Dependencies ; Stateoftheart LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of nonlocal grammatical dependencies with some success. Here we investigate whether supervision with hierarchical structure enhances learning of a range of grammatical dependencies, a question that has previously been addressed only for subjectverb agreement. Using controlled experimental methods from psycholinguistics, we compare the performance of wordbased LSTM models against two models that represent hierarchical structure and deploy it in lefttoright processing: Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) and an incrementalized version of the ParsingasLanguageModeling configuration of Charniak et al. (2016). Models are tested on a diverse range of configurations for two classes of nonlocal grammatical dependencies in English: Negative Polarity licensing and FillerGap Dependencies. Using the same training data across models, we find that structurallysupervised models outperform the LSTM, with the RNNG demonstrating the best results on both types of grammatical dependencies and even learning many of the Island Constraints on the fillergap dependency. Structural supervision thus provides data efficiency advantages over purely stringbased training of neural language models in acquiring humanlike generalizations about nonlocal grammatical dependencies.
Distributed deep learning for robust multisite segmentation of CT imaging after traumatic brain injury ; Machine learning models are becoming commonplace in the domain of medical imaging, and with these methods comes an everincreasing need for more data. However, to preserve patient anonymity it is frequently impractical or prohibited to transfer protected health information PHI between institutions. Additionally, due to the nature of some studies, there may not be a large public dataset available on which to train models. To address this conundrum, we analyze the efficacy of transferring the model itself in lieu of data between different sites. By doing so we accomplish two goals: 1 the model gains access to training on a larger dataset that it could not normally obtain, and 2 the model better generalizes, having trained on data from separate locations. In this paper, we implement multisite learning with disparate datasets from the National Institutes of Health NIH and Vanderbilt University Medical Center VUMC without compromising PHI. Three neural networks are trained to convergence on a computed tomography CT brain hematoma segmentation task: one only with NIH data, one only with VUMC data, and one multisite model alternating between NIH and VUMC data. Resultant lesion masks with the multisite model attain an average Dice similarity coefficient of 0.64, and the automatically segmented hematoma volumes correlate with those segmented manually with a Pearson correlation coefficient of 0.87, corresponding to an 8% and 5% improvement, respectively, over the singlesite model counterparts.
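A minimal sketch of the two evaluation metrics quoted in the abstract: the Dice similarity coefficient between binary segmentation masks and the Pearson correlation between automatic and manual volumes. The random masks and volumes below are stand-ins for real CT segmentations.

```python
# Minimal sketch of Dice similarity and volume correlation metrics.
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.7            # stand-in predicted CT mask
truth = rng.random((64, 64, 64)) > 0.7           # stand-in manual mask
print(f"Dice: {dice(pred, truth):.3f}")

# Volume agreement across a cohort of cases.
auto_vol = rng.normal(30, 10, 50)                # stand-in automatic volumes (mL)
manual_vol = auto_vol + rng.normal(0, 5, 50)     # stand-in manual volumes (mL)
r = np.corrcoef(auto_vol, manual_vol)[0, 1]
print(f"Pearson correlation of volumes: {r:.2f}")
```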