Detecting Online Hate Speech Approaches Using Weak Supervision and Network Embedding Models ; The ubiquity of social media has transformed online interactions among individuals. Despite positive effects, it has also allowed antisocial elements to unite in alternative social media environments, e.g., Gab.com, like never before. Detecting such hateful speech using automated techniques can allow social media platforms to moderate their content and prevent nefarious activities like hate speech propagation. In this work, we propose a weak supervision deep learning model that (i) quantitatively uncovers hateful users and (ii) presents a novel qualitative analysis to uncover indirect hateful conversations. This model scores content on the interaction level, rather than the post or user level, and allows for characterization of users who most frequently participate in hateful conversations. We evaluate our model on 19.2M posts and show that our weak supervision model outperforms the baseline models in identifying indirect hateful interactions. We also analyze a multilayer network, constructed from two types of user interactions in Gab (quote and reply) and interaction scores from the weak supervision model as edge weights, to predict hateful users. We utilize multilayer network embedding methods to generate features for the prediction task and we show that considering user context from multiple networks helps achieve better predictions of hateful users in Gab. We receive up to 7% performance gain compared to single layer or homogeneous network embedding models.
Parameters of the Supernova-Driven Interstellar Turbulence ; Galactic dynamo models take as input certain parameters of the interstellar turbulence, most essentially the correlation time $\tau$, root-mean-square turbulent speed $u$, and correlation scale $l$. However, these quantities are difficult, or, in the case of $\tau$, impossible, to directly observe, and theorists have mostly relied on order-of-magnitude estimates. Here we present an analytic model to derive these quantities in terms of a small set of more accessible parameters. In our model, turbulence is assumed to be driven concurrently by isolated supernovae (SNe) and superbubbles (SBs), but clustering of SNe to form SBs can be turned off if desired, which reduces the number of model parameters by about half. In general, we find that isolated SNe and SBs can inject comparable amounts of turbulent energy into the interstellar medium, but SBs do so less efficiently. This results in rather low overall conversion rates of SN energy into turbulent energy of $\sim$1-3%. The results obtained for $l$, $u$ and $\tau$ for model parameter values representative of the Solar neighbourhood are consistent with those determined from direct numerical simulations. Our analytic model can be combined with existing dynamo models to predict more directly the magnetic field properties for nearby galaxies or for statistical populations of galaxies in cosmological models.
An India-specific Compartmental Model for COVID-19 Projections and Intervention Strategies by Incorporating Geographical, Infrastructural and Response Heterogeneity ; We present a compartmental metapopulation model for the spread of COVID-19 in India. Our model simulates populations at a district or state level using an epidemiological model that is appropriate to COVID-19. Different districts are connected by a transportation matrix developed using available census data. We introduce uncertainties in the testing rates into the model, which takes into account the disparate responses of the different states to the epidemic and also factors in the state of the public healthcare system. Our model allows us to generate qualitative projections of COVID-19 spread in India, and further allows us to investigate the effects of different proposed interventions. By building in heterogeneity at geographical and infrastructural levels and in local responses, our model aims to capture some of the complexity of epidemiological modeling appropriate to a diverse country such as India.
Designing angle-independent structural colors using Monte Carlo simulations of multiple scattering ; Disordered nanostructures with correlations on the scale of visible wavelengths can show angle-independent structural colors. These materials could replace dyes in some applications because the color is tunable and resists photobleaching. However, designing nanostructures with a prescribed color is difficult, especially when the application (cosmetics or displays, for example) requires specific component materials. A general approach to solving this constrained design problem is modeling and optimization: using a model that predicts the color of a given system, one optimizes the model parameters under constraints to achieve a target color. For this approach to work, the model must make accurate predictions, which is challenging because disordered nanostructures exhibit multiple scattering. To address this challenge, we develop a Monte Carlo model that simulates multiple scattering of light in disordered arrangements of spherical particles or voids. The model produces quantitative agreement with measurements when we account for roughness on the surface of the film, particle polydispersity, and wavelength-dependent absorption in the components. Unlike discrete numerical simulations, our model is parameterized in terms of experimental variables, simplifying the connection between simulation and fabrication. To demonstrate this approach, we reproduce the color of the male mountain bluebird (Sialia currucoides) in an experimental system, using prescribed components and a microstructure that is easy to fabricate. Finally, we use the model to find the limits of angle-independent structural colors for a given system. These results enable an engineering design approach to structural color for many different applications.
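A minimal sketch of the Monte Carlo photon-transport idea described above, not the authors' code: photons take exponentially distributed steps between scattering events in a slab, lose weight to absorption along each step, and the surviving weight that exits the top surface is tallied as reflectance. All parameter values (scattering and absorption lengths, anisotropy, thickness) are illustrative assumptions; the paper's model additionally handles surface roughness, polydispersity, and wavelength-dependent absorption.

```python
import numpy as np

rng = np.random.default_rng(0)

def reflectance(n_photons=20000, l_scat=1.0, l_abs=50.0, g=0.6, thickness=10.0):
    """Estimate diffuse reflectance of a slab via photon random walks (1D depth).

    l_scat: scattering mean free path, l_abs: absorption length,
    g: scattering anisotropy (mean cosine), thickness: film thickness.
    All values are illustrative, not fitted to any real material.
    """
    reflected = 0.0
    for _ in range(n_photons):
        z, mu, w = 0.0, 1.0, 1.0            # depth, direction cosine, photon weight
        while True:
            step = rng.exponential(l_scat)
            z += mu * step
            w *= np.exp(-step / l_abs)      # attenuate by absorption along the step
            if z <= 0.0:                    # escaped through the top: tally as reflectance
                reflected += w
                break
            if z >= thickness or w < 1e-4:  # transmitted or effectively absorbed
                break
            # sample a new scattering angle from the Henyey-Greenstein phase function
            u = rng.random()
            cos_t = (1 + g**2 - ((1 - g**2) / (1 - g + 2*g*u))**2) / (2*g)
            phi = 2 * np.pi * rng.random()
            mu = mu*cos_t + np.sqrt(1 - mu**2) * np.sqrt(1 - cos_t**2) * np.cos(phi)
            mu = float(np.clip(mu, -1.0, 1.0))
    return reflected / n_photons

print(f"estimated reflectance: {reflectance():.3f}")
```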
Stratified Global MHD Models of Accretion Disks in Semi-Detached Binaries ; We present results of the first global magnetohydrodynamic (MHD) simulations of accretion disks fed by Roche lobe overflow, including vertical stratification, in order to investigate the roles of spiral shocks, magnetorotational instability (MRI), and the accretion stream on disk structure and evolution. Our models include a simple treatment of gas thermodynamics, with orbital Mach numbers at the inner edge of the disk $M_{\rm in}$ of 5 and 10. We find mass accretion rates to vary considerably on all time scales, with only the Mach 5 model reaching a clear quasi-stationary state. For Mach 10, the model undergoes an outside-in magnetically-driven accretion event occurring on a time scale of $\sim$10 orbital periods of the binary. Both models exhibit spiral shocks inclined with respect to the binary plane, with their position and inclination changing rapidly. However, the time-averaged location of these shocks in the equatorial plane is well-fit by simple linear models. MRI turbulence in the disk generates toroidal magnetic field patterns (butterfly diagrams) that are in some cases irregular, perhaps due to interaction with spiral structure. While many of our results are in good agreement with local studies, we find some features (most notably those related to spiral shocks) can only be captured in global models such as studied here. Thus, while global studies remain computationally expensive even as idealized models, they are essential, along with more sophisticated treatment of radiation transport and disk thermodynamics, for furthering our understanding of accretion in binary systems.
Improving AMR Parsing with Sequence-to-Sequence Pre-training ; In the literature, research on abstract meaning representation (AMR) parsing is much restricted by the size of human-curated datasets, which are critical to building an AMR parser with good performance. To alleviate this data size restriction, pre-trained models have been drawing more and more attention in AMR parsing. However, previous pre-trained models, like BERT, are implemented for general purposes and may not work as expected for the specific task of AMR parsing. In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing and propose a seq2seq pre-training approach to build pre-trained models in both single and joint ways on three relevant tasks, i.e., machine translation, syntactic parsing, and AMR parsing itself. Moreover, we extend the vanilla fine-tuning method to a multi-task learning fine-tuning method that optimizes for the performance of AMR parsing while endeavoring to preserve the response of the pre-trained models. Extensive experimental results on two English benchmark datasets show that both the single and joint pre-trained models significantly improve the performance (e.g., from 71.5 to 80.2 on AMR 2.0), which reaches the state of the art. The result is very encouraging since we achieve this with seq2seq models rather than complex models. We make our code and model available at https://github.com/xdqkid/S2S-AMR-Parser.
Metadata-Based Detection of Child Sexual Abuse Material ; Child Sexual Abuse Media (CSAM) is any visual record of a sexually-explicit activity involving minors. CSAM impacts victims differently from the actual abuse because the distribution never ends, and images are permanent. Machine learning-based solutions can help law enforcement quickly identify CSAM and block digital distribution. However, collecting CSAM imagery to train machine learning models has many ethical and legal constraints, creating a barrier to research development. With such restrictions in place, the development of CSAM machine learning detection systems based on file metadata uncovers several opportunities. Metadata is not a record of a crime, and it does not have legal restrictions. Therefore, investing in detection systems based on metadata can increase the rate of discovery of CSAM and help thousands of victims. We propose a framework for training and evaluating deployment-ready machine learning models for CSAM identification. Our framework provides guidelines to evaluate CSAM detection models against intelligent adversaries and to assess models' performance with open data. We apply the proposed framework to the problem of CSAM detection based on file paths. In our experiments, the best-performing model is based on convolutional neural networks and achieves an accuracy of 0.97. By evaluating the model against adversarially modified data, we show that the CNN model is robust against offenders actively trying to evade detection. Experiments with open datasets confirm that the model generalizes well and is deployment-ready.
Modelling of functional profiles and explainable shape shifts detection: An approach combining the notion of the Fréchet mean with the shape invariant model ; A modelling framework suitable for detecting shape shifts in functional profiles, combining the notion of the Fréchet mean and the concept of deformation models, is developed and proposed. The generalized mean sense offered by the Fréchet mean notion is employed to capture the typical pattern of the profiles under study, while the concept of deformation models, and in particular of the shape invariant model, allows for interpretable parameterizations of a profile's deviations from the typical shape. EWMA-type control charts compatible with the functional nature of the data and the employed deformation model are built and proposed, exploiting certain shape characteristics of the profiles under study with respect to the generalized mean sense and allowing for the identification of potential shifts concerning the shape and/or the deformation process. Potential shifts in the shape deformation process are further distinguished into significant shifts with respect to the amplitude and/or the phase of the profile under study. The proposed modelling and shift detection framework is implemented in a real-world case study, where daily concentration profiles concerning air pollutants from an area in the city of Athens are modelled, and profiles indicating hazardous concentration levels are successfully identified in most of the cases.
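For reference, EWMA-type charts like those proposed above are built on the recursion $z_t = \lambda x_t + (1-\lambda)\,z_{t-1}$ with time-varying control limits. The sketch below monitors a generic scalar feature; in the paper's setting the monitored quantity would be a shape or deformation characteristic of each functional profile, and the values of $\lambda$ and the limit width $L$ are illustrative choices.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """Generic EWMA control chart: returns the EWMA statistic and alarm flags.

    lam: smoothing weight, L: control-limit width in sigma units (illustrative).
    In the paper's setting, x would be a shape/deformation feature per profile.
    """
    x = np.asarray(x, float)
    mu0, sigma = x[:50].mean(), x[:50].std(ddof=1)  # in-control estimates from a baseline window
    z = np.empty_like(x)
    flags = np.zeros(len(x), bool)
    z_prev = mu0
    for t in range(len(x)):
        z[t] = lam * x[t] + (1 - lam) * z_prev
        z_prev = z[t]
        # time-varying limits: var(z_t) = sigma^2 * lam/(2-lam) * (1 - (1-lam)^(2(t+1)))
        half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        flags[t] = abs(z[t] - mu0) > half_width
    return z, flags

rng = np.random.default_rng(1)
x = np.r_[rng.normal(0, 1, 80), rng.normal(1.0, 1, 40)]  # mean shift at t = 80
z, flags = ewma_chart(x)
print("first alarm at t =", int(np.argmax(flags)) if flags.any() else None)
```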
Emulator-based global sensitivity analysis for flow-like landslide runout models ; Landslide runout modeling involves various uncertainties originating from model input data. It is therefore desirable to assess the model's sensitivity. A global sensitivity analysis that is capable of exploring the entire input space and accounting for all interactions often remains limited due to computational challenges resulting from the large number of necessary model runs. We address this research gap by integrating Gaussian process emulation into landslide runout modeling and applying it to the open-source simulation tool r.avaflow. The feasibility and efficiency of our approach is illustrated based on the 2017 Bondo landslide event. The sensitivity of aggregated model outputs, such as the apparent friction angle, impact area, as well as spatially resolved maximum flow height and velocity, to the dry-Coulomb friction coefficient, turbulent friction coefficient and the release volume is studied. The results of first-order effects are consistent with previous results of common one-at-a-time sensitivity analyses. In addition, our approach allows us to rigorously investigate interactions. Strong interactions are detected on the margins of the flow path where the expectation and variation of maximum flow height and velocity are small. The interactions generally become weak with increasing variation of maximum flow height and velocity. Moreover, there are stronger interactions between the two friction coefficients than between the release volume and each friction coefficient. In the future, it is promising to extend the approach to other computationally expensive tasks like uncertainty quantification, model calibration, and smart early warning.
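A sketch of the emulator-based workflow under stated assumptions: a cheap stand-in function replaces the r.avaflow run, a Gaussian process is fitted on a small design of experiments, and first-order Sobol indices are then estimated by Monte Carlo on the emulator (Saltelli-style pick-freeze estimator). The stand-in simulator, input names, and ranges are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)

def simulator(x):
    # Stand-in for an expensive runout run (e.g., r.avaflow); the inputs mimic the
    # dry-Coulomb friction, turbulent friction, and release volume, rescaled to [0, 1].
    return np.sin(3 * x[:, 0]) + x[:, 1] * x[:, 2] + 0.5 * x[:, 2] ** 2

X = rng.random((60, 3))                                 # small design of experiments
gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, simulator(X))

# First-order Sobol indices via the pick-freeze estimator, evaluated on the
# cheap emulator instead of the simulator itself.
N = 20000
A, B = rng.random((N, 3)), rng.random((N, 3))
fA, fB = gp.predict(A), gp.predict(B)
for i, name in enumerate(["dry-Coulomb", "turbulent", "volume"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                                 # vary only the i-th input
    S_i = np.mean(fB * (gp.predict(ABi) - fA)) / fA.var()
    print(f"S_{name} ~ {S_i:.2f}")
```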
Towards Self-Regulating AI: Challenges and Opportunities of AI Model Governance in Financial Services ; AI systems have found a wide range of application areas in financial services. Their involvement in broader and increasingly critical decisions has escalated the need for compliance and effective model governance. Current governance practices have evolved from more traditional financial applications and modeling frameworks. They often struggle with the fundamental differences in AI characteristics, such as uncertainty in the assumptions and the lack of explicit programming. AI model governance frequently involves complex review flows and relies heavily on manual steps. As a result, it faces serious challenges in effectiveness, cost, complexity, and speed. Furthermore, the unprecedented rate of growth in AI model complexity raises questions about the sustainability of the current practices. This paper focuses on the challenges of AI model governance in the financial services industry. As part of the outlook, we present a system-level framework towards increased self-regulation for robustness and compliance. This approach aims to enable potential solution opportunities through increased automation and the integration of monitoring, management, and mitigation capabilities. The proposed framework also provides model governance and risk management with improved capabilities to manage model risk during deployment.
Lambda Learner: Fast Incremental Learning on Data Streams ; One of the most well-established applications of machine learning is in deciding what content to show website visitors. When observation data comes from high-velocity, user-generated data streams, machine learning methods perform a balancing act between model complexity, training time, and computational costs. Furthermore, when model freshness is critical, the training of models becomes time-constrained. Parallelized batch offline training, although horizontally scalable, is often not time-considerate or cost-effective. In this paper, we propose Lambda Learner, a new framework for training models by incremental updates in response to mini-batches from data streams. We show that the resulting model of our framework closely estimates a periodically updated model trained on offline data and outperforms it when model updates are time-sensitive. We provide theoretical proof that the incremental learning updates improve the loss function over a stale batch model. We present a large-scale deployment on the sponsored content platform for a large social network, serving hundreds of millions of users across different channels (e.g., desktop, mobile). We address challenges and complexities from both algorithms and infrastructure perspectives, and illustrate the system details for computation, storage, and streaming production of training data.
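The nightly-batch-plus-streaming pattern described above can be illustrated with a logistic model updated on mini-batches while being regularized toward the stale offline model. The actual Lambda Learner framework performs Bayesian updates for generalized linear models, so this is only a sketch of the pattern, with all constants illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_update(w, X, y, w_prior, lr=0.1, lam=0.01):
    """One incremental update: a gradient step on the mini-batch logistic loss,
    regularized toward the stale batch model w_prior so the online model cannot
    drift freely. (Lambda Learner itself is Bayesian; this shows the pattern.)"""
    grad = X.T @ (sigmoid(X @ w) - y) / len(y) + lam * (w - w_prior)
    return w - lr * grad

d = 20
w_true = rng.normal(size=d)
w_batch = 0.1 * rng.normal(size=d)      # stale model from offline batch training
w = w_batch.copy()

for _ in range(500):                    # stream of mini-batches
    X = rng.normal(size=(64, d))
    y = (sigmoid(X @ w_true) > rng.random(64)).astype(float)
    w = minibatch_update(w, X, y, w_prior=w_batch)

print("distance to true weights:", float(np.linalg.norm(w - w_true).round(2)))
```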
Graph Deep Factors for Forecasting ; Deep probabilistic forecasting techniques have recently been proposed for modeling large collections of time-series. However, these techniques explicitly assume either complete independence (local model) or complete dependence (global model) between time-series in the collection. This corresponds to the two extreme cases where every time-series is disconnected from every other time-series in the collection, or likewise, every time-series is related to every other time-series, resulting in a completely connected graph. In this work, we propose a deep hybrid probabilistic graph-based forecasting framework called Graph Deep Factors (GraphDF) that goes beyond these two extremes by allowing nodes and their time-series to be connected to others in an arbitrary fashion. GraphDF is a hybrid forecasting framework that consists of a relational global model and a relational local model. In particular, we propose a relational global model that learns complex non-linear time-series patterns globally using the structure of the graph to improve both forecasting accuracy and computational efficiency. Similarly, instead of modeling every time-series independently, we learn a relational local model that considers not only its individual time-series but also the time-series of nodes that are connected in the graph. The experiments demonstrate the effectiveness of the proposed deep hybrid graph-based forecasting model compared to the state-of-the-art methods in terms of forecasting accuracy, runtime, and scalability. Our case study reveals that GraphDF can successfully generate cloud usage forecasts and opportunistically schedule workloads to increase cloud cluster utilization by 47.5% on average.
An Approach to Evaluating Learning Algorithms for Decision Trees ; Learning algorithms produce software models for realising critical classification tasks. Decision tree models are simpler than other models such as neural networks, and they are used in various critical domains such as medicine and aeronautics. Algorithms with low or unknown learning ability do not permit us to trust the produced software models, which leads to costly test activities for validating the models and to wasted learning time in case the models are likely to be faulty due to the learning inability. Methods for evaluating the learning ability of decision tree algorithms, as well as that of algorithms for other models, are needed, especially since the testing of learned models is still a hot topic. We propose a novel oracle-centered approach to evaluate the learning ability of learning algorithms for decision trees. It consists of generating data from reference trees playing the role of oracles, producing learned trees with existing learning algorithms, and determining the degree of correctness (DOE) of the learned trees by comparing them with the oracles. The average DOE is used to estimate the quality of the learning algorithm. We assess five decision tree learning algorithms based on the proposed approach.
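A minimal sketch of the oracle-centered loop, with details the abstract does not specify (oracle form, data distribution, probe-set evaluation) filled in as assumptions: label synthetic data with a hand-written reference tree, train a learner on it, and score agreement with the oracle on fresh inputs as the degree of correctness.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)

def oracle(X):
    # A hand-written reference tree playing the role of the oracle
    # (the paper would average over many randomly generated reference trees).
    return np.where(X[:, 0] < 0.5,
                    (X[:, 1] > 0.3).astype(int),
                    (X[:, 2] > 0.7).astype(int))

# Generate labeled data from the oracle, learn a tree, then measure the
# degree of correctness (DOE) as agreement with the oracle on fresh inputs.
X_train = rng.random((2000, 3))
clf = DecisionTreeClassifier(random_state=0).fit(X_train, oracle(X_train))

X_probe = rng.random((100000, 3))
doe = np.mean(clf.predict(X_probe) == oracle(X_probe))
print(f"DOE for this oracle: {doe:.3f}")
```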
Gradient discretization of two-phase poromechanical models with discontinuous pressures at matrix-fracture interfaces ; We consider a two-phase Darcy flow in a fractured and deformable porous medium for which the fractures are described as a network of planar surfaces, leading to so-called hybrid-dimensional models. The fractures are assumed open and filled by the fluids, and small deformations with a linear elastic constitutive law are considered in the matrix. As opposed to [10], the phase pressures are not assumed continuous at matrix-fracture interfaces, which raises new challenges in the convergence analysis related to the additional interfacial equations and unknowns for the flow. As shown in [16, 2], unlike for single-phase flow, discontinuous pressure models for two-phase flows provide better accuracy than continuous pressure models even for highly permeable fractures. This is due to the fact that fractures fully filled by one phase can act as barriers for the other phase, resulting in a pressure discontinuity at the matrix-fracture interface. The model is discretized using the gradient discretization method [22], which covers a large class of conforming and nonconforming schemes. This framework allows for a generic convergence analysis of the coupled model using a combination of discrete functional tools. In this work, the gradient discretization of [10] is extended to the discontinuous pressure model and the convergence to a weak solution is proved. Numerical solutions provided by the continuous and discontinuous pressure models are compared on gas injection and suction test cases using a Two-Point Flux Approximation (TPFA) finite volume scheme for the flows and P2 finite elements for the mechanics.
Bianchi II and VII$_{h=0}$ Models Revisited Via The Euclidean-Signature Semi-Classical Method ; We apply in a novel fashion a modified semi-classical method to the Bianchi II and VII$_{h=0}$ models when a cosmological constant, aligned electromagnetic field and stiff matter are present. Additionally we study the non-commutative quantum Bianchi II models when an aligned electromagnetic field is included. Through the use of the Euclidean-signature semi-classical method we find a plethora of new solutions to these models' corresponding Lorentzian-signature Wheeler-DeWitt equations which we can interpret qualitatively. These new solutions for the aforementioned models involving matter sources reveal some potentially interesting effects that should be chronicled as possible phenomena that a toy model of quantum gravity can induce on the evolution of a quantum universe. Furthermore we find 'excited' states which behave differently from the 'excited' states of the Bianchi IX and Taub models that were previously uncovered using this method. By comparing and contrasting the 'excited' states given by these models we help facilitate a better understanding of what constitutes an 'excited' state solution of the Wheeler-DeWitt equation. Our results further show the utility of the Euclidean-signature semi-classical method for tackling Lorentzian-signature problems without having to invoke a Wick rotation. This feature of not needing to apply a Wick rotation makes this method potentially very useful for tackling a variety of problems in bosonic relativistic field theory and quantum gravity.
Enhance Gender and Identity Preservation in Face Aging Simulation for Infants and Toddlers ; Realistic age-progressed photos provide invaluable biometric information in a wide range of applications. In recent years, deep learning-based approaches have made remarkable progress in modeling the aging process of the human face. Nevertheless, it remains a challenging task to generate accurate age-progressed faces from infant or toddler photos. In particular, the lack of visually detectable gender characteristics and the drastic appearance changes in early life contribute to the difficulty of the task. We propose a new deep learning method inspired by the successful Conditional Adversarial Autoencoder (CAAE, 2017) model. In our approach, we extend the CAAE architecture to (1) incorporate gender information, and (2) augment the model's overall architecture with an identity-preserving component based on facial features. We trained our model using the publicly available UTKFace dataset and evaluated it by simulating up to 100 years of aging on 1,156 male and 1,207 female infant and toddler face photos. Compared to the CAAE approach, our new model demonstrates noticeable visual improvements. Quantitatively, our model exhibits an overall gain of 77.0% (male) and 13.8% (female) in gender fidelity, measured by a gender classifier for the simulated photos across the age spectrum. Our model also demonstrates a 22.4% gain in identity preservation, measured by a facial recognition neural network.
Super Interacting Dark Sector: An Improvement on Self-Interacting Dark Matter via Scaling Relations of Galaxy Clusters ; Self-interacting dark matter is known as one of the most appropriate candidates for dark matter. Due to its excellent success in removing many astrophysical problems, particularly in small-scale structure, studying this model has taken on added significance. In this paper, we focus on the results of two previously performed simulations of cluster-sized halos with self-interacting dark matter and introduce a new function for the density profile of galaxy clusters, which can perfectly describe the results of these simulations. This density profile helps to find a velocity dispersion profile and also a relation between cluster mass and concentration parameter. Using these relations, we investigate two scaling relations of galaxy clusters, namely the mass-velocity dispersion and mass-temperature relations. The scaling relations reveal that in the self-interacting dark matter model, halos are more massive than what the standard non-interacting model predicts for any fixed temperature. We also study the mass-temperature relation for a hybrid interacting model, which is a combination of the self-interacting dark matter idea with another model of the dark sector in which the dark matter particle mass is determined according to its interaction with dark energy. This super interacting dark sector (SIDS) model can change the mass-temperature relation to a modified form that has the same result as a non-interacting model. Finally, we provide quantitative expressions that describe the constants of this interacting model in terms of the value of the cross-section per unit mass of dark matter particles.
Can nonlinear parametric oscillators solve random Ising models? ; We study large networks of parametric oscillators as heuristic solvers of random Ising models. In these networks, known as coherent Ising machines, the model to be solved is encoded in the coupling between the oscillators, and a solution is offered by the steady state of the network. This approach relies on the assumption that mode competition steers the network to the ground-state solution of the Ising model. By considering a broad family of frustrated Ising models, we show that the most-efficient mode does not generically correspond to the ground state of the Ising model. We infer that networks of parametric oscillators close to threshold are intrinsically not Ising solvers. Nevertheless, the network can find the correct solution if the oscillators are driven sufficiently above threshold, in a regime where nonlinearities play a predominant role. We find that for all probed instances of the model, the network converges to the ground state of the Ising model with a finite probability.
Spatiotemporal Imaging with Diffeomorphic Optimal Transportation ; We propose a variational model with diffeomorphic optimal transportation for joint image reconstruction and motion estimation. The proposed model is a product of assembling the Wasserstein distance with the Benamou-Brenier formula in optimal transportation and the flow of diffeomorphisms involved in large deformation diffeomorphic metric mapping, which is suitable for the scenario of spatiotemporal imaging with large diffeomorphic and mass-preserving deformations. Specifically, we first use the Benamou-Brenier formula to characterize the optimal transport cost among the flow of mass-preserving images, and restrict the velocity field to an admissible Hilbert space to guarantee that the generated deformation flow is diffeomorphic. We then obtain the ODE-constrained equivalent formulation of the Benamou-Brenier formula. We finally obtain the proposed model with ODE constraint following the framework presented in our previous work. We further derive the equivalent PDE-constrained optimal control formulation. The proposed model is compared against several existing alternatives theoretically. An alternating minimization algorithm is presented for solving the time-discretized version of the proposed model with ODE constraint. Several important issues on the proposed model and associated algorithms are also discussed. In particular, we present several potential models based on the proposed diffeomorphic optimal transportation. Under appropriate conditions, the proposed algorithm also provides a new scheme to solve the models using the quadratic Wasserstein distance. The performance is finally evaluated by several numerical experiments in space-time tomography, where the data are measured from sequential images with sparse views and/or various noise levels.
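For reference, the Benamou-Brenier dynamic formulation of the squared quadratic Wasserstein distance that the model above builds on reads (standard form, not the paper's exact notation):

```latex
W_2^2(\rho_0,\rho_1) \;=\; \min_{(\rho,v)} \int_0^1 \!\!\int_\Omega \rho(t,x)\,\lvert v(t,x)\rvert^2 \,\mathrm{d}x\,\mathrm{d}t
\quad \text{s.t.} \quad \partial_t \rho + \nabla\!\cdot(\rho v) = 0, \quad \rho(0,\cdot)=\rho_0, \quad \rho(1,\cdot)=\rho_1 .
```

Restricting the velocity field $v$ to an admissible Hilbert space, as the abstract describes, is what guarantees that the induced deformation flow is diffeomorphic.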
How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering ; Recent works have shown that language models (LMs) capture different types of knowledge regarding facts or common sense. However, because no model is perfect, they still fail to provide appropriate answers in many cases. In this paper, we ask the question: how can we know when language models know, with confidence, the answer to a particular query? We examine this question from the point of view of calibration, the property of a probabilistic model's predicted probabilities actually being well correlated with the probabilities of correctness. We examine three strong generative models (T5, BART, and GPT-2) and study whether their probabilities on QA tasks are well calibrated, finding the answer is a relatively emphatic no. We then examine methods to calibrate such models to make their confidence scores correlate better with the likelihood of correctness through fine-tuning, post-hoc probability modification, or adjustment of the predicted outputs or inputs. Experiments on a diverse range of datasets demonstrate the effectiveness of our methods. We also perform analysis to study the strengths and limitations of these methods, shedding light on further improvements that may be made in methods for calibrating LMs. We have released the code at https://github.com/jzbjyb/lm-calibration.
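A standard way to quantify the calibration property examined above is the expected calibration error (ECE): bin predictions by confidence and compare average confidence with empirical accuracy per bin. The sketch below is a generic diagnostic, not the released code; the synthetic over-confident model is purely illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average, over confidence bins, of the gap between the
    mean confidence and the empirical accuracy inside each bin."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# A model whose confidences are systematically too high is poorly calibrated:
rng = np.random.default_rng(5)
conf = rng.uniform(0.7, 1.0, 5000)        # model claims 70-100% confidence
correct = rng.random(5000) < conf - 0.2   # but is right 20 points less often
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")  # close to 0.20
```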
A Bimodal Weibull Distribution: Properties and Inference ; Modeling is a challenging topic, and using parametric models is an important stage in reaching a flexible function for modeling. The Weibull distribution has two parameters: shape $\alpha$ and scale $\beta$. In this study, a bimodality parameter is added, and a bimodal Weibull distribution is proposed by means of a quadratic transformation technique used to generate bimodal functions produced by the quadratic expression. The analytical simplicity of the Weibull and the quadratic form gives an advantage in deriving a bimodal Weibull via constructing the normalizing constant. The characteristics and properties of the proposed distribution are examined to show its usability in modeling. After this examination, as a first stage in the modeling issue, it is appropriate to use the bimodal Weibull for modeling data sets. Two estimation methods, maximum log-q likelihood and its special form, with objective functions $\log_q f$ and $\log f$, are used to estimate the shape, scale and bimodality parameters of the function. The second stage in modeling is handled by using a heuristic algorithm for the optimization of the objective function with respect to the parameters, due to the fact that convergence to the global point of the objective function is achieved by a heuristic algorithm based on stochastic optimization. Real data sets are provided to show the modeling competence of the proposed distribution.
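One way to realize the quadratic-transformation construction described above (the paper's exact parameterization may differ) is to multiply the Weibull density by a non-negative quadratic factor and renormalize numerically; placing the dip at the Weibull median and the parameter names below are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import weibull_min

def bimodal_weibull_pdf(x, alpha, beta, gamma):
    """Density proportional to (1 + (gamma*(x - m))^2) * Weibull(alpha, beta).

    The quadratic factor is smallest at x = m (here the Weibull median), which
    relatively suppresses mass there and can split the single mode in two for
    large gamma. Illustrative guess at the construction, not the paper's form.
    """
    m = beta * np.log(2) ** (1 / alpha)            # Weibull median, used as dip location
    unnorm = lambda t: (1 + (gamma * (t - m)) ** 2) * weibull_min.pdf(t, alpha, scale=beta)
    Z, _ = quad(unnorm, 0, np.inf)                 # normalizing constant
    return (1 + (gamma * (x - m)) ** 2) * weibull_min.pdf(x, alpha, scale=beta) / Z

xs = np.linspace(0.01, 5, 500)
pdf = bimodal_weibull_pdf(xs, alpha=2.0, beta=1.5, gamma=4.0)
print("integrates to ~1:", round(float(pdf.sum() * (xs[1] - xs[0])), 3))
```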
Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers ; We consider a wireless communication system that consists of a background emitter, a transmitter, and an adversary. The transmitter is equipped with a deep neural network (DNN) classifier for detecting the ongoing transmissions from the background emitter and transmits a signal if the spectrum is idle. Concurrently, the adversary trains its own DNN classifier as the surrogate model by observing the spectrum to detect the ongoing transmissions of the background emitter and generate adversarial attacks to fool the transmitter into misclassifying the channel as idle. This surrogate model may differ from the transmitter's classifier significantly because the adversary and the transmitter experience different channels from the background emitter, and therefore their classifiers are trained with different distributions of inputs. This system model may represent a setting where the background emitter is a primary user, the transmitter is a secondary user, and the adversary is trying to fool the secondary user into transmitting even though the channel is occupied by the primary user. We consider different topologies to investigate how the surrogate models trained by the adversary, depending on the differences in channel effects experienced by the adversary, affect the performance of the adversarial attack. The simulation results show that surrogate models trained with different distributions of channel-induced inputs severely limit the attack performance and indicate that the transferability of adversarial attacks is neither readily available nor straightforward to achieve, since surrogate models for wireless applications may significantly differ from the target model depending on channel effects.
A Mechanical System Inspired Microscopic Traffic Model: Modeling, Analysis, and Validation ; In this paper, we develop a mechanical system inspired microscopic traffic model to characterize the longitudinal interaction dynamics among a chain of vehicles. In particular, we extend our prior work on the mass-spring-damper-clutch based car-following model between two vehicles to the multi-vehicle scenario. This model can naturally capture the driver's tendency to maintain the same speed as the vehicle ahead while keeping a speed-dependent desired spacing. It is also capable of characterizing the impact of the following vehicle on the preceding vehicle, which is generally neglected in existing models. A new string stability criterion is defined for the considered multi-vehicle dynamics, and stability analysis is performed on the system parameters and time delays. An efficient online parameter identification algorithm, sequential recursive least squares with inverse QR decomposition (SRLS-IQR), is developed to estimate the driving-related model parameters. These real-time estimated parameters can be employed in advanced longitudinal control systems to enable accurate prediction of vehicle trajectories for improved safety and fuel efficiency. The proposed model and the parameter identification algorithm are validated on NGSIM, a naturalistic driving dataset, as well as on our own connected vehicle driving data. Promising performance is demonstrated.
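The online identification step can be illustrated with the textbook recursive least squares recursion; the SRLS-IQR variant named above propagates an inverse-QR factorization for numerical robustness, but the underlying update is the same. Regressor meanings and all constants here are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """Standard recursive least squares with forgetting factor lam.

    theta: parameter estimate, P: (scaled) inverse covariance,
    phi: regressor vector, y: new observation.
    """
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by the innovation
    P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta, P

rng = np.random.default_rng(6)
true = np.array([0.8, -0.4, 1.2])            # e.g., spring/damper/clutch-like gains
theta, P = np.zeros(3), 100.0 * np.eye(3)
for _ in range(2000):
    phi = rng.normal(size=3)                 # regressors (relative speed, spacing, ...)
    y = phi @ true + 0.05 * rng.normal()     # noisy measurement
    theta, P = rls_update(theta, P, phi, y)
print("estimated parameters:", theta.round(2))
```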
Label Confusion Learning to Enhance Text Classification Models ; Representing a true label as a one-hot vector is a common practice in training text classification models. However, the one-hot representation may not adequately reflect the relation between instances and labels, as labels are often not completely independent and instances may relate to multiple labels in practice. The inadequate one-hot representations tend to train the model to be over-confident, which may result in arbitrary predictions and model overfitting, especially for confused datasets (datasets with very similar labels) or noisy datasets (datasets with labeling errors). While training models with label smoothing (LS) can ease this problem to some degree, it still fails to capture the realistic relations among labels. In this paper, we propose a novel Label Confusion Model (LCM) as an enhancement component for current popular text classification models. LCM can learn label confusion to capture the semantic overlap among labels by calculating the similarity between instances and labels during training, and generate a better label distribution to replace the original one-hot label vector, thus improving the final classification performance. Extensive experiments on five text classification benchmark datasets reveal the effectiveness of LCM for several widely used deep learning classification models. Further experiments also verify that LCM is especially helpful for confused or noisy datasets and superior to the label smoothing method.
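A rough sketch of the LCM idea, as a simplified reading of the abstract rather than the paper's exact equations: similarities between an instance representation and label embeddings give a label-confusion distribution, which is mixed with the one-hot target to produce a soft training distribution. Dimensions, the mixing scheme, and the weight alpha are all assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def simulated_label_distribution(instance_repr, label_embs, y_onehot, alpha=4.0):
    """LCM-style soft target: instance-label similarities define a confusion
    distribution; mixing it with the (upweighted) one-hot label keeps the true
    class dominant while semantically similar labels receive some mass."""
    confusion = softmax(instance_repr @ label_embs.T)   # label-confusion distribution
    return softmax(alpha * y_onehot + confusion)        # soft target replacing one-hot

rng = np.random.default_rng(7)
label_embs = rng.normal(size=(5, 16))                   # 5 classes, 16-dim label embeddings
h = label_embs[2] + 0.3 * rng.normal(size=16)           # instance close to class 2
y = np.eye(5)[2]
print(simulated_label_distribution(h, label_embs, y).round(2))
```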
On Shapley Credit Allocation for Interpretability ; We emphasize the importance of asking the right question when interpreting the decisions of a learning model. We discuss a natural extension of the theoretical machinery from Janzing et al. (2020), which answers the question "Why did my model predict a person has cancer?", for answering a more involved question, "What caused my model to predict a person has cancer?" While the former quantifies the direct effects of variables on the model, the latter also accounts for indirect effects, thereby providing meaningful insights wherever human beings can reason in terms of cause and effect. We propose three broad categories for interpretations, observational, model-specific and causal, each of which is significant in its own right. Furthermore, this paper quantifies feature relevance by weaving different natures of interpretations together with different measures as characteristic functions for Shapley symmetrization. Besides the widely used expected value of the model, we also discuss measures of statistical uncertainty and dispersion as informative candidates, and their merits in generating explanations for each data point, some of which are used in this context for the first time. These measures are not only useful for studying the influence of variables on the model output, but also on the predictive performance of the model, and for that we propose relevant characteristic functions that are also used for the first time.
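The Shapley symmetrization referred to above applies to any characteristic function $v$ over feature subsets $S \subseteq N$; the proposal amounts to swapping different measures (expected model value, statistical uncertainty, dispersion) in for $v$ in the standard attribution:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\Bigl(v\bigl(S \cup \{i\}\bigr) - v(S)\Bigr).
```

The choice of $v$ determines whether $\phi_i$ reads as an observational, model-specific, or causal interpretation of feature $i$'s contribution.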
Beyond Occam's Razor in System Identification: Double-Descent when Modeling Dynamics ; System identification aims to build models of dynamical systems from data. Traditionally, choosing the model requires the designer to balance between two goals of conflicting nature: the model must be rich enough to capture the system dynamics, but not so flexible that it learns spurious random effects from the dataset. It is typically observed that model validation performance follows a U-shaped curve as the model complexity increases. Recent developments in machine learning and statistics, however, have observed situations where a double-descent curve subsumes this U-shaped model-performance curve, with a second decrease in performance occurring beyond the point where the model has reached the capacity of interpolating, i.e., near perfectly fitting, the training data. To the best of our knowledge, such phenomena have not been studied within the context of dynamic systems. The present paper aims to answer the question: can such a phenomenon also be observed when estimating parameters of dynamic systems? We show that the answer is yes, verifying such behavior experimentally both for artificially generated and real-world datasets.
Identification of Latent Variables From Graphical Model Residuals ; Graph-based causal discovery methods aim to capture conditional independencies consistent with the observed data and differentiate causal relationships from indirect or induced ones. Successful construction of graphical models of data depends on the assumption of causal sufficiency: that is, that all confounding variables are measured. When this assumption is not met, learned graphical structures may become arbitrarily incorrect and effects implied by such models may be wrongly attributed, carry the wrong magnitude, or misrepresent the direction of correlation. Wide application of graphical models to increasingly less curated big data draws renewed attention to the unobserved confounder problem. We present a novel method that aims to control for the latent space when estimating a DAG by iteratively deriving proxies for the latent space from the residuals of the inferred model. Under mild assumptions, our method improves structural inference of Gaussian graphical models and enhances identifiability of the causal effect. In addition, when the model is being used to predict outcomes, it unconfounds the coefficients on the parents of the outcomes and leads to improved predictive performance when the out-of-sample regime is very different from the training data. We show that any improvement of prediction of an outcome is intrinsically capped and cannot rise beyond a certain limit as compared to the confounded model. We extend our methodology beyond GGMs to ordinal variables and nonlinear cases. Our R package provides both PCA and autoencoder implementations of the methodology, suitable for GGMs (with some guarantees) and for better performance in general cases (but without such guarantees).
Modeling and Detecting Communities in Node Attributed Networks ; As a fundamental structure in real-world networks, communities can be reflected not only by graph topology but also by abundant node attributes. In attributed community detection, probabilistic generative models (PGMs) have become the mainstream method due to their principled characterization and competitive performance. Here, we propose a novel PGM without imposing any distributional assumptions on attributes, which is superior to existing PGMs that require attributes to be categorical or Gaussian distributed. Based on the block model of graph structure, our model incorporates the attribute by describing its effect on node popularity. To characterize the effect quantitatively, we analyze the community detectability for our model and then establish the requirements of the node popularity term. This leads to a new scheme for the crucial model selection problem in choosing and solving attributed community detection models. With the model determined, an efficient algorithm is developed to estimate the parameters and to infer the communities. The proposed method is validated from two aspects. First, the effectiveness of our algorithm is theoretically guaranteed by the detectability condition. Second, extensive experiments indicate that our method not only outperforms the competing approaches on the employed datasets, but also shows better applicability to networks with various node attributes.
A consistent and conservative model and its scheme for N-phase-M-component incompressible flows ; In the present work, we propose a consistent and conservative model for multiphase and multicomponent incompressible flows, where there can be arbitrary numbers of phases and components. Each phase has a background fluid called the pure phase, each pair of phases is immiscible, and components are dissolvable in some specific phases. The model is developed based on the multiphase Phase-Field model including the contact angle boundary condition, the diffuse domain approach, and the analyses of the proposed consistency conditions for multiphase and multicomponent flows. The model conserves the mass of individual pure phases, the amount of each component in its dissolvable region, and thus the mass of the fluid mixture, as well as the momentum of the flow. It ensures that no fictitious phases or components can be generated and that the summation of the volume fractions from the Phase-Field model is unity everywhere, so that there is no local void or overfilling. It satisfies a physical energy law and is Galilean invariant. A corresponding numerical scheme is developed for the proposed model, whose formal accuracy is 2nd-order in both time and space. It is shown to be consistent and conservative, and its solution is demonstrated to preserve the Galilean invariance and energy law. Numerical tests indicate that the proposed model and scheme are effective and robust for studying various challenging multiphase and multicomponent flows.
Sequential Bayesian Risk Set Inference for Robust Discrete Optimization via Simulation ; Optimization via simulation (OvS) procedures that assume the simulation inputs are generated from real-world distributions are subject to the risk of selecting a suboptimal solution when the distributions are substituted with input models estimated from finite real-world data, known as input model risk. Focusing on discrete OvS, this paper proposes a new Bayesian framework for analyzing the input model risk of implementing an arbitrary solution, x, where uncertainty about the input models is captured by a posterior distribution. We define the $\alpha$-level risk set of solution x as the set of solutions whose expected performance is better than x by a practically meaningful margin $\delta$ given common input models, with significant probability $\alpha$ under the posterior distribution. The user-specified parameters $\delta$ and $\alpha$ control robustness of the procedure to the desired level as well as guard against unnecessary conservatism. An empty risk set implies that there is no practically better solution than x with significant probability, even though the real-world input distributions are unknown. For efficient estimation of the risk set, the conditional mean performance of a solution given a set of input distributions is modeled as a Gaussian process (GP) that takes the solution-distributions pair as an input. In particular, our GP model allows both parametric and nonparametric input models. We propose a sequential risk set inference procedure that estimates the risk set and selects the next solution-distributions pair to simulate using the posterior GP at each iteration. We show that simulating the pair expected to change the risk set estimate the most in the next iteration is the asymptotic one-step optimal sampling rule that minimizes the number of incorrectly classified solutions, if the procedure runs without stopping.
FWB-Net: Front White Balance Network for Color Shift Correction in Single Image Dehazing via Atmospheric Light Estimation ; In recent years, single image dehazing deep models based on the Atmospheric Scattering Model (ASM) have achieved remarkable results. However, the dehazing outputs of those models suffer from color shift. Analyzing the ASM shows that the atmospheric light factor (ALF) is set as a scalar, which indicates that the ALF is constant over the whole image. However, for images taken in the real world, the illumination is not uniformly distributed over the whole image, which brings model mismatch and possibly results in color shift in deep models using the ASM. Bearing this in mind, in this study, first, a new non-homogeneous atmospheric scattering model (NH-ASM) is proposed for improving image modeling of hazy images taken under complex illumination conditions. Second, a new U-Net based front white balance module (FWB-Module) is specifically designed to correct color shift before generating the dehazing result via atmospheric light estimation. Third, a new FWB loss is developed for training the FWB-Module, which imposes a penalty on color shift. Finally, based on the NH-ASM and the front white balance technology, an end-to-end CNN-based color-shift-restraining dehazing network is developed, termed FWB-Net. Experimental results demonstrate the effectiveness and superiority of our proposed FWB-Net for dehazing on both synthetic and real-world images.
Towards Expectation-Maximization by SQL in RDBMS ; Integrating machine learning techniques into RDBMSs is an important task, since there are many real applications that require modeling (e.g., business intelligence, strategic analysis) as well as querying data in RDBMSs. In this paper, we provide an SQL solution that has the potential to support different machine learning modelings. As an example, we study how to support unsupervised probabilistic modeling, which has a wide range of applications in clustering, density estimation and data summarization, and focus on Expectation-Maximization (EM) algorithms, a general technique for finding maximum likelihood estimators. To train a model by EM, the model parameters are updated by an E-step and an M-step in a while-loop iteratively, until convergence to a level controlled by some threshold or until a certain number of iterations is reached. To support EM in RDBMSs, we show our answers to the matrix/vector representations in RDBMSs, the relational algebra operations to support the linear algebra operations required by EM, parameter updates by relational algebra, and the support of a while-loop. It is important to note that SQL'99 recursion cannot be used to handle such a while-loop, since the M-step is non-monotonic. In addition, assuming that a model has been trained by an EM algorithm, we further design an automatic in-database model maintenance mechanism to maintain the model when the underlying training data changes. We have conducted experimental studies and report our findings in this paper.
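As a concrete instance of the E-step/M-step loop discussed above, here is a generic two-component 1D Gaussian mixture EM in numpy; each step has a direct relational analogue (the E-step is a join followed by per-row normalization, the M-step a grouped aggregation), which is the correspondence the paper expresses in SQL. This is the textbook algorithm, not the paper's SQL code.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.r_[rng.normal(-2, 1, 500), rng.normal(3, 1, 500)]  # the data table, one column

pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):  # the while-loop; a fixed iteration count stands in for a threshold test
    # E-step: responsibilities r[n, k]. Relationally: join the data relation with
    # the parameter relation, then normalize per row.
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted GROUP BY aggregates per component. Non-monotonic updates,
    # hence not expressible with SQL'99 recursion.
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print("means:", mu.round(2), "weights:", pi.round(2))
```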
Streaming Models for Joint Speech Recognition and Translation ; Using end-to-end models for speech translation (ST) has increasingly been the focus of the ST community. These models condense the previously cascaded systems by directly converting sound waves into translated text. However, cascaded models have the advantage of including automatic speech recognition output, useful for a variety of practical ST systems that often display transcripts to the user alongside the translations. To bridge this gap, recent work has shown initial progress on the feasibility of end-to-end models producing both of these outputs. However, all previous work has only looked at this problem from the consecutive perspective, leaving uncertainty on whether these approaches are effective in the more challenging streaming setting. We develop an end-to-end streaming ST model based on a re-translation approach and compare against standard cascading approaches. We also introduce a novel inference method for the joint case, interleaving both transcript and translation in generation and removing the need to use separate decoders. Our evaluation across a range of metrics capturing accuracy, latency, and consistency shows that our end-to-end models are statistically similar to cascading models, while having half the number of parameters. We also find that both systems provide strong translation quality at low latency, keeping 99% of consecutive quality at a lag of just under a second.
Large Eddy Simulation of a Premixed Bunsen flame using a modified Thickened-Flame model at two Reynolds numbers ; A modified Thickened Flame (TF) model based on the Large Eddy Simulation methodology is used to investigate premixed combustion, and the model predictions are evaluated by comparing with the piloted premixed stoichiometric methane-air flame data (Chen et al., 1996) for Reynolds numbers Re = 24,000 (flame F3) and Re = 52,000 (flame F1). The basic idea of the Thickened-Flame approach is that the flame front is artificially thickened to be resolvable on the computational LES grid while keeping the laminar flame speed constant. The artificial thickening of the flame front is obtained by enhancing the molecular diffusion and decreasing the pre-exponential factor of the Arrhenius law. Since the flame front is artificially thickened, the response of the thickened flame to turbulence is affected; this is taken care of by incorporating an efficiency function E in the governing equations. The efficiency function E in the modified TF model is proposed based on the direct numerical simulation (DNS) data set of flame-vortex interactions (Colin et al., 2000). The predicted simulation results are compared with the experimental data and with computations reported using a RANS-based probability distribution function (PDF) modeling approach (Lindstedt, R. P. and Vaos, E. M., 2006, Transported PDF modeling of high-Reynolds-number premixed turbulent flames, Combustion and Flame, 145, 495) and a RANS-based G-equation approach (Herrmann, M., 2006). It is shown that the results with the modified TF model are generally in good agreement with the data, with the TF predictions consistently comparable to the PDF model predictions and superior to the results with the G-equation approach.
Supervised quantum machine learning models are kernel methods ; With near-term quantum devices available and the race for fault-tolerant quantum computers in full swing, researchers became interested in the question of what happens if we replace a supervised machine learning model with a quantum circuit. While such quantum models are sometimes called quantum neural networks, it has been repeatedly noted that their mathematical structure is actually much more closely related to kernel methods: they analyse data in high-dimensional Hilbert spaces to which we only have access through inner products revealed by measurements. This technical manuscript summarises and extends the idea of systematically rephrasing supervised quantum models as a kernel method. With this, a lot of near-term and fault-tolerant quantum models can be replaced by a general support vector machine whose kernel computes distances between data-encoding quantum states. Kernel-based training is then guaranteed to find better or equally good quantum models than variational circuit training. Overall, the kernel perspective of quantum machine learning tells us that the way that data is encoded into quantum states is the main ingredient that can potentially set quantum models apart from classical machine learning models.
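A sketch of the kernel viewpoint under an assumed, deliberately simple data encoding: with a product angle-encoding feature map the state overlap has a closed form, so the quantum kernel $k(x,x') = |\langle\phi(x)|\phi(x')\rangle|^2$ can be computed classically and handed to an ordinary SVM, replacing variational circuit training. Richer, classically hard encodings are exactly what could set quantum models apart; this toy encoding is chosen only to make the pipeline runnable.

```python
import numpy as np
from sklearn.svm import SVC

def fidelity_kernel(X1, X2):
    """k(x, x') = |<phi(x)|phi(x')>|^2 for the product angle encoding that maps
    each feature x_j to the single-qubit state [cos(x_j/2), sin(x_j/2)].
    The per-qubit overlap is cos((x_j - x'_j)/2), so the kernel factorizes."""
    diff = X1[:, None, :] - X2[None, :, :]
    return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)

rng = np.random.default_rng(9)
X = rng.uniform(0, np.pi, (200, 4))
y = (np.sin(X).sum(axis=1) > 2.4).astype(int)     # an arbitrary illustrative labeling

K = fidelity_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)          # a general SVM replaces the quantum model
print("train accuracy:", round(clf.score(K, y), 2))
```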
Skin marker-based subject-specific spinal alignment modeling: A feasibility study ; Musculoskeletal models have the potential to improve diagnosis and optimize clinical treatment by predicting accurate outcomes on an individual basis. However, the subject-specific modeling of spinal alignment is often strongly simplified or is based on radiographic assessments, exposing subjects to unnecessary radiation. We therefore developed a novel skin marker-based approach for modeling subject-specific spinal alignment and evaluated its feasibility by comparing the predicted with the actual intervertebral joint (IVJ) locations/orientations (ground truth) using lateral-view radiographic images. Moreover, the predictive performance of the subject-specific models was evaluated by comparing the predicted L1/L2 spinal loads during various functional activities with in vivo measured data obtained from the OrthoLoad database. IVJ locations/orientations were predicted closer to ground truth than with standard model scaling, with average location prediction errors of 0.99 ± 0.68 cm on the frontal and 1.21 ± 0.97 cm on the transverse axis, as well as an average orientation prediction error of 4.74° ± 2.80°. Simulated spinal loads showed similar curve patterns but considerably larger values compared to in vivo measured data. Differences in spinal loads between generic and subject-specific models become apparent only on an individual subject level. These results underline the feasibility of the proposed method and associated workflow for inter- and intra-subject investigations using musculoskeletal simulations. When implemented into standard model scaling workflows, it is expected to improve the accuracy of muscle activity and joint loading simulations, which is crucial for investigations of treatment effects or pathology-dependent deviations.
Delay differential equations for the spatially-resolved simulation of epidemics with specific application to COVID-19 ; In the wake of the 2020 COVID-19 epidemic, much work has been performed on the development of mathematical models for the simulation of the epidemic, and of disease models generally. Most works follow the susceptible-infected-removed (SIR) compartmental framework, modeling the epidemic with a system of ordinary differential equations. Alternative formulations using a partial differential equation (PDE) to incorporate both spatial and temporal resolution have also been introduced, with their numerical results showing potentially powerful descriptive and predictive capacity. In the present work, we introduce a new variation to such models by using delay differential equations (DDEs). The dynamics of many infectious diseases, including COVID-19, exhibit delays due to incubation periods and related phenomena. Accordingly, DDE models allow for a natural representation of the problem dynamics, in addition to offering advantages in terms of computational time and modeling, as they eliminate the need for additional, difficult-to-estimate, compartments (such as exposed individuals) to incorporate time delays. Here, we introduce a DDE epidemic model in both an ordinary and a partial differential equation framework. We present a series of mathematical results assessing the stability of the formulation. We then perform several numerical experiments, validating both the mathematical results and establishing the model's ability to reproduce measured data on realistic problems.
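A minimal sketch of the DDE idea in the non-spatial (ODE) setting, with illustrative parameters: a fixed delay $\tau$ stands in for the infectious period, so the removal term is the incidence delayed by $\tau$ and no explicit exposed compartment or recovery-rate parameter is needed. This is a generic delayed SIR, not the paper's model.

```python
import numpy as np

def sir_dde(beta=0.4, tau=10.0, days=160, dt=0.01, I0=1e-4):
    """SIR with a fixed infectious period tau: everyone infected at time t - tau
    recovers at time t, so removal equals the delayed incidence
    beta * S(t - tau) * I(t - tau). Forward-Euler with a history buffer."""
    n = int(days / dt)
    d = int(tau / dt)                        # delay expressed in time steps
    S, I = np.empty(n), np.empty(n)
    S[0], I[0] = 1.0 - I0, I0
    for t in range(n - 1):
        new_inf = beta * S[t] * I[t]
        # before t = tau, no one has completed the infectious period yet
        recov = beta * S[t - d] * I[t - d] if t >= d else 0.0
        S[t + 1] = S[t] - dt * new_inf
        I[t + 1] = I[t] + dt * (new_inf - recov)
    return S, I

S, I = sir_dde()
print(f"peak prevalence: {I.max():.3f}, final susceptible fraction: {S[-1]:.3f}")
```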
Travelling waves, blow-up and extinction in the Fisher-Stefan model ; While there is a long history of employing moving boundary problems in physics, in particular via Stefan problems for heat conduction accompanied by a change of phase, more recently such approaches have been adapted to study biological invasion. For example, when a logistic growth term is added to the governing partial differential equation in a Stefan problem, one arrives at the Fisher-Stefan model, a generalisation of the well-known Fisher-KPP model, characterised by a leakage coefficient kappa which relates the speed of the moving boundary to the flux of population there. This Fisher-Stefan model overcomes one of the well-known limitations of the Fisher-KPP model, since time-dependent solutions of the Fisher-Stefan model involve a well-defined front with compact support, which is more natural in terms of mathematical modelling. Almost all of the existing analysis of the standard Fisher-Stefan model involves setting kappa > 0, which can lead to either invading travelling wave solutions or complete extinction of the population. Here, we demonstrate how setting kappa < 0 leads to retreating travelling waves and an interesting transition to finite-time blow-up. For certain initial conditions, population extinction is also observed. Our approach involves studying time-dependent solutions of the governing equations, phase plane and asymptotic analysis, leading to new insight into the possibilities of travelling waves, blow-up and extinction for this moving boundary problem. The Matlab software used to generate the results in this work is available on GitHub.
An electromechanically coupled beam model for dielectric elastomer actuators ; In this work, the Cosserat formulation of geometrically exact beam dynamics is extended by adding the electric potential as an additional degree of freedom to account for the electromechanical coupling in dielectric elastomer actuators (DEAs). To be able to generate complex beam deformations via the dielectric actuator, a linear distribution of electric potential on the beam cross section is proposed. Based on this electric potential, the electric field and the strain-like electrical variable are defined for the beam, where the strain-like electrical variable is work-conjugate to the electric displacement. The electromechanically coupled strain energy for the beam is derived consistently from continuum electromechanics, which allows the direct application of continuum material models to the beam model. The electromechanically coupled problem in beam dynamics is first spatially semi-discretized by 1D finite elements and then solved via variational time integration. By applying different electrical boundary conditions, different deformations of the beam are obtained in the numerical examples, including contraction, shear, bending and torsion. The damping effect induced by the viscosity as well as the total energy of the beam are evaluated. The deformations of the electromechanically coupled beam model are compared with the results of the 3D finite element model, where a good agreement between the two is observed. However, far fewer degrees of freedom are required to resolve the complex deformations in the beam model.
Integrability vs. RG flow in G×G and G×G/H sigma models ; We consider a class of 2d sigma models on products of group spaces that provide new examples of a close connection between integrability and stability under the RG flow. We first study the integrable G×G model derived from the affine Gaudin construction, for which the 1-loop beta-functions were found in arXiv:2010.07879, and show that its condition of integrability is preserved also by the 2-loop RG flow. We then investigate the RG flow in the gauged G×G/H model, in particular the integrable T^{1,1} model found in arXiv:2010.05573. We also construct a new class of integrable G×G/H models in the case when the subgroup H is abelian. In the simplest case of G = SU(2), H = U(1) this leads to an integrable sigma model on the T^{1,q} space with a particular B-field. This model is also shown to be stable under the 2-loop RG flow, and we relate this property to its invariance under T-duality in an isometric U(1) direction. This T^{1,q} model may be interpreted as an integrable deformation of the GMM model of two coupled WZW theories with generic levels away from the conformal point.
Designing a Practical Degradation Model for Deep Blind Image Super-Resolution ; It is widely acknowledged that single image super-resolution (SISR) methods would not perform well if the assumed degradation model deviates from those in real images. Although several degradation models take additional factors into consideration, such as blur, they are still not effective enough to cover the diverse degradations of real images. To address this issue, this paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations. Specifically, the blur is approximated by two convolutions with isotropic and anisotropic Gaussian kernels; the downsampling is randomly chosen from nearest, bilinear and bicubic interpolations; the noise is synthesized by adding Gaussian noise with different noise levels, adopting JPEG compression with different quality factors, and generating processed camera sensor noise via a reverse-forward camera image signal processing (ISP) pipeline model and a RAW image noise model. To verify the effectiveness of the new degradation model, we have trained a deep blind ESRGAN super-resolver and then applied it to super-resolve both synthetic and real images with diverse degradations. The experimental results demonstrate that the new degradation model can help to significantly improve the practicability of deep super-resolvers, thus providing a powerful alternative solution for real SISR applications.
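A minimal sketch of such a randomly shuffled degradation pipeline follows; the kernel sizes, noise ranges and the omission of the anisotropic-blur, JPEG and camera ISP stages are simplifications for illustration, not the paper's exact settings.

```python
# Randomly shuffled blur / downsampling / noise degradation sketch.
import random
import numpy as np
import cv2

def degrade(img, scale=4):
    """img: float32 HxWx3 in [0, 1]. Returns a degraded low-resolution image."""
    ops = ["blur", "downsample", "noise"]
    random.shuffle(ops)                          # random order of degradations
    for op in ops:
        if op == "blur":                         # isotropic Gaussian blur only
            sigma = random.uniform(0.2, 3.0)
            img = cv2.GaussianBlur(img, (21, 21), sigma)
        elif op == "downsample":                 # random interpolation kernel
            interp = random.choice([cv2.INTER_NEAREST, cv2.INTER_LINEAR,
                                    cv2.INTER_CUBIC])
            h, w = img.shape[:2]
            img = cv2.resize(img, (w // scale, h // scale), interpolation=interp)
        else:                                    # Gaussian noise; JPEG would follow similarly
            img = img + np.random.normal(0, random.uniform(0.01, 0.1), img.shape)
    return np.clip(img, 0, 1).astype(np.float32)
```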
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models ; Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly, leading to an incorrect answer, and hence improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness in understanding model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30%, and that our proxy helpfulness metrics correlate strongly (rho = 0.97) with how well users can predict model correctness.
An active inference model of collective intelligence ; To date, formal models of collective intelligence have lacked a plausible mathematical description of the relationship between local-scale interactions between highly autonomous sub-system components (individuals) and global-scale behavior of the composite system (the collective). In this paper we use the Active Inference Formulation (AIF), a framework for explaining the behavior of any non-equilibrium steady state system at any scale, to posit a minimal agent-based model that simulates the relationship between local individual-level interaction and collective intelligence (operationalized as system-level performance). We explore the effects of providing baseline AIF agents (Model 1) with specific cognitive capabilities: Theory of Mind (Model 2); Goal Alignment (Model 3); and Theory of Mind with Goal Alignment (Model 4). These stepwise transitions in sophistication of cognitive ability are motivated by the types of advancements plausibly required for an AIF agent to persist and flourish in an environment populated by other AIF agents, and have also recently been shown to map naturally to canonical steps in human cognitive ability. Illustrative results show that stepwise cognitive transitions increase system performance by providing complementary mechanisms for alignment between agents' local and global optima. Alignment emerges endogenously from the dynamics of interacting AIF agents themselves, rather than being imposed exogenously by incentives to agents' behaviors (contra existing computational models of collective intelligence) or top-down priors for collective behavior (contra existing multiscale simulations of AIF). These results shed light on the types of generic information-theoretic patterns conducive to collective intelligence in human and other complex adaptive systems.
An Adversarial Imitation Click Model for Information Retrieval ; Modern information retrieval systems, including web search, ads placement, and recommender systems, typically rely on learning from user feedback. Click models, which study how users interact with a ranked list of items, provide a useful understanding of user feedback for learning ranking models. Constructing the right dependencies is the key to any successful click model. However, probabilistic graphical models (PGMs) have to rely on manually assigned dependencies, and oversimplify user behaviors. Existing neural network based methods improve on PGMs by enhancing their expressive ability and allowing flexible dependencies, but still suffer from exposure bias and inferior estimation. In this paper, we propose a novel framework, the Adversarial Imitation Click Model (AICM), based on imitation learning. Firstly, we explicitly learn the reward function that recovers users' intrinsic utility and underlying intentions. Secondly, we model user interactions with a ranked list as a dynamic system instead of one-step click prediction, alleviating the exposure bias problem. Finally, we minimize the JS divergence through adversarial training and learn a stable distribution of click sequences, which makes AICM generalize well across different distributions of ranked lists. A theoretical analysis indicates that AICM reduces the exposure bias from O(T^2) to O(T). Our studies on a public web search dataset show that AICM not only outperforms state-of-the-art models in traditional click metrics but also achieves superior performance in addressing the exposure bias and recovering the underlying patterns of click sequences.
The Top-Flavor scheme in the context of W' searches at the LHC ; Many extensions of the Standard Model predict the existence of new charged or neutral gauge bosons, with a wide variety of phenomenological implications depending on the model adopted. The search for such particles is extensively carried out at the Large Hadron Collider (LHC), and it is therefore of crucial importance to have, for each proposed scenario, quantitative predictions that can be matched to experiments. In this work we focus on the implications of one of these models, the Top-Flavor Model, proposing a charged W' boson that has preferential couplings to the third generation fermions. We compare such predictions to the ones from the so-called Sequential Standard Model (SSM), which is used as a benchmark, being one of the simplest and most commonly considered models for searches at the LHC. We identify the parameter space still open for searches at the LHC, and in particular we show that the cross section for the processes pp → W' → τν and pp → W' → tb can be up to two orders of magnitude smaller with respect to the SSM, depending on the free parameters of the model, like the particle mass and its width. This study makes the case for further searches at the LHC, and shows how a complete and systematic model-independent analysis of W' boson phenomenology at colliders is essential to provide guidance for future searches.
Spin-2 KK Mode Scattering in Models with a Massive Radion ; We calculate tree-level scattering amplitudes of massive spin-2 KK particles in models of stabilized compact extra-dimensional theories. Naively introducing a mass for the radion in an extra-dimensional model, without accounting for the dynamics responsible for stabilizing the extra dimension, upsets the cancellations relating the masses and couplings of the spin-2 modes, resulting in KK scattering amplitudes which grow like E^4 instead of E^2. We therefore investigate scattering of the Kaluza-Klein states in theories incorporating the Goldberger-Wise mechanism to stabilize the size of the extra dimension. We demonstrate that the cancellations occur only when one includes not only the massive radion, but also the massive spin-0 modes arising from the Goldberger-Wise scalar. We compute the revised sum rules which are satisfied in a stabilized model to ensure a consistent high-energy scattering amplitude. We introduce a simple model of a stabilized extra dimension which is a small deformation of a flat toroidal five-dimensional model, and demonstrate the cancellations in computations performed to leading nontrivial order in the deformation. These results are the first complete KK scattering computation in an extra-dimensional model with a stabilized extra dimension, with implications for the theory and phenomenology of these models.
Distill on the Go: Online knowledge distillation in self-supervised learning ; Self-supervised learning solves pretext prediction tasks that do not require annotations to learn feature representations. For vision tasks, pretext tasks such as predicting rotation or solving a jigsaw are created solely from the input data. Yet, predicting this known information helps in learning representations useful for downstream tasks. However, recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models. To address the issue of self-supervised pre-training of smaller models, we propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation to improve the representation quality of smaller models. We employ a deep mutual learning strategy in which two models collaboratively learn from each other to improve one another. Specifically, each model is trained using self-supervised learning along with distillation that aligns each model's softmax probabilities of similarity scores with that of the peer model. We conduct extensive experiments on multiple benchmark datasets, learning objectives, and architectures to demonstrate the potential of our proposed method. Our results show significant performance gains in the presence of noisy and limited labels, and generalization to out-of-distribution data.
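The mutual distillation term can be sketched as follows: each model's batch-wise similarity scores are softened into distributions, and the two models are pulled toward each other with a symmetric KL term. The temperature, cosine similarity, and loss weighting are assumptions for illustration, not the paper's exact choices.

```python
# Symmetric KL between two models' softened similarity distributions.
import torch
import torch.nn.functional as F

def similarity_logits(z):
    """Pairwise cosine similarities of a batch of embeddings (B x D)."""
    z = F.normalize(z, dim=1)
    return z @ z.T

def mutual_distillation_loss(z1, z2, tau=0.5):
    """z1, z2: embeddings of the same batch from the two peer models."""
    p1 = F.log_softmax(similarity_logits(z1) / tau, dim=1)
    p2 = F.log_softmax(similarity_logits(z2) / tau, dim=1)
    kl_12 = F.kl_div(p1, p2.exp(), reduction="batchmean")
    kl_21 = F.kl_div(p2, p1.exp(), reduction="batchmean")
    return kl_12 + kl_21   # added to each model's self-supervised loss
```

In practice each model would typically treat the peer's distribution as a fixed target (detached) when computing its own gradient.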
Development of digitally obtainable 10-year risk scores for depression and anxiety in the general population ; The burden of depression and anxiety in the world is rising. Identification of individuals at increased risk of developing these conditions would help to target them for prevention and ultimately reduce the healthcare burden. We developed a 10-year predictive algorithm for depression and anxiety using the full cohort of over 400,000 UK Biobank (UKB) participants without pre-existing depression or anxiety, using digitally obtainable information. From the initial 204 variables selected from UKB, processed into 520 features, iterative backward elimination using a Cox proportional hazards model was performed to select the predictors which account for the majority of its predictive capability. Baseline and reduced models were then trained for depression and anxiety using both Cox and DeepSurv, a deep neural network approach to survival analysis. The baseline Cox model achieved concordance of 0.813 and 0.778 on the validation dataset for depression and anxiety, respectively. For the DeepSurv model, the respective concordance indices were 0.805 and 0.774. After feature selection, the depression model contained 43 predictors and the concordance index was 0.801 for both Cox and DeepSurv. The reduced anxiety model, with 27 predictors, achieved concordance of 0.770 in both models. The final models showed good discrimination and calibration in the test datasets. We developed predictive risk scores with high discrimination for depression and anxiety using the UKB cohort, incorporating predictors which are easily obtainable via smartphone. If deployed in a digital solution, it would allow individuals to track their risk, as well as provide some pointers on how to decrease it through lifestyle changes.
Climate Modelling in Low Precision: Effects of Both Deterministic and Stochastic Rounding ; Motivated by recent advances in operational weather forecasting, we study the efficacy of low-precision arithmetic for climate simulations. We develop a framework to measure rounding error in a climate model, which provides a stress test for a low-precision version of the model, and we apply our method to a variety of models including the Lorenz system; a shallow water approximation for flow over a ridge; and a coarse-resolution global atmospheric model with simplified parameterisations (SPEEDY). Although double precision (52 significant bits) is standard across operational climate models, in our experiments we find that single precision (23 sbits) is more than enough and that as low as half precision (10 sbits) is often sufficient. For example, SPEEDY can be run with 12 sbits across the entire code with negligible rounding error, and this can be lowered to 10 sbits if very minor errors are accepted, amounting to less than 0.1 mm/6hr for the average gridpoint precipitation, for example. Our test is based on the Wasserstein metric and this provides stringent nonparametric bounds on rounding error, accounting for annual means as well as extreme weather events. In addition, by testing models using both round-to-nearest (RN) and stochastic rounding (SR) we find that SR can mitigate rounding error across a range of applications. Thus our results also provide evidence that SR could be relevant to next-generation climate models. While many studies have shown that low-precision arithmetic can be suitable on short-term weather forecasting timescales, our results give the first evidence that a similar low precision level can be suitable for climate.
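For reference, here is a minimal sketch of stochastic rounding to a reduced number of significand bits: a value is rounded up or down with probability proportional to its distance to each neighbour, which makes the rounding unbiased in expectation (a common SR definition; the paper's implementation details may differ).

```python
# Stochastic rounding of float64 values to a reduced significand.
import numpy as np

def stochastic_round(x, sbits, rng=np.random.default_rng()):
    """Round to `sbits` significand bits, up or down at random."""
    m, e = np.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** sbits
    m_scaled = m * scale
    floor = np.floor(m_scaled)
    frac = m_scaled - floor            # distance to the lower neighbour
    up = rng.random(np.shape(x)) < frac
    return np.ldexp((floor + up) / scale, e)

x = np.full(100_000, 0.1)
print(stochastic_round(x, 10).mean())  # close to 0.1 in expectation, unlike RN
```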
Rapid Aerodynamic Shape Optimization Under Parametric and Turbulence Model Uncertainty: A Stochastic Gradient Approach ; Aerodynamic optimization is ubiquitous in the design of most engineering systems interacting with fluids. A common approach is to optimize a performance function defined by a choice of an aerodynamic model, e.g., a RANS turbulence model, and at nominal operating conditions. Practical experience indicates that such a deterministic approach may result in considerably suboptimal designs when the adopted aerodynamic model does not lead to accurate flow predictions or when the actual operating conditions differ from those considered in the design. One approach to address this shortcoming is to consider an average or robust design, wherein the statistical moments of the performance function, given the uncertainty in the operating conditions and the aerodynamic model, are optimized. However, when the number of uncertain inputs is large or the performance function exhibits significant variability, an accurate evaluation of these moments may require a large number of forward and/or adjoint solves at each iteration of a gradient-based scheme. This, in turn, renders the design computationally expensive, if not infeasible. To tackle this difficulty, we consider a variant of the stochastic gradient descent method where, in each optimization iteration, a stochastic approximation of the objective, constraints, and their gradients is generated. This is done via a small number of forward/adjoint solves corresponding to random selections of the uncertain parameters and aerodynamic model. The methodology is applied to the robust optimization of the standard NACA 0012 airfoil subject to parametric and turbulence model uncertainty. With a cost that is a small factor larger than that of the deterministic approach, the stochastic gradient approach significantly improves the performance (mean and variance) of the aerodynamic design for a wide range of operating conditions and turbulence models.
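The core loop can be sketched as below: at each design iteration only a few random draws of operating conditions and turbulence model are evaluated, and their averaged adjoint gradient drives a stochastic gradient step. The analytic `cfd_forward`/`cfd_adjoint` functions are toy stand-ins (assumptions) for expensive CFD forward and adjoint solves.

```python
# Stochastic gradient descent over random conditions and model choices.
import numpy as np

def cfd_forward(design, mach, model):
    """Toy stand-in for a forward CFD evaluation of the performance J."""
    bias = 0.02 if model == "k-omega" else 0.0
    return np.sum((design - mach) ** 2) + bias

def cfd_adjoint(design, mach, model):
    """Toy stand-in for an adjoint solve: gradient of J w.r.t. the design."""
    return 2.0 * (design - mach)

def robust_sgd(design, iters=200, lr=0.05, n_samples=2, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        grad = np.zeros_like(design)
        for _ in range(n_samples):                 # tiny sample per iteration
            mach = rng.normal(0.7, 0.05)           # uncertain operating condition
            model = rng.choice(["SA", "k-omega"])  # uncertain turbulence model
            grad += cfd_adjoint(design, mach, model) / n_samples
        design = design - lr * grad                # stochastic gradient step
    return design

print(robust_sgd(np.zeros(3)))  # converges near the mean condition, 0.7
```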
Occam Factor for Gaussian Models With Unknown Variance Structure ; We discuss model selection to determine whether the variance-covariance matrix of a multivariate Gaussian model with known mean should be considered to be a constant diagonal, a non-constant diagonal, or an arbitrary positive definite matrix. Of particular interest is the relationship between Bayesian evidence and the flexibility penalty due to Priebe and Rougier. For the case of an exponential family in canonical form equipped with a conjugate prior for the canonical parameter, flexibility may be exactly decomposed into the usual BIC likelihood penalty and an O_p(1) term, the latter of which we explicitly compute. We also investigate the asymptotics of Bayes factors for linearly nested canonical exponential families equipped with conjugate priors; in particular, we find the exact rates at which Bayes factors correctly diverge in favor of the correct model: linearly and logarithmically in the number of observations, when the full and nested models are true, respectively. Such theoretical considerations for the general case permit us to fully express the asymptotic behavior of flexibility and Bayes factors for the variance-covariance structure selection problem when we assume that the prior for the model precision is a member of the gamma-Wishart family of distributions or is uninformative. Simulations demonstrate evidence's immediate and superior performance in model selection compared to approximate criteria such as the BIC. We extend the framework to the multivariate Gaussian linear model with three data-driven examples.
Real-time Deep Dynamic Characters ; We propose a deep video-realistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance, learned in a new weakly supervised way from multi-view imagery. In contrast to previous work, our controllable 3D character displays dynamics, e.g., the swing of the skirt, dependent on skeletal body motion in an efficient data-driven way, without requiring complex physics simulation. Our character model also features a learned dynamic texture model that accounts for photo-realistic motion-dependent appearance details, as well as view-dependent lighting effects. During training, we do not need to resort to difficult dynamic 3D capture of the human; instead, we can train our model entirely from multi-view video in a weakly supervised manner. To this end, we propose a parametric and differentiable character representation which allows us to model coarse and fine dynamic deformations, e.g., garment wrinkles, as explicit space-time coherent mesh geometry that is augmented with high-quality dynamic textures dependent on motion and viewpoint. As input to the model, only an arbitrary 3D skeleton motion is required, making it directly compatible with the established 3D animation pipeline. We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing, including dynamics, and a neural generative dynamic texture model creates corresponding dynamic texture maps. We show that by merely providing new skeletal motions, our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures, at a much higher level of detail than previous state-of-the-art approaches, and even in real time.
Adaptive sequential Monte Carlo for posterior inference and model selection among complex geological priors ; Bayesian model selection enables comparison and ranking of conceptual subsurface models described by spatial prior models, according to the support provided by available geophysical data. Deep generative neural networks can efficiently encode such complex spatial priors, thereby allowing for a strong model dimensionality reduction that comes at the price of enhanced nonlinearity. In this setting, we explore a recent adaptive sequential Monte Carlo (ASMC) approach that builds on Annealed Importance Sampling (AIS), a method that provides both the posterior probability density function (PDF) and the evidence (a central quantity for Bayesian model selection) through a particle approximation. Both techniques are well suited to parallel computation and rely on importance sampling over a sequence of intermediate distributions linking the prior and the posterior PDF. Each subsequent distribution is approximated by updating the particle weights and states, compared with the previous approximation, using a small predefined number of Markov chain Monte Carlo (MCMC) proposal steps. Compared with AIS, the ASMC method adaptively tunes the tempering between neighboring distributions and performs resampling of particles when the variance of the particle weights becomes too large. We evaluate ASMC using two different conceptual models and associated synthetic crosshole ground penetrating radar (GPR) tomography data. For the most challenging test case, we find that the ASMC method is faster and more reliable in locating the posterior PDF than state-of-the-art adaptive MCMC. The evidence estimates are found to be robust with respect to the choice of ASMC algorithmic variables and much less sensitive to the model proposal type than MCMC...
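A minimal sketch of the adaptive tempering idea on a toy 1-D problem follows: the inverse temperature is raised adaptively so that the effective sample size (ESS) stays near a target, particles are resampled and jittered by a few Metropolis steps, and the evidence accumulates from the mean incremental weights. All tuning values (ESS target, step size, number of moves) are assumptions.

```python
# Adaptive SMC with tempering: prior -> posterior on a toy Gaussian problem.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)
log_prior = lambda x: -0.5 * x**2                  # N(0, 1) prior (unnormalised)
log_like = lambda x: -0.5 * ((x - 2.0) / 0.5)**2   # toy likelihood

N, beta, log_Z = 1000, 0.0, 0.0
x = rng.normal(0.0, 1.0, N)                        # particles from the prior

while beta < 1.0:
    lo, hi = beta, 1.0
    for _ in range(30):                            # bisect on the next temperature
        mid = 0.5 * (lo + hi)
        lw = (mid - beta) * log_like(x)
        ess = np.exp(2 * logsumexp(lw) - logsumexp(2 * lw))
        lo, hi = (mid, hi) if ess > N / 2 else (lo, mid)
    new_beta = hi
    lw = (new_beta - beta) * log_like(x)
    log_Z += logsumexp(lw) - np.log(N)             # evidence increment
    w = np.exp(lw - logsumexp(lw)); w /= w.sum()
    x = x[rng.choice(N, N, p=w)]                   # multinomial resampling
    for _ in range(5):                             # Metropolis moves at new_beta
        prop = x + 0.3 * rng.normal(size=N)
        log_a = (log_prior(prop) + new_beta * log_like(prop)
                 - log_prior(x) - new_beta * log_like(x))
        x = np.where(np.log(rng.random(N)) < log_a, prop, x)
    beta = new_beta

print("log evidence estimate:", log_Z)
```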
Interpretable machine learning for high-dimensional trajectories of aging health ; We have built a computational model for individual aging trajectories of health and survival, which contains physical, functional, and biological variables, and is conditioned on demographic, lifestyle, and medical background information. We combine techniques of modern machine learning with an interpretable interaction network, where health variables are coupled by explicit pairwise interactions within a stochastic dynamical system. Our dynamic joint interpretable network (DJIN) model is scalable to large longitudinal data sets, is predictive of individual high-dimensional health trajectories and survival from baseline health states, and infers an interpretable network of directed interactions between the health variables. The network identifies plausible physiological connections between health variables as well as clusters of strongly connected health variables. We use data from the English Longitudinal Study of Ageing (ELSA) to train our model and show that it performs better than multiple dedicated linear models for health outcomes and survival. We compare our model with flexible lower-dimensional latent-space models to explore the dimensionality required to accurately model aging health outcomes. Our DJIN model can be used to generate synthetic individuals that age realistically, to impute missing data, and to simulate future aging outcomes given arbitrary initial health states.
A Twin Neural Model for Uplift ; Uplift is a particular case of conditional treatment effect modeling. Such models deal with cause-and-effect inference for a specific factor, such as a marketing intervention or a medical treatment. In practice, these models are built on individual data from randomized clinical trials, where the goal is to partition the participants into heterogeneous groups depending on the uplift. Most existing approaches are adaptations of random forests for the uplift case. Several split criteria have been proposed in the literature, all relying on maximizing heterogeneity. However, in practice, these approaches are prone to overfitting. In this work, we bring a new vision to uplift modeling. We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk. Our solution is developed for a specific twin neural network architecture that allows us to jointly optimize the marginal probabilities of success for treated and control individuals. We show that this model is a generalization of the uplift logistic interaction model. We modify the stochastic gradient descent algorithm to allow for structured sparse solutions, which helps training our uplift models to a great extent. We show that our proposed method is competitive with the state of the art in a simulation setting and on real data from large-scale randomized experiments.
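The twin architecture can be sketched as follows: a shared representation feeds two heads that predict the probability of a positive outcome under treatment and under control, and the uplift is their difference. The layer sizes and the plain logistic loss below are illustrative assumptions, not the paper's exact loss.

```python
# Twin-head uplift network: shared trunk, treated and control heads.
import torch
import torch.nn as nn

class TwinUplift(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head_treated = nn.Linear(hidden, 1)
        self.head_control = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.shared(x)
        p1 = torch.sigmoid(self.head_treated(h))  # P(Y=1 | x, treated)
        p0 = torch.sigmoid(self.head_control(h))  # P(Y=1 | x, control)
        return p1, p0, p1 - p0                    # uplift estimate

def loss_fn(p1, p0, y, t):
    """Binary log-loss on the branch matching each individual's group."""
    p = torch.where(t.bool(), p1, p0).clamp(1e-6, 1 - 1e-6)
    return -(y * p.log() + (1 - y) * (1 - p).log()).mean()
```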
Cortado: An Interactive Tool for Data-Driven Process Discovery and Modeling ; Process mining aims to diagnose and improve operational processes. Process mining techniques allow analyzing the event data generated and recorded during the execution of business processes to gain valuable insights. Process discovery is a key discipline in process mining that comprises the discovery of process models on the basis of the recorded event data. Most process discovery algorithms work in a fully automated fashion. Apart from adjusting their configuration parameters, conventional process discovery algorithms offer limited to no user interaction, i.e., we either edit the discovered process model by hand or change the algorithm's input by, for instance, filtering the event data. However, recent work indicates that the integration of domain knowledge in semi-automated process discovery algorithms often enhances the quality of the process models discovered. Therefore, this paper introduces Cortado, a novel process discovery tool that leverages domain knowledge while incrementally discovering a process model from given event data. Starting from an initial process model, Cortado enables the user to incrementally add new process behavior to the process model under construction in a visual and intuitive manner. As such, Cortado unifies the world of manual process modeling with that of automated process discovery.
UX Ori Stars: Eclipses by Large-Scale Disc Perturbations ; We simulate the polarized radiative transfer in the vicinities of UX Ori type stars during their minima. Our model of an eclipse by an extended disc perturbation generalizes the compact gas-dust cloud eclipse model. We apply the radiative transfer method based on enumeration using the directions grid to model the influence of the perturbation's extension along azimuth and radius on the eclipse depth and the parameters of the linear polarization. We investigate eclipses both for the flared disc and for the disc with a puffing-up in the dust sublimation zone. The puffing-up is obtained by adding a disc wind to the model. Comparison with a compact cloud eclipse model reveals that the eclipse by a large-scale azimuthally extended perturbation may be significantly deeper and show a greater linear polarization degree. We also demonstrate that the perturbation extension together with the disc puffing-up can strongly affect the degree of polarization and colour index of the star during the eclipse. The position angle of the linear polarization may also change markedly during and after an eclipse by a large-scale perturbation for the model with a puffed-up inner rim. Also, in this model, the maximum degree of the linear polarization can be achieved not at the brightness minimum but closer to the end of the eclipse. We discuss the modelling results in the context of the photopolarimetric observations of UX Ori stars.
Exploring Text-to-Text Transformers for English to Hinglish Machine Translation with Synthetic Code-Mixing ; We describe models focused on the understudied problem of translating between monolingual and code-mixed language pairs. More specifically, we offer a wide range of models that convert monolingual English text into Hinglish (code-mixed Hindi and English). Given the recent success of pretrained language models, we also test the utility of two recent Transformer-based encoder-decoder models (i.e., mT5 and mBART) on the task, finding both to work well. Given the paucity of training data for code-mixing, we also propose a dependency-free method for generating code-mixed texts from bilingual distributed representations, which we exploit for improving language model performance. In particular, armed with this additional data, we adopt a curriculum learning approach where we first finetune the language models on synthetic data and then on gold code-mixed data. We find that, although simple, our synthetic code-mixing method is competitive with, and in some cases even superior to, several standard methods (back-translation, a method based on equivalence constraint theory) under a diverse set of conditions. Our work shows that the mT5 model, finetuned following the curriculum learning procedure, achieves the best translation performance (12.67 BLEU). Our models place first in the overall ranking of the English-Hinglish official shared task.
A tutorial on reproducing a predefined autocovariance function through AR models: Application to stationary homogeneous isotropic turbulence ; Sequential methods for the synthetic realisation of random processes have a number of advantages compared with spectral methods. In this article, the determination of optimal autoregressive (AR) models for reproducing a predefined target autocovariance function of a random process is addressed. To this end, a novel formulation of the problem is developed. This formulation is linear and generalises the well-known Yule-Walker (YW) equations and a recent approach based on restricted AR models (Krenk-Moller approach, KM). Two main features characterise the introduced formulation: (i) flexibility in the choice of the autocovariance equations employed in the model determination, and (ii) flexibility in the definition of the AR model scheme. Both features were exploited by a genetic algorithm to obtain optimal AR models for the particular case of synthetic generation of homogeneous stationary isotropic turbulence time series. The obtained models improved those obtained with the YW and KM approaches for the same model parsimony in terms of the global fitting of the target autocovariance function. Implications for the reproduced spectra are also discussed. The formulation for the multivariate case is also presented, highlighting the causes behind some computational bottlenecks.
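For orientation, the classical Yule-Walker route that the article generalises can be sketched in a few lines: given target autocovariances r(0), ..., r(p), the AR(p) coefficients solve a symmetric Toeplitz system, and the innovation variance follows from r(0). The exponential target autocovariance in the example is an illustrative assumption.

```python
# Yule-Walker: AR(p) coefficients from a target autocovariance sequence.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_from_autocovariance(r):
    """r: target autocovariances [r0, r1, ..., rp]. Returns (phi, sigma2)."""
    r = np.asarray(r, dtype=float)
    phi = solve_toeplitz(r[:-1], r[1:])   # Toeplitz system R phi = [r1..rp]
    sigma2 = r[0] - phi @ r[1:]           # innovation variance
    return phi, sigma2

# Example: exponential target autocovariance r(k) = exp(-k/T), AR(5) fit.
T = 5.0
r = np.exp(-np.arange(6) / T)
phi, sigma2 = ar_from_autocovariance(r)

# Synthesise a realisation with the fitted AR model.
x = np.zeros(10_000)
eps = np.random.default_rng(0).normal(0.0, np.sqrt(sigma2), x.size)
for n in range(len(phi), x.size):
    x[n] = phi @ x[n - len(phi):n][::-1] + eps[n]
```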
Predicting Aqueous Solubility of Organic Molecules Using Deep Learning Models with Varied Molecular Representations ; Determining the aqueous solubility of molecules is a vital step in many pharmaceutical, environmental, and energy storage applications. Despite efforts made over decades, there are still challenges associated with developing a solubility prediction model with satisfactory accuracy for many of these applications. The goal of this study is to develop a general model capable of predicting the solubility of a broad range of organic molecules. Using the largest currently available solubility dataset, we implement deep learning-based models to predict solubility from molecular structure and explore several different molecular representations, including molecular descriptors, simplified molecular-input line-entry system (SMILES) strings, molecular graphs, and three-dimensional (3D) atomic coordinates, using four different neural network architectures: fully connected neural networks (FCNNs), recurrent neural networks (RNNs), graph neural networks (GNNs), and SchNet. We find that models using molecular descriptors achieve the best performance, with GNN models also achieving good performance. We perform extensive error analysis to understand the molecular properties that influence model performance, perform feature analysis to understand which information about molecular structure is most valuable for prediction, and perform a transfer learning and data size study to understand the impact of data availability on model performance.
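A minimal sketch of the best-performing route (descriptors into a fully connected regressor) is given below. The four-descriptor set, the tiny toy dataset, the illustrative labels, and the network size are all assumptions for demonstration, not the study's configuration.

```python
# Descriptor-based solubility regression: RDKit descriptors -> small MLP.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.neural_network import MLPRegressor

def featurize(smiles):
    """A handful of RDKit descriptors as the molecular representation."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumRotatableBonds(mol)]

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCCCCC"]   # toy molecules
logS = [0.0, -1.6, 0.2, -3.8]                       # illustrative labels only
X = np.array([featurize(s) for s in smiles])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, logS)
print(model.predict(np.array([featurize("CCN")])))
```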
Business Suitability Principles for Workflow Modelling ; By incorporating aspects of coordination and collaboration, workflow implementations of information systems require a sound conceptualisation of business processing semantics. Traditionally, the success of conceptual modelling techniques has depended largely on the adequacy of conceptualisation, expressive power, comprehensibility and formal foundation. An equally important requirement, particularly with the increased conceptualisation of business aspects, is business suitability. In this paper, the focus is on the business suitability of workflow modelling for a commonly encountered class of operational business processing, e.g. those of insurance claims, bank loans and land conveyancing. A general assessment is first conducted on some integrated techniques characterising well-known paradigms: structured process modelling, object-oriented modelling, behavioural process modelling and business-oriented modelling. Through this, an insight into business suitability, within the broader perspective of technique adequacy, is gained. A specific business suitability diagnosis then follows, using a particular characterisation of business processing, i.e. one where the intuitive semantics and interrelationship of business services and business processes are nuanced. As a result, five business suitability principles are elicited. These are proposed for a more detailed understanding and synthetic development of workflow modelling techniques. Accordingly, further insight into workflow specification languages and workflow globalisation in open distributed architectures may also be gained.
Bayesian Origin-Destination Estimation in Networked Transit Systems using Nodal In- and Outflow Counts ; We propose a Bayesian inference approach for static Origin-Destination (OD) estimation in large-scale networked transit systems. The approach finds posterior distribution estimates of the OD coefficients, which describe the relative proportions of passengers travelling between origin and destination locations, via a Hamiltonian Monte Carlo sampling procedure. We suggest two different inference model formulations: the instantaneous-balance and the average-delay model. We discuss both models' sensitivity to various count observation properties, and establish that the average-delay model is generally more robust in determining the coefficient posteriors. The instantaneous-balance model, however, requires lower-resolution count observations and produces comparably accurate estimates as the average-delay model, provided that count observations are only moderately interfered by trend fluctuations or the truncation of the observation window, and a sufficient number of dispersed data records is available. We demonstrate that the Bayesian posterior distribution estimates provide quantifiable measures of the estimation uncertainty and prediction quality of the model, whereas the point estimates obtained from an alternative constrained quadratic programming optimisation approach only provide the residual errors between the predictions and observations. Moreover, the Bayesian approach proves more robust in scaling to high-dimensional underdetermined problems. The Bayesian instantaneous-balance OD coefficient posteriors are determined for the New York City (NYC) subway network, based on several years of entry and exit count observations recorded at station turnstiles across the network. The average-delay model proves intractable on the real-world test scenario, given its computational time complexity and the incompleteness as well as coarseness of the turnstile records.
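To convey the flavour of the instantaneous-balance formulation, here is a minimal sketch (assumed priors, likelihood and toy counts, not the paper's exact model): exit counts are modelled as Poisson draws of the entry counts allocated by Dirichlet-distributed OD coefficients, and the posterior is explored with PyMC's NUTS sampler, a Hamiltonian Monte Carlo variant.

```python
# Toy Bayesian OD estimation: Dirichlet OD coefficients, Poisson exits.
import numpy as np
import pymc as pm

n_stations = 4
entries = np.array([500.0, 300.0, 200.0, 100.0])   # observed entry counts
exits = np.array([400.0, 350.0, 250.0, 100.0])     # observed exit counts

with pm.Model() as od_model:
    # Each origin's coefficients over destinations sum to one.
    theta = pm.Dirichlet("theta", a=np.ones(n_stations),
                         shape=(n_stations, n_stations))
    expected_exits = pm.math.dot(entries, theta)    # instantaneous balance
    pm.Poisson("obs", mu=expected_exits, observed=exits)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)  # HMC/NUTS
```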
ViPTT-Net: Video pretraining of spatio-temporal model for tuberculosis type classification from chest CT scans ; Pretraining has sparked a groundswell of interest in deep learning workflows to learn from limited data and improve generalization. While this is common for 2D image classification tasks, its application to 3D medical imaging tasks like chest CT interpretation is limited. We explore the idea of whether pretraining a model on realistic videos could improve performance rather than training the model from scratch, intended for tuberculosis type classification from chest CT scans. To incorporate both spatial and temporal features, we develop a hybrid convolutional neural network (CNN) and recurrent neural network (RNN) model, where the features are extracted from each axial slice of the CT scan by a CNN and this sequence of image features is input to an RNN for classification of the CT scan. Our model, termed ViPTT-Net, was trained on over 1300 video clips with labels of human activities, and then fine-tuned on chest CT scans with labels of tuberculosis type. We find that pretraining the model on videos leads to better representations and significantly improved model validation performance, from a kappa score of 0.17 to 0.35, especially for under-represented class samples. Our best method achieved 2nd place in the ImageCLEF 2021 Tuberculosis TBT classification task with a kappa score of 0.20 on the final test set, using only image information (without clinical metadata). All codes and models are made available.
Achieving Fairness with a Simple Ridge Penalty ; In this paper we present a general framework for estimating regression models subject to a user-defined level of fairness. We enforce fairness as a model selection step in which we choose the value of a ridge penalty to control the effect of sensitive attributes. We then estimate the parameters of the model conditional on the chosen penalty value. Our proposal is mathematically simple, with a solution that is partly in closed form, and produces estimates of the regression coefficients that are intuitive to interpret as a function of the level of fairness. Furthermore, it is easily extended to generalised linear models, kernelised regression models and other penalties, and it can accommodate multiple definitions of fairness. We compare our approach with the regression model from Komiyama et al. (2018), which implements a provably optimal linear regression model, and with the fair models from Zafar et al. (2019). We evaluate these approaches empirically on six different data sets, and we find that our proposal provides better goodness of fit and better predictive accuracy for the same level of fairness. In addition, we highlight a source of bias in the original experimental evaluation of Komiyama et al. (2018).
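The core mechanism can be sketched in a few lines: a ridge penalty applied only to the coefficients of the sensitive attributes shrinks their effect, and the penalty value is increased until a user-defined fairness level is met. The fairness measure below (the share of fitted variance explained by the sensitive block) is a simplified stand-in for the paper's definition, and the data are synthetic.

```python
# Fairness via a ridge penalty restricted to the sensitive attributes.
import numpy as np

def fair_ridge(X, S, y, lam):
    """Regress y on [X, S]; ridge-penalise only the sensitive block S."""
    Z = np.hstack([X, S])
    D = np.diag([0.0] * X.shape[1] + [1.0] * S.shape[1])
    return np.linalg.solve(Z.T @ Z + lam * D, Z.T @ y)   # closed form

def sensitive_share(X, S, y, beta):
    """Fraction of the fitted variance attributable to the sensitive block."""
    yhat = np.hstack([X, S]) @ beta
    yhat_s = S @ beta[X.shape[1]:]
    return np.var(yhat_s) / np.var(yhat)

rng = np.random.default_rng(0)
X, S = rng.normal(size=(500, 3)), rng.normal(size=(500, 1))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.8 * S[:, 0] + rng.normal(size=500)

# Model selection step: smallest lambda meeting the chosen fairness bound.
lam = next(l for l in np.logspace(-2, 6, 200)
           if sensitive_share(X, S, y, fair_ridge(X, S, y, l)) < 0.05)
```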
Privileged Graph Distillation for Cold Start Recommendation ; The cold start problem in recommender systems is a long-standing challenge, which requires recommending to new users items based on attributes without any historical interaction records. In these recommendation systems, warm users/items have privileged collaborative signals of interaction records compared to cold start users/items, and these Collaborative Filtering (CF) signals are shown to have competitive performance for recommendation. Many researchers have proposed to learn the correlation between the collaborative signal embedding space and the attribute embedding space to improve cold start recommendation, since user and item categorical attributes are available in many online platforms. However, cold start recommendation is still limited by the modeling of two embedding spaces and simple assumptions about the space transformation. As user-item interaction behaviors and user/item attributes naturally form a heterogeneous graph structure, in this paper we propose a privileged graph distillation model (PGD). The teacher model is composed of a heterogeneous graph structure for warm users and items with privileged CF links. The student model is composed of an entity-attribute graph without CF links. Specifically, the teacher model can learn better embeddings of each entity by injecting complex higher-order relationships from the constructed heterogeneous graph. The student model can learn the distilled output with privileged CF embeddings from the teacher embeddings. Our proposed model is generally applicable to different cold start scenarios with a new user, a new item, or a new user-new item pair. Finally, extensive experimental results on real-world datasets clearly show the effectiveness of our proposed model on different types of cold start problems, with average improvements of 6.6%, 5.6%, and 17.1% over state-of-the-art baselines on three datasets, respectively.
Offline Reinforcement Learning as One Big Sequence Modeling Problem ; Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well in other domains, such as natural-language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as a sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.
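The planning-as-decoding idea can be sketched as follows: a trajectory is flattened into a token sequence, and beam search keeps the candidates with the highest cumulative predicted reward rather than the highest likelihood. The tiny tabular "model" and the reward read-off below are assumptions standing in for a trained trajectory Transformer over discretized states, actions and rewards.

```python
# Beam search repurposed as a planner over trajectory tokens (toy stand-in).
import numpy as np

VOCAB, HORIZON, BEAM = 16, 5, 4
rng = np.random.default_rng(0)
logits_table = rng.normal(size=(VOCAB, VOCAB))   # toy next-token model

def next_token_logprobs(seq):
    z = logits_table[seq[-1]]
    return z - np.log(np.exp(z).sum())

def token_reward(tok):
    return tok / VOCAB                           # toy reward decoded from tokens

beams = [([0], 0.0)]                             # (sequence, cumulative reward)
for _ in range(HORIZON):
    cand = []
    for seq, ret in beams:
        lp = next_token_logprobs(seq)
        for tok in np.argsort(lp)[-BEAM:]:       # expand the likeliest tokens
            cand.append((seq + [int(tok)], ret + token_reward(int(tok))))
    # Keep candidates by predicted return, not by sequence likelihood.
    beams = sorted(cand, key=lambda c: c[1], reverse=True)[:BEAM]

best_plan = beams[0][0]
```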
Dynamical properties of different models of elastic polymer rings: confirming the link between deformation and fragility ; We report extensive numerical simulations of different models of 2D polymer rings with internal elasticity. We monitor the dynamical behavior of the rings as a function of the packing fraction, to address the effects of particle deformation on the collective response of the system. In particular, we compare three different models: (i) a recently investigated model (Gnan & Zaccarelli, Nat. Phys. 15, 683 (2019)), where an inner Hertzian field providing the internal elasticity acts on the monomers of the ring; (ii) the same model where the effect of such a field on the center of mass is balanced by opposite forces; and (iii) a semiflexible model where an angular potential between adjacent monomers induces strong particle deformations. By analyzing the dynamics of the three models, we find that, in all cases, there exists a direct link between the system fragility and particle asphericity. Among the three, only the first model displays anomalous dynamics, in the form of a superdiffusive behavior of the mean squared displacement and of a compressed exponential relaxation of the density autocorrelation function. We show that this is due to the combination of internal elasticity and the out-of-equilibrium force self-generated by each ring, both of which are necessary ingredients to induce such peculiar behavior, often observed in experiments on colloidal gels. These findings reinforce the role of particle deformation, connected to internal elasticity, in driving the dynamical response of dense soft particles.
Minibatch and Momentum Model-based Methods for Stochastic Weakly Convex Optimization ; Stochastic model-based methods have received increasing attention lately due to their appealing robustness to the step-size selection and provable efficiency guarantee. We make two important extensions for improving model-based methods on stochastic weakly convex optimization. First, we propose new minibatch model-based methods by involving a set of samples to approximate the model function in each iteration. For the first time, we show that stochastic algorithms achieve linear speedup over the batch size even for non-smooth and non-convex (particularly, weakly convex) problems. To this end, we develop a novel sensitivity analysis of the proximal mapping involved in each algorithm iteration. Our analysis appears to be of independent interest in more general settings. Second, motivated by the success of momentum stochastic gradient descent, we propose a new stochastic extrapolated model-based method, greatly extending the classic Polyak momentum technique to a wider class of stochastic algorithms for weakly convex optimization. The rate of convergence to some natural stationarity condition is established over a fairly flexible range of extrapolation terms. While mainly focusing on weakly convex optimization, we also extend our work to convex optimization. We apply the minibatch and extrapolated model-based methods to stochastic convex optimization, for which we provide a new complexity bound and promising linear speedup in batch size. Moreover, an accelerated model-based method based on Nesterov's momentum is presented, for which we establish an optimal complexity bound for reaching optimality.
Measuring and Improving BERT's Mathematical Abilities by Predicting the Order of Reasoning ; Imagine you are in a supermarket. You have two bananas in your basket and want to buy four apples. How many fruits do you have in total? This seemingly straightforward question can be challenging for data-driven language models, even if trained at scale. However, we would expect such generic language models to possess some mathematical abilities in addition to typical linguistic competence. Towards this goal, we investigate if a commonly used language model, BERT, possesses such mathematical abilities and, if so, to what degree. For that, we fine-tune BERT on a popular dataset for word math problems, AQuA-RAT, and conduct several tests to understand the learned representations better. Since we teach models trained on natural language to do formal mathematics, we hypothesize that such models would benefit from training on semi-formal steps that explain how math results are derived. To better accommodate such training, we also propose new pretext tasks for learning mathematical rules. We call them (Neighbor) Reasoning Order Prediction (ROP or NROP). With this new model, we achieve significantly better outcomes than data-driven baselines and are even on par with more tailored models. We also show how to reduce positional bias in such models.
Optimization of Service Addition in Multilevel Index Model for Edge Computing ; With the development of Edge Computing and Artificial Intelligence (AI) technologies, edge devices are witnessed to generate data at unprecedented volume. Edge Intelligence (EI) has led to the emergence of edge devices in various application domains. EI can provide efficient services to delay-sensitive applications, where the edge devices are deployed as edge nodes to host the majority of execution, which can effectively manage services and improve service discovery efficiency. The multilevel index model is a well-known model used for indexing services; such a model is being introduced and optimized in edge environments to efficiently discover services whilst managing large volumes of data. However, effectively updating the multilevel index model by adding new services timely and precisely in dynamic Edge Computing environments is still a challenge. Addressing this issue, this paper proposes a designated key selection method to improve the efficiency of adding services in multilevel index models. Our experimental results show that in the partial index and the full index of the multilevel index model, our method reduces the service addition time by around 84% and 76%, respectively, when compared with the original key selection method, and by around 78% and 66%, respectively, when compared with the random selection method. Our proposed method significantly improves the service addition efficiency in the multilevel index model, when compared with existing state-of-the-art key selection methods, without compromising the service retrieval stability to any notable level.
A Revised Description of the Cosmic Ray-Induced Desorption of Interstellar Ices ; Non-thermal desorption of ices on interstellar grains is required to explain observations of molecules that are not synthesized efficiently in the gas phase in cold dense clouds. Perhaps the most important non-thermal desorption mechanism is one induced by cosmic rays (CRs), which, when passing through a grain, heat it transiently to a high temperature; the grain cools back to its original equilibrium temperature via the partial sublimation of the ice. Current cosmic-ray-induced desorption (CRD) models assume a fixed grain cooling time. In this work we present a revised description of CRD in which the desorption efficiency depends dynamically on the ice content. We apply the revised desorption scheme to two-phase and three-phase chemical models in physical conditions corresponding to starless and prestellar cores, and to molecular cloud envelopes. We find that inside starless and prestellar cores, introducing dynamic CRD can decrease gas-phase abundances by up to an order of magnitude in two-phase chemical models. In three-phase chemical models our model produces very similar results to the static cooling scheme when only one monolayer of ice is considered active. Ice abundances are generally insensitive to variations in the grain cooling time. Further improved CRD models need to take into account additional effects in the transient heating of the grains, introduced for example by the adoption of a spectrum of CR energies.
Explaining Deep Natural Language Processing by Mining Textual Interpretable Features ; Despite the high accuracy offered by state-of-the-art deep natural-language models (e.g. LSTM, BERT), their application in real-life settings is still widely limited, as they behave like a black-box to the end-user. Hence, explainability is rapidly becoming a fundamental requirement of future-generation data-driven systems based on deep-learning approaches. Several attempts to fulfill the existing gap between accuracy and interpretability have been made. However, robust and specialized xAI (Explainable Artificial Intelligence) solutions tailored to deep natural-language models are still missing. We propose a new framework, named T-EBAnO, which provides innovative prediction-local and class-based model-global explanation strategies tailored to black-box deep natural-language models. Given a deep NLP model and the textual input data, T-EBAnO provides an objective, human-readable, domain-specific assessment of the reasons behind the automatic decision-making process. Specifically, the framework extracts sets of interpretable features by mining the inner knowledge of the model. Then, it quantifies the influence of each feature during the prediction process by exploiting the novel normalized Perturbation Influence Relation index at the local level and the novel Global Absolute Influence and Global Relative Influence indexes at the global level. The effectiveness and quality of the local and global explanations obtained with T-EBAnO are proved on (i) a sentiment analysis task performed by a fine-tuned BERT model, and (ii) a toxic comment classification task performed by an LSTM model.
Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization ; As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model and restricts the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: (1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, our NTL-based ownership verification provides robust resistance to state-of-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10, STL10, and VisDA datasets. (2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. Our NTL-based authorization approach instead provides data-centric protection, which we call applicability authorization, by significantly degrading the performance of the model on unauthorized data. Its effectiveness is also shown through experiments on the aforementioned datasets.
DGL-LifeSci: An Open-Source Toolkit for Deep Learning on Graphs in Life Science ; Graph neural networks (GNNs) constitute a class of deep learning methods for graph data. They have wide applications in chemistry and biology, such as molecular property prediction, reaction prediction and drug-target interaction prediction. Despite the interest, GNN-based modeling is challenging as it requires graph data pre-processing and modeling in addition to programming and deep learning. Here we present DGL-LifeSci, an open-source package for deep learning on graphs in life science. DGL-LifeSci is a Python toolkit based on RDKit, PyTorch and Deep Graph Library (DGL). DGL-LifeSci allows GNN-based modeling on custom datasets for molecular property prediction, reaction prediction and molecule generation. With its command-line interfaces, users can perform modeling without any background in programming and deep learning. We test the command-line interfaces using standard benchmarks (MoleculeNet, USPTO, and ZINC). Compared with previous implementations, DGL-LifeSci achieves a speedup of up to 6x. For modeling flexibility, DGL-LifeSci provides well-optimized modules for various stages of the modeling pipeline. In addition, DGL-LifeSci provides pretrained models for reproducing the test experiment results and applying models without training. The code is distributed under an Apache-2.0 License and is freely accessible at https://github.com/awslabs/dgl-lifesci.
Reinforcement Learning-based Disease Progression Model for Alzheimer's Disease ; We model Alzheimer's disease (AD) progression by combining differential equations (DEs) and reinforcement learning (RL) with domain knowledge. DEs provide relationships between some, but not all, factors relevant to AD. We assume that the missing relationships must satisfy general criteria about the working of the brain, e.g., maximizing cognition while minimizing the cost of supporting cognition. This allows us to extract the missing relationships by using RL to optimize an objective (reward) function that captures the above criteria. We use our model, consisting of the DEs as a simulator and the trained RL agent, to predict individualized 10-year AD progression from baseline (year 0) features on synthetic and real data. The model was comparable to or better than state-of-the-art learning-based models at predicting 10-year cognition trajectories. Our interpretable model demonstrated, and provided insights into, recovery/compensatory processes that mitigate the effect of AD, even though those processes were not explicitly encoded in the model. Our framework combines DEs with RL for modelling AD progression and has broad applicability for understanding other neurological disorders.
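The brain-level criterion above maps naturally onto a scalar reward. The toy sketch below is meant only to make that objective concrete; the weight and both terms are illustrative, not the paper's calibrated quantities.

```python
# Hedged sketch of the kind of reward the abstract describes: maximize
# cognition while penalizing the cost of supporting it. `lam` and both
# terms are illustrative placeholders, not the paper's quantities.
def reward(cognition: float, support_cost: float, lam: float = 0.5) -> float:
    return cognition - lam * support_cost

# An RL agent would be trained to maximize the discounted sum of such rewards
# along trajectories generated by the DE simulator:
trajectory = [(0.9, 0.2), (0.8, 0.3), (0.7, 0.5)]   # toy (cognition, cost) pairs
ret = sum(0.95 ** t * reward(c, m) for t, (c, m) in enumerate(trajectory))
print(round(ret, 3))
```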
LEO: Learning Energy-based Models in Factor Graph Optimization ; We address the problem of learning observation models end-to-end for estimation. Robots operating in partially observable environments must infer latent states from multiple sensory inputs using observation models that capture the joint distribution between latent states and observations. This inference problem can be formulated as an objective over a graph that optimizes for the most likely sequence of states using all previous measurements. Prior work uses observation models that are either known a priori or trained on surrogate losses independent of the graph optimizer. In this paper, we propose a method to directly optimize end-to-end tracking performance by learning observation models with the graph optimizer in the loop. This direct approach may appear to require the inference algorithm to be fully differentiable, which many state-of-the-art graph optimizers are not. Our key insight is to instead formulate the problem as one of energy-based learning. We propose a novel approach, LEO, for learning observation models end-to-end with graph optimizers that may be non-differentiable. LEO alternates between sampling trajectories from the graph posterior and updating the model to match these samples to ground-truth trajectories. We propose a way to generate such samples efficiently using incremental Gauss-Newton solvers. We compare LEO against baselines on datasets drawn from two distinct tasks: navigation and real-world planar pushing. We show that LEO is able to learn complex observation models with lower errors and fewer samples. Supplementary video: https://youtu.be/YqzlUPudfkA
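The alternation LEO describes can be sketched with toy stand-ins: draw trajectory samples from the current graph posterior, then nudge the observation-model parameter so the samples match ground truth. Here `sample_posterior` abstracts the (possibly non-differentiable) graph optimizer, and the update rule is a simple illustrative surrogate for the paper's energy-based update.

```python
# Schematic of LEO-style alternation with toy stand-ins. `sample_posterior`
# abstracts a (possibly non-differentiable) graph optimizer such as an
# incremental Gauss-Newton solver; all names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_posterior(theta, measurements, n=8):
    # Stand-in for the graph optimizer: noisy estimates around theta-scaled data.
    return [theta * measurements + 0.1 * rng.standard_normal(measurements.shape)
            for _ in range(n)]

theta = 0.5                                  # observation-model parameter
measurements = rng.standard_normal(100)
ground_truth = 0.9 * measurements            # true trajectory

for step in range(200):
    samples = sample_posterior(theta, measurements)
    # Update the model so posterior samples move toward the ground truth.
    grad = np.mean([np.mean((s - ground_truth) * measurements) for s in samples])
    theta -= 0.05 * grad
print(round(theta, 2))                       # approaches ~0.9
```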
Supervised Neural Networks for Illiquid Alternative Asset Cash Flow Forecasting ; Institutional investors have been increasing their allocation to illiquid alternative assets, such as private equity funds, in their portfolios, yet there exists very limited literature on cash flow forecasting for illiquid alternative assets. The net cash flow of private equity funds typically follows a J-curve pattern; however, the timing and size of the contributions and distributions depend on the investment opportunities. In this paper, we develop a benchmark model and present two novel approaches (direct vs. indirect) to predict the cash flows of private equity funds. We introduce a sliding-window approach to apply to our cash flow data, because funds of different vintage years contain different lengths of cash flow information. We then pass the data to an LSTM/GRU model to predict the future cash flows, either directly or indirectly, based on the benchmark model. We further integrate macroeconomic indicators into our data, which allows us to consider the impact of the market environment on cash flows and to apply stress testing. Our results indicate that the direct model is easier to implement than the benchmark model and the indirect model, while its predicted cash flows still align better with the actual cash flows. We also show that macroeconomic variables improve the performance of the direct model, whereas the impact is not obvious for the indirect model.
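The sliding-window idea is simple to state in code: each fund's series, whatever its vintage-year length, is cut into fixed-size (window, next value) pairs that a recurrent model can consume. The sketch below uses toy sizes and a GRU; all names and dimensions are illustrative.

```python
# A minimal sliding-window setup for variable-length fund cash-flow series,
# paired with a GRU forecaster. Window length and model sizes are illustrative.
import torch
import torch.nn as nn

def sliding_windows(series, window=8):
    """Turn one fund's cash-flow series into (input window, next value) pairs."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])
        ys.append(series[i + window])
    return torch.tensor(xs).unsqueeze(-1).float(), torch.tensor(ys).float()

class GRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, h = self.gru(x)                    # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

# Different vintage years yield different series lengths; windows make them comparable.
fund = [-10, -20, -15, -5, 2, 8, 15, 20, 18, 12, 6]   # toy J-curve net cash flows
x, y = sliding_windows(fund)
model = GRUForecaster()
print(model(x).shape, y.shape)                # torch.Size([3]) torch.Size([3])
```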
PoolRank: Max-Min Pooling-based Ranking Loss for Listwise Learning and Ranking Balance ; Numerous neural retrieval models have been proposed in recent years. These models learn to compute a ranking score between the given query and document. The majority of existing models are trained in a pairwise fashion, using human-judged labels directly without further calibration. The traditional pairwise schemes can be time-consuming and require predefined positive-negative document pairs for training, potentially leading to learning bias due to document-distribution mismatch between training and test conditions. Some popular existing listwise schemes rely on strong predefined probabilistic assumptions and a stark difference between relevant and non-relevant documents for the given query, which may limit the model's potential due to low-quality or ambiguous relevance labels. To address these concerns, we turn to a physics-inspired ranking-balance scheme and propose PoolRank, a pooling-based listwise learning framework. The proposed scheme has four major advantages: (1) PoolRank extracts training information from the best candidates at the local level, based on model performance and relative ranking among abundant document candidates. (2) By combining four pooling-based loss components in a multi-task learning fashion, PoolRank calibrates the ranking balance for the partially relevant and the highly non-relevant documents automatically, without costly human inspection. (3) PoolRank can be easily generalized to any neural retrieval model without requiring additional learnable parameters or model-structure modifications. (4) Compared to pairwise learning and existing listwise learning schemes, PoolRank yields better ranking performance for all studied retrieval models while retaining efficient convergence rates.
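One illustrative max-min pooling loss component in this spirit is sketched below: pool over the candidate list so the weakest relevant document is pushed above the strongest non-relevant one by a margin. The full framework combines four such components; this is not the paper's exact formulation.

```python
# One illustrative max-min pooling component in the spirit of PoolRank,
# not the paper's exact loss: the weakest relevant candidate should score
# above the strongest non-relevant one by a margin.
import torch

def maxmin_pool_loss(scores, labels, margin=1.0):
    """scores: (n_candidates,) model scores; labels: 1 = relevant, 0 = not."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    hardest_pos = pos.min()   # weakest relevant candidate (min-pooling)
    hardest_neg = neg.max()   # strongest non-relevant candidate (max-pooling)
    return torch.relu(margin - hardest_pos + hardest_neg)

scores = torch.tensor([2.1, 0.4, 1.3, -0.2, 0.9], requires_grad=True)
labels = torch.tensor([1, 0, 1, 0, 0])
loss = maxmin_pool_loss(scores, labels)
loss.backward()
print(loss.item())            # 0.6 = 1.0 - 1.3 + 0.9
```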
Mixture of Linear Models Co-supervised by Deep Neural Networks ; Deep neural network (DNN) models have achieved phenomenal success for applications in many domains, ranging from academic research in science and engineering to industry and business. The modeling power of DNNs is believed to come from the complexity and over-parameterization of the model, which, on the other hand, has been criticized for a lack of interpretation. Although certainly not true for every application, in some applications, especially in economics, social science, the healthcare industry, and administrative decision making, scientists or practitioners are reluctant to use predictions made by a black-box system, for multiple reasons. One reason is that a major purpose of a study can be to make discoveries based upon the prediction function, e.g., to reveal the relationships between measurements. Another reason can be that the training dataset is not large enough to make researchers feel completely sure about a purely data-driven result. Being able to examine and interpret the prediction function will enable researchers to connect the result with existing knowledge or gain insights about new directions to explore. Although classic statistical models are much more explainable, their accuracy often falls considerably below that of DNNs. In this paper, we propose an approach to fill the gap between relatively simple explainable models and DNNs, such that we can more flexibly tune the trade-off between interpretability and accuracy. Our main idea is a mixture of discriminative models that is trained with guidance from a DNN. Although mixtures of discriminative models have been studied before, our way of generating the mixture is quite different.
A data-driven peridynamic continuum model for upscaling molecular dynamics ; Nonlocal models, including peridynamics, often use integral operators that embed length-scales in their definition. However, the integrands in these operators are difficult to define from the data that are typically available for a given physical system, such as laboratory mechanical property tests. In contrast, molecular dynamics (MD) does not require these integrands, but it suffers from computational limitations in the length and time scales it can address. To combine the strengths of both methods and to obtain a coarse-grained, homogenized continuum model that efficiently and accurately captures materials' behavior, we propose a learning framework to extract, from MD data, an optimal Linear Peridynamic Solid (LPS) model as a surrogate for MD displacements. To maximize the accuracy of the learnt model, we allow the peridynamic influence function to be partially negative, while preserving the well-posedness of the resulting model. To achieve this, we provide sufficient well-posedness conditions for discretized LPS models with sign-changing influence functions and develop a constrained optimization algorithm that minimizes the equation residual while enforcing such solvability conditions. This framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training. We illustrate the efficacy of the proposed approach with several numerical tests for single-layer graphene. Our two-dimensional tests show the robustness of the proposed algorithm on validation data sets that include thermal noise, different domain shapes and external loadings, and discretizations substantially different from the ones used for training.
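In generic form, the constrained learning problem described above can be written as follows; the symbols are illustrative placeholders rather than the paper's notation:

```latex
% Generic residual-minimization form of the learning problem, with
% L_K the discretized LPS operator under influence function K, u_i the
% MD displacement snapshots, b_i the corresponding loadings, and C the
% set of (possibly sign-changing) influence functions satisfying the
% discrete solvability conditions. All symbols are our placeholders.
\min_{K}\ \sum_{i=1}^{N} \big\| \mathcal{L}_{K}[u_i] - b_i \big\|_{\ell^2}^2
\quad \text{subject to} \quad K \in \mathcal{C}.
```

The constraint set is what distinguishes this from plain regression: it keeps the learnt operator solvable even when the influence function changes sign.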
BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis ; Vision-and-language (VL) models take image and text as input and learn to capture the associations between them. Prior studies show that pre-trained VL models can significantly improve the model performance for downstream tasks such as Visual Question Answering (VQA). However, VL models are less effective when applied in the medical domain (e.g., on X-ray images and clinical notes) due to the domain gap. In this paper, we investigate the challenges of applying pre-trained VL models in medical applications. In particular, we identify that the visual representation in general VL models is not suitable for processing medical data. To overcome this limitation, we propose BERTHop, a transformer-based model based on PixelHop and VisualBERT, for better capturing the associations between the two modalities. Experiments on the OpenI dataset, a commonly used thoracic disease diagnosis benchmark, show that BERTHop achieves an average Area Under the Curve (AUC) of 98.12%, which is 1.62% higher than state-of-the-art (SOTA), while being trained on a 9-times smaller dataset.
Physics-inspired architecture for neural network modeling of forces and torques in particle-laden flows ; We present a physics-inspired neural network (PINN) model for direct prediction of the hydrodynamic forces and torques experienced by individual particles in stationary beds of randomly distributed spheres. In line with our findings, it has recently been demonstrated that conventional fully connected neural networks (FCNNs) are incapable of making accurate predictions of force variations in a static bed of spheres. The problem arises from the large number of input variables (i.e., the locations of individual neighboring particles), which leads to an overwhelmingly large number of training parameters in a fully connected architecture. Given the typically limited size of training datasets that can be generated by particle-resolved simulations, the NN becomes prone to missing true patterns in the data, ultimately leading to overfitting. Inspired by our observations in developing the microstructure-informed probability-driven point-particle (MPP) model, we incorporate two main features in the architecture of the present PINN model: (1) superposition of pairwise hydrodynamic interactions between particles, and (2) sharing of training parameters between the NN blocks that model neighbor influences. These strategies help to substantially reduce the number of free parameters and thereby control the model complexity without compromising accuracy. We demonstrate that direct force and torque prediction using NNs is indeed possible, provided that the model structure corresponds to the underlying physics of the problem. For a Reynolds number range of $2 \leq Re \leq 150$ and solid volume fractions of $0.1 \leq \phi \leq 0.4$, the PINN's predictions prove to be as accurate as those of other microstructure-informed models.
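The two architectural features can be expressed compactly in code, as in the sketch below: a single shared block evaluates each neighbor's pairwise contribution, and the contributions are superposed by summation. Layer sizes and inputs are illustrative, not the paper's configuration.

```python
# Sketch of the two features the abstract names: one shared block models each
# neighbor's pairwise influence (parameter sharing), and per-neighbor outputs
# are summed (superposition). Sizes and inputs are illustrative.
import torch
import torch.nn as nn

class PairwiseSuperpositionNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # One block, reused for every neighbor -> parameters are shared.
        self.pair_block = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),            # force contribution (Fx, Fy, Fz)
        )

    def forward(self, rel_positions):
        # rel_positions: (batch, n_neighbors, 3) relative neighbor locations
        contributions = self.pair_block(rel_positions)
        return contributions.sum(dim=1)      # superposition over neighbors

net = PairwiseSuperpositionNet()
neighbors = torch.randn(16, 25, 3)           # 25 neighbors per target particle
print(net(neighbors).shape)                  # torch.Size([16, 3])
```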
Modeling Intense-Electron-Beam Generated Plasmas Using a Rigid-Beam Approximation ; A model of an electron-beam/plasma system is introduced to describe the electrical breakdown physics of low-pressure nitrogen irradiated by an intense pulsed electron beam. The rapidly rising beam current induces an electric field which drives a return current in the plasma. The rigid-beam model is a reduction of the problem geometry to cylindrical coordinates, together with simplifications to Maxwell's equations that are driven by a prescribed electron-beam current density. The model is convenient for comparing various reductions of the plasma dynamics and plasma chemistry while maintaining a good approximation to the overall magnitude of the beam-created electric field. The usefulness of this model is demonstrated by coupling the rigid-beam model to a fluid plasma model and a simplified nitrogen plasma chemistry. The dynamics of this coupled system are computed for a range of background gas pressures, and the results are compared with experimental measurements. At pressures of 1 Torr and above, the simulated line-integrated electron densities are within a factor of two of measurements and show the same trend with pressure as observed in experiment.
Characterizing stochastic cell cycle dynamics in exponential growth ; Two powerful and complementary experimental approaches are commonly used to study the cell cycle and cell biology: one class of experiments characterizes the statistics (or demographics) of an unsynchronized, exponentially growing population, while the other captures cell cycle dynamics, either by time-lapse imaging of full cell cycles or in bulk experiments on synchronized populations. In this paper, we study the subtle relationship between observations in these two distinct experimental approaches. We begin with an existing model: a single-cell deterministic description of cell cycle dynamics where cell states (i.e., periods or phases) have precise lifetimes. We then generalize this description to a stochastic model in which the states have stochastic lifetimes, as described by arbitrary probability distribution functions. Our analyses of the demographics of an exponential culture reveal a simple and exact correspondence between the deterministic and stochastic models: the corresponding state lifetimes in the deterministic model are equal to the exponential mean of the lifetimes in the stochastic model. An important implication is therefore that the demographics of an exponential culture will be well fit by a deterministic model even if the state timing is stochastic. Although we explore the implications of the models in the context of the Escherichia coli cell cycle, we expect both the models and the significance of the exponential-mean lifetimes to find many applications in the quantitative analysis of cell cycle dynamics in other biological systems.
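One standard way to write the correspondence the abstract states is given below, assuming a population growing exponentially at rate $\lambda$ and a stochastic state lifetime $\tau$ with density $f(\tau)$; the notation is ours, not necessarily the paper's:

```latex
% Hedged definition of the exponential-mean lifetime: the deterministic state
% lifetime matching the demographics of an exponential culture. \lambda is the
% population growth rate; f(\tau) is the lifetime distribution of the state.
\tilde{\tau} \;=\; -\frac{1}{\lambda}\,
\ln\!\left( \int_0^{\infty} e^{-\lambda \tau} f(\tau)\, d\tau \right)
\;=\; -\frac{1}{\lambda}\, \ln \left\langle e^{-\lambda \tau} \right\rangle .
```

The exponential weighting arises because younger cells are over-represented in an exponentially growing population, so short lifetimes count more than long ones.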
Learning Energy-Based Approximate Inference Networks for Structured Applications in NLP ; Structured prediction in natural language processing (NLP) has a long history. The complexity of structured models makes learning and inference difficult, and these difficulties have led researchers to focus on models with simple structured components (e.g., local classifiers). Deep representation learning has become increasingly popular in recent years; the structured components of such methods, however, are usually relatively simple. We concentrate on complex structured models in this dissertation. We provide a learning framework for complicated structured models, as well as an inference method with a better speed/accuracy/search-error trade-off. The dissertation begins with a general introduction to energy-based models. In NLP and other applications, an energy function is comparable to the concept of a scoring function. In this dissertation, we discuss the concept of the energy function and structured models with different energy functions. Then, we propose a method in which we train a neural network to do argmax inference under a structured energy function, referring to the trained networks as inference networks or energy-based inference networks. We then develop ways of jointly learning energy functions and inference networks using an adversarial learning framework. Despite the inference and learning difficulties of energy-based models, we present approaches in this thesis that make energy-based models easier to apply in structured NLP applications.
SummerTime: Text Summarization Toolkit for Non-experts ; Recent advances in summarization provide models that can generate summaries of higher quality. Such models now exist for a number of summarization tasks, including query-based summarization, dialogue summarization, and multi-document summarization. While such models and tasks are rapidly growing in the research field, it has also become challenging for non-experts to keep track of them. To make summarization methods more accessible to a wider audience, we develop SummerTime by rethinking the summarization task from the perspective of an NLP non-expert. SummerTime is a complete toolkit for text summarization, including various models, datasets, and evaluation metrics, for a full spectrum of summarization-related tasks. SummerTime integrates with libraries designed for NLP researchers and provides users with easy-to-use APIs. With SummerTime, users can locate pipeline solutions, search for the best model with their own data, and visualize the differences between models, all with a few lines of code. We also provide explanations of the models and evaluation metrics to help users understand model behaviors and select the models that best suit their needs. Our library, along with a notebook demo, is available at https://github.com/Yale-LILY/SummerTime.
Data-Driven Wind Turbine Wake Modeling via Probabilistic Machine Learning ; Wind farm design depends primarily on the variability of the wind-turbine wake flows with the atmospheric wind conditions and on the interaction between wakes. Physics-based models that capture the wake flow field with high fidelity are computationally very expensive for performing layout optimization of wind farms; thus, data-driven reduced-order models can represent an efficient alternative for simulating wind farms. In this work, we use real-world light detection and ranging (LiDAR) measurements of wind-turbine wakes to construct predictive surrogate models using machine learning. Specifically, we first demonstrate the use of deep autoencoders to find a low-dimensional latent space that gives a computationally tractable approximation of the wake LiDAR measurements. Then, we learn the mapping between the parameter space and the latent-space wake flow fields using a deep neural network. Additionally, we demonstrate the use of a probabilistic machine learning technique, namely Gaussian process modeling, to learn the parameter-space-to-latent-space mapping along with the epistemic and aleatoric uncertainty in the data. Finally, to cope with training on large datasets, we demonstrate the use of variational Gaussian process models, which provide a tractable alternative to conventional Gaussian process models for large datasets. Furthermore, we introduce the use of active learning to adaptively build and improve a conventional Gaussian process model's predictive capability. Overall, we find that our approach provides accurate approximations of the wind-turbine wake flow field that can be queried at an orders-of-magnitude cheaper cost than those generated with high-fidelity physics-based simulations.
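A schematic two-stage pipeline along these lines is sketched below: an autoencoder compresses wake snapshots to a low-dimensional latent vector, then a Gaussian process maps inflow parameters to that latent space and supplies predictive uncertainty. Dimensions, training loop, and parameter names are toy-sized illustrations, not the paper's setup.

```python
# Schematic two-stage surrogate: (1) an autoencoder compresses wake snapshots
# to a latent vector; (2) a Gaussian process maps inflow parameters to that
# latent space, with predictive std as an uncertainty estimate. Toy-sized.
import numpy as np
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

snapshots = torch.randn(200, 1024)        # flattened wake measurements (toy)
params = np.random.rand(200, 2)           # e.g. wind speed, turbulence intensity

encoder = nn.Sequential(nn.Linear(1024, 128), nn.ReLU(), nn.Linear(128, 8))
decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 1024))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(50):                       # stage 1: fit the autoencoder
    opt.zero_grad()
    z = encoder(snapshots)
    loss = nn.functional.mse_loss(decoder(z), snapshots)
    loss.backward(); opt.step()

latents = encoder(snapshots).detach().numpy()
gp = GaussianProcessRegressor(kernel=RBF()).fit(params, latents)  # stage 2

z_new, z_std = gp.predict(np.array([[0.5, 0.2]]), return_std=True)
wake_pred = decoder(torch.tensor(z_new, dtype=torch.float32))
print(wake_pred.shape)                    # torch.Size([1, 1024])
```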
Enhancing Self-Disclosure in Neural Dialog Models by Candidate Re-ranking ; Neural language modelling has advanced the state of the art in several downstream Natural Language Processing (NLP) tasks. One such area is open-domain dialog modelling; neural dialog models based on GPT-2, such as DialoGPT, have shown promising performance in single-turn conversation. However, such neural dialog models have been criticized for generating responses which, although possibly relevant to the previous human response, tend to quickly dissipate human interest and descend into trivial conversation. One reason for such performance is the lack of an explicit conversation strategy in human-machine conversation. Humans employ a range of conversation strategies while engaging in a conversation; one key social strategy is self-disclosure (SD), the phenomenon of revealing information about oneself to others. Social penetration theory (SPT) proposes that communication between two people moves from shallow to deeper levels as the relationship progresses, primarily through self-disclosure. Disclosure helps create rapport among the participants engaged in a conversation. In this paper, a Self-disclosure Enhancement Architecture (SDEA) is introduced that utilizes a Self-disclosure Topic Model (SDTM) during the inference stage of a neural dialog model to re-rank response candidates, enhancing self-disclosure in the model's single-turn responses.
Cosmological search for sterile neutrinos after Planck 2018 ; Sterile neutrinos can affect the evolution of the universe, and thus cosmological observations can be used to search for them. In this work, we use the cosmic microwave background (CMB) anisotropy data from the Planck 2018 release, combined with the latest baryon acoustic oscillation (BAO), type Ia supernova (SN), and Hubble constant ($H_0$) data, to constrain cosmological models that include sterile neutrinos. In order to test the influence of the properties of dark energy on the results of searching for sterile neutrinos, in addition to the $\Lambda$ cold dark matter ($\Lambda$CDM) model, we also consider the wCDM model and the holographic dark energy (HDE) model. We find that the existence of sterile neutrinos is not preferred when the $H_0$ local measurement is not included in the data combination. When the $H_0$ measurement is included in the joint constraints, it is found that $\Delta N_{\rm eff} > 0$ is favored at about the $2.7\sigma$ level for the $\Lambda$CDM model and at about the $11.7\sigma$ level for the wCDM model. However, $m_{\nu,{\rm sterile}}^{\rm eff}$ still cannot be well constrained, and only upper limits can be given. In addition, we find that the HDE model is definitely ruled out by the current data. We also discuss the issue of the Hubble tension, and we conclude that involving sterile neutrinos in the cosmological models cannot truly resolve the Hubble tension.
Beyond the Freshman's Dream: Classical fractal spin liquids from matrix cellular automata in three-dimensional lattice models ; We construct models hosting classical fractal spin liquids on two realistic three-dimensional (3D) lattices of corner-sharing triangles: trillium and hyper-hyperkagome (HHK). Both models involve the same form of three-spin Ising interactions on triangular plaquettes as the Newman-Moore (NM) model on the 2D triangular lattice. However, in contrast to the NM model and its 3D generalizations, their degenerate ground states and low-lying excitations cannot be described in terms of scalar cellular automata (CA), because the corresponding fractal structures lack a simplifying algebraic property, often termed the 'Freshman's dream'. By identifying a link to matrix CAs that makes essential use of the crystallographic structure, we show that both models exhibit fractal symmetries of a distinct class to the NM-type models. We devise a procedure to explicitly construct low-energy excitations consisting of finite sets of immobile defects, or fractons, by flipping arbitrarily large self-similar subsets of spins, whose fractal dimensions we compute analytically. We show that these excitations are associated with energetic barriers which increase logarithmically with system size, leading to fragile glassy dynamics, whose existence we confirm via classical Monte Carlo simulations. We also discuss consequences for spontaneous fractal symmetry breaking when quantum fluctuations are introduced by a transverse magnetic field, and propose multi-spin correlation function diagnostics for such transitions. Our findings suggest that matrix CAs may provide a fruitful route to identifying fractal symmetries and fracton-like behaviour in lattice models, with possible implications for the study of fracton topological order.
YES SIR: Optimizing Semantic Space of Negatives with Self-Involvement Ranker ; Pre-trained models such as BERT have proved to be effective tools for dealing with Information Retrieval (IR) problems. Due to their inspiring performance, they have been widely used to tackle real-world IR problems such as document ranking. Recently, researchers have found that selecting hard rather than random negative samples is beneficial for fine-tuning pre-trained models on ranking tasks. However, it remains elusive how to leverage hard negative samples in a principled way. To address the aforementioned issues, we propose a fine-tuning strategy for document ranking, namely the Self-Involvement Ranker (SIR), that dynamically selects hard negative samples to construct a high-quality semantic space for training a high-quality ranking model. Specifically, SIR consists of sequential compressors implemented with pre-trained models; the front compressor selects hard negative samples for the rear compressor. Moreover, SIR leverages a supervisory signal to adaptively adjust the semantic space of negative samples. Finally, the supervisory signal in the rear compressor is computed from conditional probabilities and can thus control sample dynamics and further enhance model performance. SIR is a lightweight and general framework for pre-trained models that simplifies the ranking process in industry practice. We test our proposed solution on MS MARCO in the document-ranking setting, and the results show that SIR can significantly improve the ranking performance of various pre-trained models. Moreover, our method anonymously became the new SOTA model on the MS MARCO document-ranking leaderboard in May 2021.
MDAPT: Multilingual Domain Adaptive Pretraining in a Single Model ; Domain adaptive pretraining, i.e., the continued unsupervised pretraining of a language model on domain-specific text, improves the modelling of text for downstream tasks within the domain. Numerous real-world applications are based on domain-specific text, e.g., working with financial or biomedical documents, and these applications often need to support multiple languages. However, large-scale domain-specific multilingual pretraining data for such scenarios can be difficult to obtain, due to regulations, legislation, or simply a lack of language- and domain-specific text. One solution is to train a single multilingual model, taking advantage of the data available in as many languages as possible. In this work, we explore the benefits of domain adaptive pretraining with a focus on adapting to multiple languages within a specific domain. We propose different techniques to compose pretraining corpora that enable a language model to become both domain-specific and multilingual. Evaluation on nine domain-specific datasets (for biomedical named entity recognition and financial sentence classification) covering seven different languages shows that a single multilingual domain-specific model can outperform the general multilingual model and performs close to its monolingual counterpart. This finding holds across two different pretraining methods: adapter-based pretraining and full-model pretraining.
Discretization-independent surrogate modeling over complex geometries using hypernetworks and implicit representations ; Numerical solutions of partial differential equations (PDEs) require expensive simulations, limiting their application in design optimization, model-based control, and large-scale inverse problems. Surrogate modeling techniques seek to decrease the computational expense while retaining dominant solution features and behavior. Traditional Convolutional Neural Network-based frameworks for surrogate modeling require lossy pixelization and data preprocessing and are generally not effective in realistic engineering applications. We propose alternative deep-learning-based surrogate models for discretization-independent, continuous representations of PDE solutions, which can be used for learning and prediction over domains with complex, variable geometry and mesh topology. Three methods are proposed and compared: design-variable-coded multilayer perceptron (DV-MLP), design-variable hypernetworks (DVH-Net), and non-linear independent dual system (NIDS). Each method utilizes a main network which consumes pointwise spatial information to provide a continuous representation, allowing predictions at any location in the domain. Input features include a minimum-distance function evaluation to implicitly encode the problem geometry. The geometric design variables, which define and distinguish problem instances, are used differently by each method, appearing as additional main-network input features (DV-MLP) or as hypernetwork inputs (DVH-Net and NIDS). The methods are applied to predict solutions around complex, parametrically defined geometries on non-parametrically defined meshes, with model predictions obtained many orders of magnitude faster than with the full-order models. Test cases include a vehicle-aerodynamics problem with complex geometry and limited training data, for which the design-variable hypernetwork performed best, with a competitive time-to-best-model despite a much greater parameter count.
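A minimal hypernetwork sketch appears below: the design variables generate the weights of a small main network that maps spatial coordinates (plus a minimum-distance feature) to the field value, giving a mesh-free, continuous prediction. All sizes and feature choices are illustrative, not the paper's configuration.

```python
# Minimal hypernetwork sketch: design variables generate the weights of a
# small main network mapping (x, y, min-distance) -> field value. Sizes and
# features are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

IN, HID = 3, 32          # main-net input: (x, y, min-distance); one hidden layer

class HyperNet(nn.Module):
    def __init__(self, n_design=4):
        super().__init__()
        n_weights = IN * HID + HID + HID * 1 + 1   # main-net parameter count
        self.gen = nn.Sequential(nn.Linear(n_design, 64), nn.ReLU(),
                                 nn.Linear(64, n_weights))

    def forward(self, design, points):
        w = self.gen(design)                        # all main-net params at once
        i = 0
        W1 = w[i:i + IN * HID].view(HID, IN); i += IN * HID
        b1 = w[i:i + HID]; i += HID
        W2 = w[i:i + HID].view(1, HID); i += HID
        b2 = w[i:i + 1]
        h = torch.tanh(F.linear(points, W1, b1))    # main net, weights from hypernet
        return F.linear(h, W2, b2)

net = HyperNet()
design = torch.randn(4)                             # geometric design variables
points = torch.randn(500, IN)                       # query locations + distance feature
print(net(design, points).shape)                    # torch.Size([500, 1])
```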
Universal Adversarial Attack on Deep Learning Based Prognostics ; Deep learning-based time series models are being extensively utilized in engineering and manufacturing industries for process control and optimization, asset monitoring, and diagnostic and predictive maintenance. These models have shown great improvement in predicting the remaining useful life (RUL) of industrial equipment, but they suffer from an inherent vulnerability to adversarial attacks. These attacks can be easily exploited and can lead to catastrophic failure of critical industrial equipment. In general, different adversarial perturbations are computed for each instance of the input data. This is, however, difficult for the attacker to achieve in real time, due to the higher computational requirements and the lack of uninterrupted access to the input data. Hence, we present the concept of a universal adversarial perturbation, a special imperceptible noise, to fool regression-based RUL prediction models. Attackers can easily utilize universal adversarial perturbations for real-time attacks, since continuous access to input data and repetitive computation of adversarial perturbations are not prerequisites. We evaluate the effect of universal adversarial attacks using the NASA turbofan engine dataset. We show that adding the universal adversarial perturbation to any instance of the input data increases the error in the model's output predictions. To the best of our knowledge, we are the first to study the effect of universal adversarial perturbations on time series regression models. We further demonstrate the effect of varying the strength of perturbations on RUL prediction models and find that model accuracy decreases as the perturbation strength of the universal adversarial attack increases. We also show that universal adversarial perturbations can be transferred across different models.
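The core construction is a single shared perturbation optimized over many inputs, as in the hedged sketch below: gradient ascent on the model's error with respect to one delta, clipped to an L-infinity budget to stay imperceptible. The toy regressor and data shapes are stand-ins loosely modeled on turbofan sensor windows.

```python
# Sketch of crafting a universal perturbation for a regression-based RUL
# model: one shared delta is updated across many input windows and kept
# imperceptible via an L-infinity clip. Model and data are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(30 * 14, 64), nn.ReLU(),
                      nn.Linear(64, 1))              # toy RUL regressor
model.eval()

windows = torch.randn(256, 30, 14)                   # toy sensor windows
rul = torch.rand(256, 1) * 120                       # toy RUL targets
eps, lr = 0.05, 0.01
delta = torch.zeros(30, 14, requires_grad=True)      # one perturbation for all inputs

for _ in range(100):
    loss = nn.functional.mse_loss(model(windows + delta), rul)
    loss.backward()
    with torch.no_grad():
        delta += lr * delta.grad.sign()              # ascend: increase model error
        delta.clamp_(-eps, eps)                      # keep the attack imperceptible
        delta.grad.zero_()

print(delta.abs().max().item() <= eps)               # True
```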
Recursively Summarizing Books with Human Feedback ; A major challenge for scaling machine learning is training models to perform tasks that are very difficult or time-consuming for humans to evaluate. We present progress on this problem for the task of abstractive summarization of entire fiction novels. Our method combines learning from human feedback with recursive task decomposition: we use models trained on smaller parts of the task to assist humans in giving feedback on the broader task. We collect a large volume of demonstrations and comparisons from human labelers and fine-tune GPT-3 using behavioral cloning and reward modeling to do summarization recursively. At inference time, the model first summarizes small sections of the book and then recursively summarizes these summaries to produce a summary of the entire book. Our human labelers are able to supervise and evaluate the models quickly, despite not having read the entire books themselves. Our resulting model generates sensible summaries of entire books, even matching the quality of human-written summaries in a few cases ($\sim$5% of books). We achieve state-of-the-art results on the recent BookSum dataset for book-length summarization. A zero-shot question-answering model using these summaries achieves state-of-the-art results on the challenging NarrativeQA benchmark for answering questions about books and movie scripts. We release datasets of samples from our model.
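Recursive task decomposition itself is easy to sketch: summarize fixed-size chunks, concatenate the summaries, and recurse until the text fits in one pass. In the sketch below, `summarize` is a hypothetical stand-in for a call to a fine-tuned model.

```python
# Schematic of recursive task decomposition for book summarization.
# `summarize` is a hypothetical placeholder for a fine-tuned model call.
def summarize(text: str) -> str:
    # Placeholder: a real system would call a fine-tuned GPT-3-style model.
    return text[: max(1, len(text) // 4)]

def recursive_summary(text: str, chunk_size: int = 2000) -> str:
    if len(text) <= chunk_size:
        return summarize(text)
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    merged = " ".join(summarize(c) for c in chunks)
    return recursive_summary(merged, chunk_size)     # recurse on the summaries

book = "word " * 20000
print(len(recursive_summary(book)))                  # much shorter than the book
```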
Integrating Pattern- and Fact-based Fake News Detection via Model Preference Learning ; To defend against fake news, researchers have developed various methods based on texts. These methods can be grouped as: (1) pattern-based methods, which focus on shared patterns among fake news posts rather than the claim itself; and (2) fact-based methods, which retrieve evidence from external sources to verify the claim's veracity without considering patterns. The two groups of methods, which have different preferences for textual clues, actually play complementary roles in detecting fake news. However, few works consider their integration. In this paper, we study the problem of integrating pattern- and fact-based models into one framework via modeling their preference differences, i.e., making the pattern- and fact-based models focus on their respective preferred parts of a post and mitigating interference from non-preferred parts as much as possible. To this end, we build a Preference-aware Fake News Detection Framework (Pref-FEND), which learns the respective preferences of pattern- and fact-based models for joint detection. We first design a heterogeneous dynamic graph convolutional network to generate the respective preference maps, and then use these maps to guide the joint learning of the pattern- and fact-based models for final prediction. Experiments on two real-world datasets show that Pref-FEND effectively captures model preferences and improves the performance of models based on patterns, facts, or both.
Bayesian Model-Averaged Meta-Analysis in Medicine ; We outline a Bayesian model-averaged meta-analysis for standardized mean differences in order to quantify evidence for both treatment effectiveness ($\delta$) and across-study heterogeneity ($\tau$). We construct four competing models by orthogonally combining two present/absent assumptions, one for the treatment effect and one for across-study heterogeneity. To inform the choice of prior distributions for the model parameters, we used 50% of the Cochrane Database of Systematic Reviews to specify rival prior distributions for $\delta$ and $\tau$. The relative predictive performance of the competing models and rival prior distributions was assessed using the remaining 50% of the Cochrane Database. On average, $\mathcal{H}_1^r$, the model that assumes the presence of a treatment effect as well as across-study heterogeneity, outpredicted the other models, but not by a large margin. Within $\mathcal{H}_1^r$, predictive adequacy was relatively constant across the rival prior distributions. We propose specific empirical prior distributions, both for the field in general and for each of 46 specific medical subdisciplines. An example from oral health demonstrates how the proposed prior distributions can be used to conduct a Bayesian model-averaged meta-analysis in the open-source software R and JASP. The preregistered analysis plan is available at https://osf.io/zs3df.
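For readers unfamiliar with model averaging, the posterior for the treatment effect combines all four models via the standard BMA identity (our notation, matching the abstract's four-model setup):

```latex
% Standard Bayesian model-averaging identities over the four candidate
% models M_1, ..., M_4 described in the abstract (notation is ours):
p(\delta \mid y) \;=\; \sum_{k=1}^{4} p(\delta \mid y, \mathcal{M}_k)\,
  p(\mathcal{M}_k \mid y),
\qquad
p(\mathcal{M}_k \mid y) \;=\;
  \frac{p(y \mid \mathcal{M}_k)\, p(\mathcal{M}_k)}
       {\sum_{j=1}^{4} p(y \mid \mathcal{M}_j)\, p(\mathcal{M}_j)} .
```

Models that fix $\delta = 0$ contribute a point mass at zero, so the averaged posterior naturally reflects uncertainty about whether an effect exists at all.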
On an Extension of the Brownian Bridge with Applications in Finance ; The main purpose of this paper is to extend the information-based asset-pricing framework of Brody-Hughston-Macrina to a more general setup. We include a wider class of models for market information and, in contrast to the original paper, we consider a model in which a credit-risky asset is modelled in the presence of a default time. Instead of using only a Brownian bridge as noise, we consider another important type of noise. We model the flow of information about a default bond, with given random repayments at a predetermined maturity date, by the so-called market information process; this process is the sum of two terms, namely the cash flow induced by the repayment at maturity and a noise term, constructed by adding to a Brownian bridge, with length equal to the maturity date, a drift, linear in time, multiplied by a time-changed Lévy process. In this model the information concerning the random cash flow is modelled explicitly, but the default time of the company is not, since the payment is contractually set to take place at maturity only. We suggest a model in which the cash flow and the time of bankruptcy are both modelled, which covers contracts (e.g., defaultable bonds) to be paid at hit. From a theoretical point of view, this paper deals with conditions which allow one to keep the Markov property when we replace the pinning point of the Brownian bridge by a process. For this purpose, we first study the basic mathematical properties of a bridge between two Brownian motions.
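One plausible reading of the extended information process described above is the following; all symbols are our notation, and the exact construction should be taken from the paper itself:

```latex
% Hedged sketch of the extended market information process: \sigma is an
% information-flow rate, H_T the random cash flow at maturity T, \beta_t a
% Brownian bridge on [0, T], c a drift coefficient, and L_{\theta(t)} a
% time-changed Levy process. This is one reading of the abstract, not a
% quotation of the paper's definition.
\xi_t \;=\; \sigma\, t\, H_T \;+\; \beta_t \;+\; c\, t\, L_{\theta(t)},
\qquad 0 \le t \le T .
```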
Evaluation of Two Complementary Modeling Approaches for Fiber-Reinforced Soft Actuators ; Roboticists have increasingly turned to soft robots in recent years. Unfortunately, identifying appropriate models for the complete analysis and investigation of soft robots for design and control purposes can be problematic. This paper seeks to address this challenge by proposing two complementary modeling techniques for a particular type of soft robotic actuator known as a Fiber-Reinforced Elastomeric Enclosure (FREE). We propose that researchers can leverage multiple models to fill gaps in the understanding of the behavior of soft robots. We present and evaluate both a dynamic lumped-parameter model and a finite element model to extend understanding of the practicability of FREEs in soft robotic applications. The results with the lumped-parameter model demonstrate that it predicts the actual rotational motion of a FREE with at most 4% error when a closed-loop controller is embedded in the system. Additionally, finite element analysis was used to study FREE design parameters as well as the workspace achieved with a module comprised of multiple FREEs. Our finite element results indicate that variations in the material properties of the elastic enclosure of a FREE are more significant than variations in fiber properties. Finally, finite element results show that a 30-degree difference in winding angle dramatically alters the shape of the workspace generated by four FREEs assembled into a module. We conclude with comments on the relative advantages and limitations of lumped-parameter and finite element models of FREEs and FREE modules in providing useful insights into their behavior.
Modelling Big, Heterogeneous, Non-Gaussian Spatial and Spatio-Temporal Data using FRK ; Non-Gaussian spatial and spatio-temporal data are becoming increasingly prevalent, and their analysis is needed in a variety of disciplines. FRK is an R package for spatial/spatio-temporal modelling and prediction with very large data sets that, to date, has only supported linear process models and Gaussian data models. In this paper, we describe a major upgrade to FRK that allows for non-Gaussian data to be analysed in a generalised linear mixed model framework. These vastly more general spatial and spatio-temporal models are fitted using the Laplace approximation via the software TMB. The existing functionality of FRK is retained with this advance into non-Gaussian models; in particular, it allows for automatic basis-function construction, it can handle both point-referenced and areal data simultaneously, and it can predict process values at any spatial support from these data. This new version of FRK also allows for the use of a large number of basis functions when modelling the spatial process, and is thus often able to achieve more accurate predictions than previous versions of the package in a Gaussian setting. We demonstrate innovative features in this new version of FRK, highlight its ease of use, and compare it to alternative packages using both simulated and real data sets.