Distribution-Free Location-Scale Regression ; We introduce a generalized additive model for location, scale, and shape (GAMLSS) next of kin aiming at distribution-free and parsimonious regression modelling for arbitrary outcomes. We replace the strict parametric distribution formulating such a model by a transformation function, which in turn is estimated from data. Doing so not only makes the model distribution-free but also allows limiting the number of linear or smooth model terms to a pair of location-scale predictor functions. We derive the likelihood for continuous, discrete, and randomly censored observations, along with corresponding score functions. A plethora of existing algorithms is leveraged for model estimation, including constrained maximum-likelihood, the original GAMLSS algorithm, and transformation trees. Parameter interpretability in the resulting models is closely connected to model selection. We propose the application of a novel best-subset selection procedure to achieve especially simple ways of interpretation. All techniques are motivated and illustrated by a collection of applications from different domains, including crossing and partial proportional hazards, complex count regression, non-linear ordinal regression, and growth curves. All analyses are reproducible with the help of the "tram" add-on package to the R system for statistical computing and graphics.
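To make the transformation-model idea concrete, here is a minimal numerical sketch in Python (not the R "tram" package referenced above): the conditional distribution is modelled as F(y|x) = Phi((h(y) - mu(x)) / sigma(x)), with a monotone transformation h estimated jointly with the location and scale predictors. The cubic monotone basis, toy data, and optimiser are illustrative assumptions, not the package's implementation.

```python
# Minimal sketch of a distribution-free location-scale transformation model:
# F(y|x) = Phi( (h(y) - mu(x)) / sigma(x) ), with h estimated from the data.
# Basis for h, toy data, and optimiser are illustrative choices, not tram's.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = np.exp(0.5 * x + 0.3 * rng.standard_normal(500))   # toy skewed outcome

def negloglik(p):
    c0, c1, c2, b0, b1, g0, g1 = p
    a1, a3 = np.exp(c1), np.exp(c2)          # positive coefficients keep h monotone
    h = c0 + a1 * y + a3 * y**3              # transformation h(y)
    dh = a1 + 3 * a3 * y**2                  # h'(y) > 0 by construction
    mu = b0 + b1 * x                         # location predictor
    sigma = np.exp(g0 + g1 * x)              # scale predictor
    z = (h - mu) / sigma
    # log-density of a continuous observation: log phi(z) - log sigma + log h'(y)
    return -np.sum(norm.logpdf(z) - np.log(sigma) + np.log(dh))

fit = minimize(negloglik, np.zeros(7), method="BFGS")
print("converged:", fit.success, "negative log-likelihood:", round(fit.fun, 2))
```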
E Pluribus Unum Interpretable Convolutional Neural Networks ; The adoption of Convolutional Neural Network (CNN) models in high-stake domains is hindered by their inability to meet society's demand for transparency in decision-making. So far, a growing number of methodologies have emerged for developing CNN models that are interpretable by design. However, such models are not capable of providing interpretations in accordance with human perception, while maintaining competent performance. In this paper, we tackle these challenges with a novel, general framework for instantiating inherently interpretable CNN models, named E Pluribus Unum Interpretable CNN (EPU-CNN). An EPU-CNN model consists of CNN sub-networks, each of which receives a different representation of an input image expressing a perceptual feature, such as color or texture. The output of an EPU-CNN model consists of the classification prediction and its interpretation, in terms of relative contributions of perceptual features in different regions of the input image. EPU-CNN models have been extensively evaluated on various publicly available datasets, as well as a contributed benchmark dataset. Medical datasets are used to demonstrate the applicability of EPU-CNN for risk-sensitive decisions in medicine. The experimental results indicate that EPU-CNN models can achieve a comparable or better classification performance than other CNN architectures while providing humanly perceivable interpretations.
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment ; The size of deep learning models in artificial intelligence (AI) software is increasing rapidly, which hinders the large-scale deployment on resource-restricted devices (e.g., smartphones). To mitigate this issue, AI software compression plays a crucial role, which aims to compress model size while keeping high performance. However, the intrinsic defects in the big model may be inherited by the compressed one. Such defects may be easily leveraged by attackers, since the compressed models are usually deployed in a large number of devices without adequate protection. In this paper, we try to address the safe model compression problem from a safety-performance co-optimization perspective. Specifically, inspired by the test-driven development (TDD) paradigm in software engineering, we propose a test-driven sparse training framework called SafeCompress. By simulating the attack mechanism as the safety test, SafeCompress can automatically compress a big model to a small one following the dynamic sparse training paradigm. Further, considering a representative attack, i.e., membership inference attack (MIA), we develop a concrete safe model compression mechanism, called MIA-SafeCompress. Extensive experiments are conducted to evaluate MIA-SafeCompress on five datasets for both computer vision and natural language processing tasks. The results verify the effectiveness and generalization of our method. We also discuss how to adapt SafeCompress to other attacks besides MIA, demonstrating the flexibility of SafeCompress.
Gibbs Phenomenon Suppression in PDE-Based Statistical Spatio-Temporal Models ; A class of physics-informed spatio-temporal models has recently been proposed for modeling spatio-temporal processes governed by advection-diffusion equations. The central idea is to approximate the process by a truncated Fourier series and let the governing physics determine the dynamics of the spectral coefficients. However, because many spatio-temporal processes in real applications are non-periodic with boundary discontinuities, the well-known Gibbs phenomenon and ripple artifact almost always exist in the outputs generated by such models due to truncation of the Fourier series. Hence, the key contribution of this paper is to propose a physics-informed spatio-temporal modeling approach that significantly suppresses the Gibbs phenomenon when modeling spatio-temporal advection-diffusion processes. The proposed approach starts with a data flipping procedure for the process respectively along the horizontal and vertical directions, as if we were unfolding a piece of paper that has been folded twice along the two directions. Because the flipped process becomes spatially periodic and has a complete waveform without any boundary discontinuities, the Gibbs phenomenon disappears even if the Fourier series is truncated. Then, for the flipped process and given the Partial Differential Equation (PDE) that governs the process, this paper extends an existing PDE-based spatio-temporal model by obtaining the new temporal dynamics of the spectral coefficients, while maintaining the physical interpretation of the flipped process. Numerical investigations based on a real dataset have been performed to demonstrate the advantages of the proposed approach. It is found that the proposed approach effectively suppresses the Gibbs phenomenon and significantly reduces the ripple artifact in modeling spatio-temporal advection-diffusion processes. Computer code is available on GitHub.
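A generic illustration of the flipping idea (not the paper's full PDE-based spectral model): mirroring a non-periodic field along both axes makes the extended field periodic and continuous at the boundaries, so truncating its Fourier series produces far less ringing. The field, grid, and truncation level below are arbitrary.

```python
# "Unfold the paper twice": even reflection of a non-periodic 2-D field along x
# and y, then truncate the Fourier series of the extended field. Mirroring demo
# only; the paper's spectral-coefficient dynamics are not reproduced here.
import numpy as np

ny, nx = 64, 64
yy, xx = np.meshgrid(np.linspace(0, 1, ny), np.linspace(0, 1, nx), indexing="ij")
field = xx + 2 * yy                          # non-periodic: boundary values do not match

flip_x = np.concatenate([field, field[:, ::-1]], axis=1)   # reflect along x
flipped = np.concatenate([flip_x, flip_x[::-1, :]], axis=0)  # then along y -> periodic

def truncate_fourier(u, keep):
    """Keep only the lowest `keep` wavenumbers in each direction."""
    U = np.fft.fft2(u)
    mask = np.zeros_like(U)
    mask[:keep, :keep] = 1; mask[:keep, -keep:] = 1
    mask[-keep:, :keep] = 1; mask[-keep:, -keep:] = 1
    return np.real(np.fft.ifft2(U * mask))

direct = truncate_fourier(field, keep=8)                 # Gibbs ringing at the edges
mirrored = truncate_fourier(flipped, keep=8)[:ny, :nx]   # ringing largely suppressed
print("max error, direct :", np.abs(direct - field).max())
print("max error, flipped:", np.abs(mirrored - field).max())
```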
Multimodal foundation models are better simulators of the human brain ; Multimodal learning, especially large-scale multimodal pretraining, has developed rapidly over the past few years and led to the greatest advances in artificial intelligence (AI). Despite its effectiveness, understanding the underlying mechanism of multimodal pretraining models still remains a grand challenge. Revealing the explainability of such models is likely to enable breakthroughs of novel learning paradigms in the AI field. To this end, given the multimodal nature of the human brain, we propose to explore the explainability of multimodal learning models with the aid of non-invasive brain imaging technologies such as functional magnetic resonance imaging (fMRI). Concretely, we first present a newly-designed multimodal foundation model pretrained on 15 million image-text pairs, which has shown strong multimodal understanding and generalization abilities in a variety of cognitive downstream tasks. Further, from the perspective of neural encoding based on our foundation model, we find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones. Particularly, we identify a number of brain regions where multimodally-trained encoders demonstrate better neural encoding performance. This is consistent with the findings in existing studies on exploring brain multisensory integration. Therefore, we believe that multimodal foundation models are more suitable tools for neuroscientists to study the multimodal signal processing mechanisms in the human brain. Our findings also demonstrate the potential of multimodal foundation models as ideal computational simulators to promote both AI-for-brain and brain-for-AI research.
Packet Forwarding with a Locally Bursty Adversary ; We consider packet forwarding in the adversarial queueing theory (AQT) model introduced by Borodin et al. We introduce a refinement of the AQT $(\rho, \sigma)$-bounded adversary, which we call a locally bursty adversary (LBA), that parameterizes injection patterns jointly by edge utilization and packet origin. For constant (O(1)) parameters, the LBA model is strictly more permissive than the $(\rho, \sigma)$ model. For example, there are injection patterns in the LBA model with constant parameters that can only be realized as $(\rho, \sigma)$-bounded injection patterns with $\rho \sigma = \Omega(n)$, where $n$ is the network size. We show that the LBA model, unlike the $(\rho, \sigma)$ model, is closed under packet bundling and discretization operations. Thus, the LBA model allows one to reduce the study of general uniform capacity networks and inhomogeneous packet sizes to unit capacity networks with homogeneous packets. On the algorithmic side, we focus on information gathering networks, i.e., networks in which all packets share a common destination, and the union of packet routes forms a tree. We show that the Odd-Even Downhill (OED) forwarding protocol, described independently by Dobrev et al. and Patt-Shamir and Rosenbaum, achieves buffer space usage of $O(\log n)$ against all LBAs with constant parameters. OED is a local protocol, but we show that the upper bound is tight even when compared to centralized protocols. Our lower bound for the LBA model is in contrast to the $(\rho, \sigma)$-model, where centralized protocols can achieve worst-case buffer space usage $O(1)$ for $\rho, \sigma = O(1)$, while the $O(\log n)$ upper bound for OED is optimal only for local protocols.
Learning to predict test effectiveness ; The high cost of testing can be dramatically reduced, provided that the coverability, as an inherent feature of the code under test, is predictable. This article offers a machine learning model to predict the extent to which a test could cover a class in terms of a new metric called Coverageability. The prediction model consists of an ensemble of four regression models. The learning samples consist of feature vectors, where features are source code metrics computed for a class. The samples are labeled by the Coverageability values computed for their corresponding classes. We offer a mathematical model to evaluate test effectiveness in terms of size and coverage of the test suite generated automatically for each class. We extend the size of the feature space by introducing a new approach to defining sub-metrics in terms of existing source code metrics. Using feature importance analysis on the learned prediction models, we sort source code metrics in the order of their impact on test effectiveness; as a result, we found class strict cyclomatic complexity to be the most influential source code metric. Our experiments with the prediction models on a large corpus of Java projects containing about 23,000 classes demonstrate a Mean Absolute Error (MAE) of 0.032, a Mean Squared Error (MSE) of 0.004, and an R2-score of 0.855. Compared with the state-of-the-art coverage prediction models, our models improve MAE, MSE, and R2-score by 5.78%, 2.84%, and 20.71%, respectively.
I Know What You Do Not Know: Knowledge Graph Embedding via Co-distillation Learning ; Knowledge graph (KG) embedding seeks to learn vector representations for entities and relations. Conventional models reason over graph structures, but they suffer from the issues of graph incompleteness and long-tail entities. Recent studies have used pre-trained language models to learn embeddings based on the textual information of entities and relations, but they cannot take advantage of graph structures. In this paper, we show empirically that these two kinds of features are complementary for KG embedding. To this end, we propose CoLE, a Co-distillation Learning method for KG Embedding that exploits the complementarity of graph structures and text information. Its graph embedding model employs Transformer to reconstruct the representation of an entity from its neighborhood subgraph. Its text embedding model uses a pre-trained language model to generate entity representations from the soft prompts of their names, descriptions, and relational neighbors. To let the two models promote each other, we propose co-distillation learning that allows them to distill selective knowledge from each other's prediction logits. In our co-distillation learning, each model serves as both a teacher and a student. Experiments on benchmark datasets demonstrate that the two models outperform their related baselines, and the ensemble method CoLE with co-distillation learning advances the state-of-the-art of KG embedding.
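A generic sketch of mutual distillation on prediction logits, where each model acts as both teacher and student; CoLE's selective-knowledge weighting over the logits is not reproduced, and the shapes below are placeholders.

```python
# Generic co-distillation loss: two models distill from each other's prediction
# logits over candidate entities. Illustrative only, not the CoLE implementation.
import torch
import torch.nn.functional as F

def codistill_loss(logits_a, logits_b, labels, temperature=2.0, alpha=0.5):
    ce_a = F.cross_entropy(logits_a, labels)
    ce_b = F.cross_entropy(logits_b, labels)
    # Detach the "teacher" side so each KL term only updates the student side.
    kl_a = F.kl_div(F.log_softmax(logits_a / temperature, dim=-1),
                    F.softmax(logits_b.detach() / temperature, dim=-1),
                    reduction="batchmean") * temperature ** 2
    kl_b = F.kl_div(F.log_softmax(logits_b / temperature, dim=-1),
                    F.softmax(logits_a.detach() / temperature, dim=-1),
                    reduction="batchmean") * temperature ** 2
    return (1 - alpha) * (ce_a + ce_b) + alpha * (kl_a + kl_b)

# Toy usage: random scores over 100 candidate entities from the two models.
logits_graph = torch.randn(8, 100, requires_grad=True)
logits_text = torch.randn(8, 100, requires_grad=True)
labels = torch.randint(0, 100, (8,))
codistill_loss(logits_graph, logits_text, labels).backward()
```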
Towards standard imsets for maximal ancestral graphs ; The imsets of Studený (2005) are an algebraic method for representing conditional independence models. They have many attractive properties when applied to such models, and they are particularly nice for working with directed acyclic graph (DAG) models. In particular, the 'standard' imset for a DAG is in one-to-one correspondence with the independences it induces, and hence is a label for its Markov equivalence class. We first present a proposed extension of standard imsets to maximal ancestral graph (MAG) models, using the parameterizing set representation of Hu and Evans (2020). In these cases the imset provides a scoring criterion by measuring the discrepancy for a list of independences that define the model; this gives an alternative to the usual BIC score that is also consistent, and much easier to compute. We also show that, of the independence models that do represent the MAG, the imset we give is minimal. Unfortunately, for some graphs the representation does not represent all the independences in the model, and in certain cases does not represent any at all. For these general MAGs, we refine the reduced ordered local Markov property of Richardson (2003) by a novel graphical tool called power DAGs, and this results in an imset that induces the correct model and which, under a mild condition, can be constructed in polynomial time.
Chaotic heteroclinic networks as models of switching behavior in biological systems ; Key features of biological activity can often be captured by transitions between a finite number of semi-stable states that correspond to behaviors or decisions. We present here a broad class of dynamical systems that are ideal for modeling such activity. The models we propose are chaotic heteroclinic networks with nontrivial intersections of stable and unstable manifolds. Due to the sensitive dependence on initial conditions, transitions between states are seemingly random. Dwell times, exit distributions, and other transition statistics can be built into the model through geometric design and can be controlled by tunable parameters. To test our model's ability to simulate realistic biological phenomena, we turned to one of the most studied organisms, C. elegans, well known for its limited behavioral states. We reconstructed experimental data from two laboratories, demonstrating the model's ability to quantitatively reproduce dwell times and transition statistics under a variety of conditions. Stochastic switching between dominant states in complex dynamical systems has been extensively studied and is often modeled as Markov chains. As an alternative, we propose here a new paradigm, namely chaotic heteroclinic networks generated by deterministic rules without the necessity for noise. Chaotic heteroclinic networks can be used to model systems with arbitrary architecture and size without a commensurate increase in phase dimension. They are highly flexible and able to capture a wide range of transition characteristics that can be adjusted through control parameters.
Modeling the induction, thrust, and power of a yaw-misaligned actuator disk ; Collective wind farm flow control, where wind turbines are operated in an individually suboptimal strategy to benefit the aggregate farm, has demonstrated potential to reduce wake interactions and increase farm energy production. However, existing wake models used for flow control often estimate the thrust and power of yaw-misaligned turbines using simplified empirical expressions which require expensive calibration data and do not accurately extrapolate between turbine models. The thrust, wake velocity deficit, wake deflection, and power of a yawed wind turbine depend on its induced velocity. Here, we extend classical one-dimensional momentum theory to model the induction of a yaw-misaligned actuator disk. Analytical expressions for the induction, thrust, initial wake velocities, and power are developed as a function of the yaw angle and thrust coefficient. The analytical model is validated against large eddy simulations of a yawed actuator disk. Because the induction depends on the yaw and thrust coefficient, the power generated by a yawed actuator disk will always be greater than a $\cos^3(\gamma)$ model suggests, where $\gamma$ is the yaw angle. The power lost by yaw depends on the thrust coefficient. An analytical expression for the thrust coefficient that maximizes power, depending on the yaw, is developed and validated. Finally, using the developed induction model as an initial condition for a turbulent far-wake model, we demonstrate how combining wake steering and thrust (induction) control can increase array power, compared to either independent steering or induction control, due to the joint dependence of the induction on the thrust coefficient and yaw angle.
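For reference, the classical yaw-aligned one-dimensional momentum relations and the simple cos^3(gamma) power-loss baseline mentioned above can be sketched as follows; this is the baseline the yaw-aware induction model refines, not the paper's new analytical expressions.

```python
# Classical (unyawed) 1-D momentum theory plus the empirical cos^3(gamma)
# power-loss baseline. Reference point only, not the paper's yawed-disk model.
import numpy as np

def induction_from_ct(ct):
    """Axial induction factor a from thrust coefficient CT, using CT = 4a(1-a)."""
    return 0.5 * (1.0 - np.sqrt(1.0 - ct))

def power_coefficient(ct):
    """CP = 4a(1-a)^2 for an unyawed actuator disk."""
    a = induction_from_ct(ct)
    return 4.0 * a * (1.0 - a) ** 2

def cos3_baseline(cp_aligned, gamma_deg):
    """Empirical baseline: P(gamma) = P(0) * cos^3(gamma)."""
    return cp_aligned * np.cos(np.radians(gamma_deg)) ** 3

ct = 0.8
print("a  =", round(induction_from_ct(ct), 3))
print("CP =", round(power_coefficient(ct), 3))
print("cos^3 baseline at 25 deg yaw:", round(cos3_baseline(power_coefficient(ct), 25.0), 3))
```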
Joint Speaker Encoder and Neural Back-end Model for Fully End-to-End Automatic Speaker Verification with Multiple Enrollment Utterances ; Conventional automatic speaker verification systems can usually be decomposed into a front-end model such as a time delay neural network (TDNN) for extracting speaker embeddings and a back-end model such as statistics-based probabilistic linear discriminant analysis (PLDA) or neural network-based neural PLDA (NPLDA) for similarity scoring. However, the sequential optimization of the front-end and back-end models may lead to a local minimum, which theoretically prevents the whole system from achieving the best optimization. Although some methods have been proposed for jointly optimizing the two models, such as the generalized end-to-end (GE2E) model and NPLDA E2E model, all of these methods are designed for use with a single enrollment utterance. In this paper, we propose a new E2E joint method for speaker verification especially designed for the practical case of multiple enrollment utterances. In order to leverage the intra-relationship among multiple enrollment utterances, our model comes equipped with frame-level and utterance-level attention mechanisms. We also utilize several data augmentation techniques, including conventional noise augmentation using the MUSAN and RIRs datasets and a unique speaker embedding-level mixup strategy for better optimization.
How Slowly can the Early Universe Expand? ; When the expansion of the universe is dominated by a perfect fluid with equation of state parameter $w$ and a sound speed $c_s$ satisfying $w \le c_s^2 \le 1$, the Hubble parameter $H$ and time $t$ satisfy the bound $Ht \ge 1/3$. There has been recent interest in ultra-slow expansion laws with $Ht < 1/3$, sometimes described as fast expanding models. We examine various models that can produce ultra-slow expansion: scalar fields with negative potentials, barotropic fluids, braneworld models, or a loitering phase in the early universe. Scalar field models and barotropic models for ultra-slow expansion are unstable to evolution toward $w = 1$ or $w \rightarrow \infty$ in the former case and $w \rightarrow \infty$ in the latter case. Braneworld models can yield ultra-slow expansion but require an expansion law beyond the standard Friedmann equation. Loitering early universe models can produce a quasi-static expansion phase in the early universe but require an exotic negative-density component. These results suggest that appeals to an ultra-slow expansion phase in the early universe should be approached with some caution, although the loitering early universe may be worthy of further investigation. These results do not apply to ultra-slow contracting models.
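The quoted bound follows from the standard single-fluid solution of the flat Friedmann equations with constant $w > -1$, sketched below; the last step simply applies the stated condition $w \le c_s^2 \le 1$.

```latex
% Power-law expansion for a single fluid with constant equation of state w:
\[
  a(t) \propto t^{2/[3(1+w)]}
  \quad\Longrightarrow\quad
  Ht = \frac{\dot a}{a}\, t = \frac{2}{3(1+w)} .
\]
% Imposing the stability/causality condition w \le c_s^2 \le 1 caps w at 1, so
\[
  Ht = \frac{2}{3(1+w)} \;\ge\; \frac{2}{3(1+1)} \;=\; \frac{1}{3},
\]
% and sustained expansion with Ht < 1/3 requires an effective w > 1.
```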
SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning ; Recent years have witnessed significant success in Self-Supervised Learning (SSL), which facilitates various downstream tasks. However, attackers may steal such SSL models and commercialize them for profit, making it crucial to protect their Intellectual Property (IP). Most existing IP protection solutions are designed for supervised learning models and cannot be used directly since they require that the models' downstream tasks and target labels be known and available during watermark embedding, which is not always possible in the domain of SSL. To address such a problem, especially when downstream tasks are diverse and unknown during watermark embedding, we propose a novel black-box watermarking solution, named SSL-WM, for protecting the ownership of SSL models. SSL-WM maps watermarked inputs by the watermarked encoders into an invariant representation space, which causes any downstream classifiers to produce expected behavior, thus allowing the detection of embedded watermarks. We evaluate SSL-WM on numerous tasks, such as Computer Vision (CV) and Natural Language Processing (NLP), using different SSL models, including contrastive-based and generative-based. Experimental results demonstrate that SSL-WM can effectively verify the ownership of stolen SSL models in various downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning and pruning attacks. Lastly, SSL-WM can also evade detection from evaluated watermark detection approaches, demonstrating its promising application in protecting the IP of SSL models.
TruVR: Trustworthy Cybersickness Detection using Explainable Machine Learning ; Cybersickness can be characterized by nausea, vertigo, headache, eye strain, and other discomforts when using virtual reality (VR) systems. The previously reported machine learning (ML) and deep learning (DL) algorithms for detecting (classification) and predicting (regression) VR cybersickness use black-box models; thus, they lack explainability. Moreover, VR sensors generate a massive amount of data, resulting in complex and large models. Therefore, having inherent explainability in cybersickness detection models can significantly improve the model's trustworthiness and provide insight into why and how the ML/DL model arrived at a specific decision. To address this issue, we present three explainable machine learning (xML) models to detect and predict cybersickness: (1) explainable boosting machine (EBM), (2) decision tree (DT), and (3) logistic regression (LR). We evaluate xML-based models with publicly available physiological and gameplay datasets for cybersickness. The results show that the EBM can detect cybersickness with an accuracy of 99.75% and 94.10% for the physiological and gameplay datasets, respectively. On the other hand, when predicting cybersickness, EBM resulted in a Root Mean Square Error (RMSE) of 0.071 for the physiological dataset and 0.27 for the gameplay dataset. Furthermore, the EBM-based global explanation reveals exposure length, rotation, and acceleration as key features causing cybersickness in the gameplay dataset. In contrast, galvanic skin responses and heart rate are most significant in the physiological dataset. Our results also suggest that EBM-based local explanation can identify cybersickness-causing factors for individual samples. We believe the proposed xML-based cybersickness detection method can help future researchers understand, analyze, and design simpler cybersickness detection and reduction models.
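A sketch of the kind of glass-box pipeline described, assuming the open-source interpret package's ExplainableBoostingClassifier API; the feature names, labels, and data are placeholders, not the study's datasets.

```python
# Fit an Explainable Boosting Machine on toy "physiological" features and read
# off a global explanation ranking the features. Illustrative sketch only.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "heart_rate": rng.normal(75, 10, 500),
    "galvanic_skin_response": rng.normal(2.0, 0.5, 500),
    "exposure_length_s": rng.uniform(0, 600, 500),
})
y = (X["heart_rate"] + 20 * X["galvanic_skin_response"] > 115).astype(int)  # toy label

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global explanation: per-term importance scores (which features matter most).
global_exp = ebm.explain_global()
print(dict(zip(global_exp.data()["names"], global_exp.data()["scores"])))
```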
DOMINO: Domain-aware Model Calibration in Medical Image Segmentation ; Model calibration measures the agreement between the predicted probability estimates and the true correctness likelihood. Proper model calibration is vital for high-risk applications. Unfortunately, modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability. Medical image segmentation particularly suffers from this due to the natural uncertainty of tissue boundaries. This is exacerbated by their loss functions, which favor overconfidence in the majority classes. We address these challenges with DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels. Our experiments demonstrate that our DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation. Our results show that our method can consistently achieve better calibration, higher accuracy, and faster inference times than these methods, especially on rarer classes. This performance is attributed to our domain-aware regularization to inform semantic model calibration. These findings show the importance of semantic ties between class labels in building confidence in deep learning models. The framework has the potential to improve the trustworthiness and reliability of generic medical image segmentation models. The code for this article is available at https://github.com/lab-smile/DOMINO.
Cosmic jerk parameter in symmetric teleparallel cosmology ; In this paper, we have examined the recently proposed modified symmetric teleparallel gravity, in which the gravitational Lagrangian is given by an arbitrary function of the non-metricity scalar $Q$. We have considered a constant jerk parameter to express the Hubble rate. Moreover, we have used 31 points of OHD datasets and 1701 points of Pantheon datasets to constrain our model parameters by means of the Markov Chain Monte Carlo analysis. The mean values and the best fit obtained give a Hubble rate and deceleration parameter consistent with the observational values. In order to study the current accelerated expansion scenario of the Universe with the presence of the cosmological fluid as a perfect fluid, we have considered two forms of teleparallel gravity. We have studied the obtained field equations with the proposed forms of $f(Q)$ models, specifically the linear $f(Q) = \alpha Q + \beta$ and non-linear $f(Q) = Q + mQ^n$ models. Next, we have discussed the physical behavior of cosmological parameters such as energy density, pressure, EoS parameter, and deceleration parameter for both models. To ensure the validity of our proposed cosmological models, we have checked all energy conditions. The properties of these parameters confirm that our models describe the current acceleration of the expansion of the Universe. This result is also corroborated by the energy conditions criteria. Finally, the EoS parameter for both models indicates that the cosmological fluid behaves like a quintessence dark energy model.
A hyperparameterization method for comprehensive ocean models: Advection of the image point ; Idealized and comprehensive ocean models at low resolutions cannot reproduce nominally-resolved flow structures similar to those presented in the high-resolution solution. Although there are various underlying physical reasons for this, from the dynamical system point of view all these reasons manifest themselves as a low-resolution trajectory avoiding the phase space occupied by the reference solution (the high-resolution solution projected onto the coarse grid). In order to solve this problem, a set of hyperparameterization methods has recently been proposed and successfully tested on idealized ocean models. In this work, for the first time we apply one of the hyperparameterization methods, Advection of the image point, to a comprehensive, rather than idealized, general circulation model of the North Atlantic. The results show that the hyperparameterization method significantly improves a non-eddy-resolving solution towards the reference eddy-resolving solution by reproducing both the large- and small-scale features of the Gulf Stream flow. The proposed method is much faster than even a single run of the coarse-grid ocean model, requires no modification of the model, and is easy to implement. Moreover, the method can take not only the reference solution as input data but also real measurements from different sources (drifters, weather stations, etc.), or a combination of both. All this offers great flexibility to ocean modellers working with mathematical models and/or measurements.
A First Application of Collaborative Learning in Particle Physics ; Over the last ten years, the popularity of Machine Learning (ML) has grown exponentially in all scientific fields, including particle physics. The industry has also developed new powerful tools that, imported into academia, could revolutionise research. One recent industry development that has not yet come to the attention of the particle physics community is Collaborative Learning (CL), a framework that allows training the same ML model with different datasets. This work explores the potential of CL, testing the library Colearn with neutrino physics simulation. Colearn, developed by the British Cambridge-based firm Fetch.AI, enables decentralised machine learning tasks. Being a blockchain-mediated CL system, it allows multiple stakeholders to build a shared ML model without needing to rely on a central authority. A generic Liquid Argon Time-Projection Chamber (LArTPC) has been simulated, and images produced by fictitious neutrino interactions have been used to produce several datasets. These datasets, called learners, participated successfully in training a Deep Learning (DL) Keras model using blockchain technologies in a decentralised way. This test explores the feasibility of training a single ML model using different simulation datasets coming from different research groups. In this work, we also discuss a framework that instead makes different ML models compete against each other on the same dataset. The final goal is then to train the most performant ML model across the entire scientific community for a given experiment, either using all of the datasets available or selecting the model which performs best among every model developed in the community.
The Geometry of Self-supervised Learning Models and its Impact on Transfer Learning ; Self-supervised learning (SSL) has emerged as a desirable paradigm in computer vision due to the inability of supervised models to learn representations that can generalize in domains with limited labels. The recent popularity of SSL has led to the development of several models that make use of diverse training strategies, architectures, and data augmentation policies, with no existing unified framework to study or assess their effectiveness in transfer learning. We propose a data-driven geometric strategy to analyze different SSL models using local neighborhoods in the feature space induced by each. Unlike existing approaches that consider mathematical approximations of the parameters, individual components, or optimization landscape, our work aims to explore the geometric properties of the representation manifolds learned by SSL models. Our proposed manifold graph metrics (MGMs) provide insights into the geometric similarities and differences between available SSL models, their invariances with respect to specific augmentations, and their performances on transfer learning tasks. Our key findings are twofold: (i) contrary to popular belief, the geometry of SSL models is not tied to their training paradigm (contrastive, non-contrastive, or cluster-based); (ii) we can predict the transfer learning capability for a specific model based on the geometric properties of its semantic and augmentation manifolds.
Multiscale modeling of solute diffusion in triblock copolymer membranes ; We develop a multiscale simulation model for diffusion of solutes through porous triblock copolymer membranes. The approach combines two techniques: self-consistent field theory (SCFT) to predict the structure of the self-assembled, solvated membrane, and on-lattice kinetic Monte Carlo (kMC) simulations to model diffusion of solutes. Solvation is simulated in SCFT by constraining the glassy membrane matrix while relaxing the brush-like membrane pore coating against the solvent. The kMC simulations capture the resulting solute spatial distribution and concentration-dependent local diffusivity in the polymer-coated pores; we parameterize the latter using particle-based simulations. We apply our approach to simulate solute diffusion through nonequilibrium morphologies of a model triblock copolymer, and we correlate diffusivity with structural descriptors of the morphologies. We also compare the model's predictions to alternative approaches based on simple lattice random walks and find our multiscale model to be more robust and systematic to parameterize. Our multiscale modeling approach is general and can be readily extended in the future to other chemistries, morphologies, and models for the local solute diffusivity and interactions with the membrane.
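A toy version of the on-lattice kMC ingredient: a single solute hopping on a 1-D lattice with a position-dependent hop rate derived from a local diffusivity profile. The profile and rate mapping below are illustrative stand-ins for the concentration-dependent diffusivity parameterized from particle-based simulations.

```python
# Toy on-lattice kinetic Monte Carlo: hop rates k = D(x)/a^2 vary with position,
# and waiting times are drawn from the total escape rate. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_sites, a = 200, 1.0                              # lattice size and spacing
x = np.arange(n_sites) * a
D = 1.0 + 0.8 * np.sin(2 * np.pi * x / x[-1])      # local diffusivity profile (arbitrary)
rate = D / a**2                                    # hop rate per direction

def kmc_trajectory(n_steps, start=100):
    pos, t = start, 0.0
    for _ in range(n_steps):
        total = 2 * rate[pos]                      # hop left or right
        t += rng.exponential(1.0 / total)          # kMC time increment
        pos += 1 if rng.random() < 0.5 else -1     # symmetric hop
        pos = min(max(pos, 0), n_sites - 1)        # reflecting boundaries
    return pos, t

final_positions = np.array([kmc_trajectory(5000)[0] for _ in range(200)])
print("mean displacement:", (final_positions - 100).mean())
print("mean squared displacement:", ((final_positions - 100.0) ** 2).mean())
```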
Federated Learning from Pre-Trained Models: A Contrastive Learning Approach ; Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to learn collaboratively without sharing their private data. However, excessive computation and communication demands pose challenges to current FL frameworks, especially when training large-scale models. To prevent these issues from hindering the deployment of FL systems, we propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models rather than training a large-scale model from scratch. This leads us to a more practical FL problem by considering how to capture more client-specific and class-relevant information from the pre-trained models and jointly improve each client's ability to exploit those off-the-shelf models. In this work, we design a Federated Prototype-wise Contrastive Learning (FedPCL) approach which shares knowledge across clients through their class prototypes and builds client-specific representations in a prototype-wise contrastive manner. Sharing prototypes rather than learnable model parameters allows each client to fuse the representations in a personalized way while keeping the shared knowledge in a compact form for efficient communication. We perform a thorough evaluation of the proposed FedPCL in the lightweight framework, measuring and visualizing its ability to fuse various pre-trained models on popular FL datasets.
Uncertainty-aware Perception Models for Off-road Autonomous Unmanned Ground Vehicles ; Off-road autonomous unmanned ground vehicles (UGVs) are being developed for military and commercial use to deliver crucial supplies in remote locations, help with mapping and surveillance, and assist warfighters in contested environments. Due to the complexity of off-road environments and variability in terrain, lighting conditions, and diurnal and seasonal changes, the models used to perceive the environment must handle a lot of input variability. Current datasets used to train perception models for off-road autonomous navigation lack diversity in seasons, locations, semantic classes, and time of day. We test the hypothesis that a model trained on a single dataset may not generalize to other off-road navigation datasets and new locations due to input distribution drift. Additionally, we investigate how to combine multiple datasets to train a semantic segmentation-based environment perception model, and we show that training the model to capture uncertainty can improve model performance by a significant margin. We extend the Masksembles approach for uncertainty quantification to the semantic segmentation task and compare it with Monte Carlo Dropout and standard baselines. Finally, we test the approach against data collected from a UGV platform in a new testing environment. We show that the developed perception model with uncertainty quantification can be feasibly deployed on a UGV to support online perception and navigation tasks.
Quantile-constrained Wasserstein projections for robust interpretability of numerical and machine learning models ; Robustness studies of black-box models are recognized as a necessary task for numerical models based on structural equations and predictive models learned from data. These studies must assess the model's robustness to possible misspecification of its inputs (e.g., covariate shift). The study of black-box models, through the prism of uncertainty quantification (UQ), is often based on sensitivity analysis involving a probabilistic structure imposed on the inputs, while ML models are solely constructed from observed data. Our work aims at unifying the UQ and ML interpretability approaches by providing relevant and easy-to-use tools for both paradigms. To provide a generic and understandable framework for robustness studies, we define perturbations of input information relying on quantile constraints and projections with respect to the Wasserstein distance between probability measures, while preserving their dependence structure. We show that this perturbation problem can be analytically solved. Ensuring regularity constraints by means of isotonic polynomial approximations leads to smoother perturbations, which can be more suitable in practice. Numerical experiments on real case studies, from the UQ and ML fields, highlight the computational feasibility of such studies and provide local and global insights on the robustness of black-box models to input perturbations.
Solving Seismic Wave Equations on Variable Velocity Models with Fourier Neural Operator ; In the study of subsurface seismic imaging, solving the acoustic wave equation is a pivotal component of existing models. The advancement of deep learning enables solving partial differential equations, including wave equations, by applying neural networks to identify the mapping between the inputs and the solution. This approach can be faster than traditional numerical methods when numerous instances are to be solved. Previous works that concentrate on solving the wave equation by neural networks consider either a single velocity model or multiple simple velocity models, which is restrictive in practice. Instead, inspired by the idea of operator learning, this work leverages the Fourier neural operator (FNO) to effectively learn the frequency-domain seismic wavefields under the context of variable velocity models. We also propose a new framework, the paralleled Fourier neural operator (PFNO), for efficiently training the FNO-based solver given multiple source locations and frequencies. Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complicated velocity models in the OpenFWI datasets. Furthermore, the cross-dataset generalization test verifies that PFNO adapts to out-of-distribution velocity models. Moreover, PFNO has robust performance in the presence of random noise in the labels. Finally, PFNO admits higher computational efficiency on large-scale testing datasets than the traditional finite-difference method. The aforementioned advantages endow the FNO-based solver with the potential to build powerful models for research on seismic waves.
Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus ; Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, with the hope of bypassing challenging tasks of visual modeling from screen pixels. However, view hierarchies are not always available, and are often corrupted with missing object descriptions or misaligned structure information. As a result, although the use of view hierarchies could offer short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose Spotlight, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that only takes the screenshot of the UI and a region of interest on the screen (the focus) as the input. This general architecture of Spotlight is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model establishes SoTA results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as inputs. Furthermore, we explore multi-task learning and few-shot prompting capacities of the proposed models, demonstrating promising results in the multi-task learning direction.
Solar Power Time Series Forecasting Utilising Wavelet Coefficients ; Accurate and reliable prediction of Photovoltaic (PV) power output is critical to electricity grid stability and power dispatching capabilities. However, PV power generation is highly volatile and unstable due to different reasons. The Wavelet Transform (WT) has been utilised in time series applications, such as PV power prediction, to model the stochastic volatility and reduce prediction errors. Yet the existing WT approach has a limitation in terms of time complexity: it requires reconstructing the decomposed components and modelling them separately, and thus needs more time for reconstruction, model configuration and training. The aim of this study is to improve the efficiency of applying WT by proposing a new method that uses a single simplified model. Given a time series and its WT coefficients, it trains one model with the coefficients as features and the original time series as labels. This eliminates the need for component reconstruction and for training numerous models. This work contributes to the day-ahead aggregated solar PV power time series prediction problem by proposing and comprehensively evaluating a new approach of employing WT. The proposed approach is evaluated using 17 months of aggregated solar PV power data from two real-world datasets. The evaluation includes the use of a variety of prediction models, including Linear Regression, Random Forest, Support Vector Regression, and Convolutional Neural Networks. The results indicate that using a coefficients-based strategy can give predictions that are comparable to those obtained using the components-based approach while requiring fewer models and less computational time.
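A minimal sketch of the single-model, coefficients-as-features strategy, assuming an illustrative wavelet family, window length, and regressor rather than the study's exact configuration.

```python
# For each history window, use its discrete wavelet coefficients (flattened) as
# the feature vector and the next value of the original series as the label,
# training a single model instead of one model per decomposed component.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
pv_power = np.clip(np.sin(2 * np.pi * t / 48), 0, None) + 0.05 * rng.standard_normal(t.size)

window = 96
X, y = [], []
for i in range(window, pv_power.size - 1):
    coeffs = pywt.wavedec(pv_power[i - window:i], "db4", level=3)
    X.append(np.concatenate(coeffs))          # wavelet coefficients as features
    y.append(pv_power[i])                     # original series as label
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print("one-step MAE:", round(float(mae), 4))
```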
Quark: A Gradient-Free Quantum Learning Framework for Classification Tasks ; As more practical and scalable quantum computers emerge, much attention has been focused on realizing quantum supremacy in machine learning. Existing quantum ML methods either (1) embed a classical model into a target Hamiltonian to enable quantum optimization or (2) represent a quantum model using variational quantum circuits and apply classical gradient-based optimization. The former method leverages the power of quantum optimization but only supports simple ML models, while the latter provides flexibility in model design but relies on gradient calculation, resulting in barren plateau (i.e., gradient vanishing) and frequent classical-quantum interactions. To address the limitations of existing quantum ML methods, we introduce Quark, a gradient-free quantum learning framework that optimizes quantum ML models using quantum optimization. Quark does not rely on gradient computation and therefore avoids barren plateau and frequent classical-quantum interactions. In addition, Quark can support more general ML models than prior quantum ML methods and achieves a dataset-size-independent optimization complexity. Theoretically, we prove that Quark can outperform classical gradient-based methods by reducing model query complexity for highly non-convex problems; empirically, evaluations on the Edge Detection and Tiny-MNIST tasks show that Quark can support complex ML models and significantly reduce the number of measurements needed for discovering near-optimal weights for these tasks.
An Interpretable Machine Learning Framework for Modeling High-Resolution Spectroscopic Data ; Comparison of echelle spectra to synthetic models has become a computational statistics challenge, with over ten thousand individual spectral lines affecting a typical cool star echelle spectrum. Telluric artifacts, imperfect line lists, inexact continuum placement, and inflexible models frustrate the scientific promise of these information-rich datasets. Here we debut an interpretable machine-learning framework, blasé, that addresses these and other challenges. The semi-empirical approach can be viewed as transfer learning: first pre-training models on noise-free precomputed synthetic spectral models, then learning the corrections to line depths and widths from whole-spectrum fitting to an observed spectrum. The auto-differentiable model employs back-propagation, the fundamental algorithm empowering modern Deep Learning and Neural Networks. Here, however, the 40,000 parameters symbolize physically interpretable line profile properties such as amplitude, width, location, and shape, plus radial velocity and rotational broadening. This hybrid data- and model-driven framework allows joint modeling of stellar and telluric lines simultaneously, a potentially transformative step forward for mitigating the deleterious telluric contamination in the near-infrared. The blasé approach acts as both a deconvolution tool and a semi-empirical model. The general-purpose scaffolding may be extensible to many scientific applications, including precision radial velocities, Doppler imaging, chemical abundances, and remote sensing. Its sparse-matrix architecture and GPU acceleration make blasé fast. The open-source PyTorch-based code includes tutorials, Application Programming Interface (API) documentation, and more. We show how the tool fits into the existing Python spectroscopy ecosystem, demonstrate a range of astrophysical applications, and discuss limitations and future extensions.
A Stochastic Differential Equation Model for Predator-Avoidance Fish Schooling ; This paper presents a system of stochastic differential equations (SDEs) as a mathematical model to describe the spatial-temporal dynamics of a predator-prey system in an artificial aquatic environment with schooling behavior imposed upon the associated prey. The proposed model follows the particle-like approach, where interactions among the associated units are manifested through a combination of attractive and repulsive forces analogous to the ones occurring in molecular physics. Two hunting tactics of the predator are proposed and integrated into the general model, namely the center-attacking and the nearest-attacking strategy. Emphasis is placed upon demonstrating the capacity of the proposed model in (i) discovering the predator-avoidance patterns of the schooling prey, and (ii) showing the benefit of constituting a large prey school in better escaping the predator's attack. Based on numerical simulations of the proposed model, four predator-avoidance patterns of the schooling prey are discovered, namely Split and Reunion, Split and Separate into Two Groups, Scattered, and Maintain Formation and Distance. The proposed model also successfully demonstrates the benefit of constituting a large group of schooling prey in mitigating predation risk. Such findings are in agreement with real-life observations of the natural aquatic ecosystem, hence confirming the validity and exactitude of the proposed model.
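A minimal Euler-Maruyama sketch of the ingredients described (pairwise attraction-repulsion among prey plus a predator chasing the school center, i.e., the "center-attacking" tactic); all coefficients are illustrative, not the paper's calibrated values.

```python
# Prey follow pairwise attraction-repulsion forces (exponential kernels, as in
# molecular-style models) plus a flee term, with additive noise; the predator
# chases the school center. Illustrative coefficients only.
import numpy as np

rng = np.random.default_rng(2)
n_prey, dt, steps, noise = 30, 0.01, 2000, 0.05
prey = rng.normal(0, 1, (n_prey, 2))
predator = np.array([5.0, 5.0])

def pairwise_forces(pos, c_a=1.0, l_a=2.0, c_r=2.0, l_r=0.5):
    """Return the (N, N, 2) array of pair forces; caller sums over the second axis."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(pos))     # avoid self-division
    mag = c_r * np.exp(-dist / l_r) - c_a * np.exp(-dist / l_a)  # repulsion - attraction
    return (mag / dist)[:, :, None] * diff

for _ in range(steps):
    f_school = pairwise_forces(prey).sum(axis=1)
    away = prey - predator
    f_flee = 3.0 * away / (np.linalg.norm(away, axis=1, keepdims=True) ** 2 + 1e-6)
    prey += (f_school + f_flee) * dt + noise * np.sqrt(dt) * rng.standard_normal(prey.shape)
    predator += 0.8 * (prey.mean(axis=0) - predator) * dt        # center-attacking tactic

print("school spread:", round(float(prey.std()), 3),
      " predator-to-center distance:",
      round(float(np.linalg.norm(prey.mean(axis=0) - predator)), 3))
```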
Short-term prediction of stream turbidity using surrogate data and a meta-model approach ; Many water-quality monitoring programs aim to measure turbidity to help guide effective management of waterways and catchments, yet distributing turbidity sensors throughout networks is typically cost prohibitive. To this end, we built and compared the ability of dynamic regression (ARIMA), long short-term memory neural nets (LSTM), and generalized additive models (GAM) to forecast stream turbidity one step ahead, using surrogate data from relatively low-cost in-situ sensors and publicly available databases. We iteratively trialled combinations of four surrogate covariates (rainfall, water level, air temperature and total global solar exposure), selecting a final model of each type that minimised the corrected Akaike Information Criterion. Cross-validation using a rolling time-window indicated that ARIMA, which included the rainfall and water-level covariates only, produced the most accurate predictions, followed closely by GAM, which included all four covariates. We constructed a meta-model, trained on time-series features of turbidity, to take advantage of the strengths of each model over different time points and predict the best model (that with the lowest forecast error one step prior) for each time step. The meta-model outperformed all other models, indicating that this methodology can yield high accuracy and may be a viable alternative to using measurements sourced directly from turbidity sensors where costs prohibit their deployment and maintenance, and when predicting turbidity across the short term. Our findings also indicated that temperature and light-associated variables, for example underwater illuminance, may hold promise as cost-effective, high-frequency surrogates of turbidity, especially when combined with other covariates, like rainfall, that are typically measured at coarse levels of spatial resolution.
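A compact sketch of the meta-model idea: a classifier trained on simple time-series features predicts, at each step, which base forecaster to trust, using the model with the lowest error one step prior as the label. The base models, features, and synthetic data below are illustrative stand-ins for the turbidity setting.

```python
# Meta-model selection over base forecasters, labelled by the model that had
# the lowest absolute error one step prior. Illustrative sketch only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 1000
turbidity = np.cumsum(rng.normal(0, 1, n)) + 5 * np.sin(np.arange(n) / 20)

# Two toy base forecasters: persistence and a short moving average.
pred_persist = np.roll(turbidity, 1)
pred_ma = np.convolve(turbidity, np.ones(5) / 5, mode="same")
errors = np.abs(np.vstack([pred_persist, pred_ma]) - turbidity)   # (n_models, n)

# Features at time i: last value, rolling mean, rolling std over a short window.
w = 10
feats = np.column_stack([
    turbidity[w - 1:-1],
    [turbidity[i - w:i].mean() for i in range(w, n)],
    [turbidity[i - w:i].std() for i in range(w, n)],
])
labels = errors[:, w - 1:-1].argmin(axis=0)        # best base model one step prior

split = int(0.8 * len(feats))
meta = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats[:split], labels[:split])
choice = meta.predict(feats[split:])
meta_mae = errors[choice, np.arange(w, n)[split:]].mean()
print("meta-model MAE:", round(float(meta_mae), 3),
      "vs best single model MAE:", round(float(errors[:, w:].mean(axis=1).min()), 3))
```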
MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers ; Pre-trained Language Models (LMs) have become an integral part of Natural Language Processing (NLP) in recent years, due to their superior performance in downstream applications. In spite of this resounding success, the usability of LMs is constrained by computational and time complexity, along with their increasing size; an issue that has been referred to as 'over-parameterisation'. Different strategies have been proposed in the literature to alleviate these problems, with the aim of creating effective compact models that nearly match the performance of their bloated counterparts with negligible performance losses. One of the most popular techniques in this area of research is model distillation. Another potent but underutilised technique is cross-layer parameter sharing. In this work, we combine these two strategies and present MiniALBERT, a technique for converting the knowledge of fully parameterised LMs (such as BERT) into a compact recursive student. In addition, we investigate the application of bottleneck adapters for layer-wise adaptation of our recursive student, and also explore the efficacy of adapter tuning for fine-tuning of compact models. We test our proposed models on a number of general and biomedical NLP tasks to demonstrate their viability and compare them with the state-of-the-art and other existing compact models. All the codes used in the experiments are available at https://github.com/nlpie-research/MiniALBERT. Our pre-trained compact models can be accessed from https://huggingface.co/nlpie.
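The cross-layer parameter sharing at the heart of the recursive student can be sketched as a single Transformer encoder layer applied repeatedly, so effective depth grows without adding parameters; the bottleneck adapters and distillation objective are omitted in this illustration, and the dimensions are placeholders.

```python
# One shared Transformer encoder layer reused for every "layer" of the student.
import torch
import torch.nn as nn

class RecursiveEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_recursions=6):
        super().__init__()
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.n_recursions = n_recursions

    def forward(self, x):
        for _ in range(self.n_recursions):   # same weights reused at every depth
            x = self.shared_layer(x)
        return x

model = RecursiveEncoder()
tokens = torch.randn(2, 32, 256)             # (batch, sequence, hidden)
print(model(tokens).shape)                    # torch.Size([2, 32, 256])
print("parameters:", sum(p.numel() for p in model.parameters()))
```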
When does deep learning fail and how to tackle it? A critical analysis on polymer sequence-property surrogate models ; Deep learning models are gaining popularity and potency in predicting polymer properties. These models can be built using pre-existing data and are useful for the rapid prediction of polymer properties. However, the performance of a deep learning model is intricately connected to its topology and the volume of training data. There is no facile protocol available to select a deep learning architecture, and there is a lack of a large volume of homogeneous sequence-property data of polymers. These two factors are the primary bottleneck for the efficient development of deep learning models. Here we assess the severity of these factors and propose new algorithms to address them. We show that a linear layer-by-layer expansion of a neural network can help in identifying the best neural network topology for a given problem. Moreover, we map the discrete sequence space of a polymer to a continuous one-dimensional latent space using a machine learning pipeline to identify minimal data points for building a universal deep learning model. We implement these approaches for three representative cases of building sequence-property surrogate models, viz., the single-molecule radius of gyration of a copolymer, adhesive free energy of a copolymer, and copolymer compatibilizer, demonstrating the generality of the proposed strategies. This work establishes efficient methods for building universal deep learning models with minimal data and hyperparameters for predicting sequence-defined properties of polymers.
Parameter-Efficient Masking Networks ; A deeper network structure generally handles more complicated non-linearity and performs more competitively. Nowadays, advanced network designs often contain a large number of repetitive structures (e.g., Transformer). They empower the network capacity to a new level but also increase the model size inevitably, which is unfriendly to either model restoring or transferring. In this study, we are the first to investigate the representative potential of fixed random weights with limited unique values by learning diverse masks, and introduce the Parameter-Efficient Masking Networks (PEMN). This also naturally leads to a new paradigm for model compression to diminish the model size. Concretely, motivated by the repetitive structures in modern neural networks, we utilize one random initialized layer, accompanied with different masks, to convey different feature mappings and represent repetitive network modules. Therefore, the model can be expressed as 'one layer with a bunch of masks', which significantly reduces the model storage cost. Furthermore, we enhance our strategy by learning masks for a model filled by padding a given random weights vector. In this way, our method can further lower the space complexity, especially for models without many repetitive architectures. We validate the potential of PEMN learning masks on random weights with limited unique values and test its effectiveness for a new compression paradigm based on different network architectures. Code is available at https://github.com/yueb17/PEMN.
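A toy illustration of the masking paradigm (not the released PEMN code): one fixed, randomly initialized weight tensor is shared by several "virtual" layers, each of which learns only a binary mask over it via a straight-through estimator.

```python
# Frozen random weights + per-layer learnable masks; only the masks are trained.
import torch
import torch.nn as nn

class MaskedSharedLinear(nn.Module):
    def __init__(self, dim, n_virtual_layers):
        super().__init__()
        weight = torch.randn(dim, dim) / dim ** 0.5
        self.register_buffer("weight", weight)                  # fixed random weights
        self.mask_scores = nn.Parameter(0.01 * torch.randn(n_virtual_layers, dim, dim))

    def forward(self, x, layer_idx):
        scores = self.mask_scores[layer_idx]
        hard = (scores > 0).float()                              # binary mask
        mask = hard + scores.sigmoid() - scores.sigmoid().detach()  # straight-through
        return x @ (self.weight * mask).T

block = MaskedSharedLinear(dim=128, n_virtual_layers=4)
x = torch.randn(8, 128)
for i in range(4):                     # reuse the one layer 4 times, each with its own mask
    x = torch.relu(block(x, i))
print(x.shape, "trainable params:",
      sum(p.numel() for p in block.parameters() if p.requires_grad))
```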
Machine Learning Approach for Predicting Students' Academic Performance and Study Strategies based on their Motivation ; This research aims to develop machine learning models for students' academic performance and study strategies prediction which could be generalized to all courses in higher education. Key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) essential for students' learning process were used in building the models. Determining the broad effect of these attributes on students' academic performance and study strategy is the center of our interest. To investigate this, we used Scikit-learn in Python to build five machine learning models (Decision Tree, K-Nearest Neighbour, Random Forest, Linear/Logistic Regression, and Support Vector Machine) for both regression and classification tasks to perform our analysis. The models were trained, evaluated, and tested for accuracy using 924 university dentistry students' data collected by Chilean authors through a quantitative research design. A comparative analysis of the models revealed that the tree-based models, such as the random forest (with a prediction accuracy of 94.9%) and decision tree, show the best results compared to the linear, support vector, and k-nearest neighbour models. The models built in this research can be used in predicting student performance and study strategy so that appropriate interventions could be implemented to improve student learning progress. Thus, incorporating strategies that could improve diverse student learning attributes in the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes could be modelled together and used to adapt/personalize the learning process.
Reducing climate risk in energy system planning: a posteriori time series aggregation for models with storage ; The growth in variable renewables such as solar and wind is increasing the impact of climate uncertainty in energy system planning. Addressing this ideally requires high-resolution time series spanning at least a few decades. However, solving capacity expansion planning models across such datasets often requires too much computing time or memory. To reduce computational cost, users often employ time series aggregation to compress demand and weather time series into a smaller number of time steps. Methods are usually a priori, employing information about the input time series only. Recent studies highlight the limitations of this approach, since reducing statistical error metrics on input time series does not in general lead to more accurate model outputs. Furthermore, many aggregation schemes are unsuitable for models with storage since they distort chronology. In this paper, we introduce a posteriori time series aggregation schemes for models with storage. Our methods adapt to the underlying energy system model; aggregation may differ in systems with different technologies or topologies even with the same time series inputs. Furthermore, they preserve chronology and hence allow modelling of storage technologies. We investigate a number of approaches. We find that a posteriori methods can perform better than a priori ones, primarily through a systematic identification and preservation of relevant extreme events. We hope that these tools render long demand and weather time series more manageable in capacity expansion planning studies. We make our models, data, and code publicly available.
Deep Bidirectional Language-Knowledge Graph Pretraining ; Pretraining a language model (LM) on text has been shown to help various downstream NLP tasks. Recent works show that a knowledge graph (KG) can complement text data, offering structured background knowledge that provides a useful scaffold for reasoning. However, these works are not pretrained to learn a deep fusion of the two modalities at scale, limiting the potential to acquire fully joint representations of text and KG. Here we propose DRAGON (Deep Bidirectional Language-Knowledge Graph Pretraining), a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale. Specifically, our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities. We pretrain this model by unifying two self-supervised reasoning tasks, masked language modeling and KG link prediction. DRAGON outperforms existing LM and LM+KG models on diverse downstream tasks including question answering across general and biomedical domains, with a 5% absolute gain on average. In particular, DRAGON achieves notable performance on complex reasoning about language and knowledge (+10% on questions involving long contexts or multi-step reasoning) and low-resource QA (+8% on OBQA and RiddleSense), and new state-of-the-art results on various BioNLP tasks. Our code and trained models are available at https://github.com/michiyasunaga/dragon
A Hybrid System of Sound Event Detection Transformer and Frame-wise Model for DCASE 2022 Task 4 ; In this paper, we describe in detail our system for DCASE 2022 Task 4. The system combines two considerably different models: an end-to-end Sound Event Detection Transformer (SEDT) and a frame-wise model, the Metric Learning and Focal Loss CNN (MLFL-CNN). The former is an event-wise model which learns event-level representations and predicts sound event categories and boundaries directly, while the latter is based on the widely adopted frame-classification scheme, under which each frame is classified into event categories and event boundaries are obtained by post-processing such as thresholding and smoothing. For SEDT, self-supervised pretraining using unlabeled data is applied, and semi-supervised learning is adopted by using an online teacher, which is updated from the student model using the Exponential Moving Average (EMA) strategy and generates reliable pseudo labels for weakly-labeled and unlabeled data. For the frame-wise model, the ICT-TOSHIBA system of DCASE 2021 Task 4 is used. Experimental results show that the hybrid system considerably outperforms either individual model and achieves PSDS1 of 0.420 and PSDS2 of 0.783 on the validation set without external data. The code is available at https://github.com/965694547/HybridsystemofframewisemodelandSEDT
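The online-teacher component lends itself to a short sketch. Below is a generic mean-teacher-style EMA update, assumed to approximate the strategy described above; the decay value and scheduling are illustrative only.

```python
import copy
import torch

def ema_update(teacher, student, decay=0.999):
    """Exponential-moving-average update of the teacher from the student,
    as used in mean-teacher-style semi-supervised training."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

# Typical usage: teacher = copy.deepcopy(student) at initialization, then call
# ema_update(teacher, student) after every optimizer step. The teacher's
# predictions on weakly-labeled/unlabeled clips serve as pseudo labels.
```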
N-gram Is Back: Residual Learning of Neural Text Generation with n-gram Language Model ; N-gram language models (LMs) have been largely superseded by neural LMs as the latter exhibit better performance. However, we find that n-gram models can achieve satisfactory performance on a large proportion of testing cases, indicating that they have already captured abundant knowledge of the language with relatively low computational cost. With this observation, we propose to learn a neural LM that fits the residual between an n-gram LM and the real-data distribution. The combination of n-gram and neural LMs not only allows the neural part to focus on the deeper understanding of language but also provides a flexible way to customize an LM by switching the underlying n-gram model without changing the neural model. Experimental results on three typical language tasks (i.e., language modeling, machine translation, and summarization) demonstrate that our approach attains additional performance gains over popular standalone neural models consistently. We also show that our approach allows for effective domain adaptation by simply switching to a domain-specific n-gram model, without any extra training. Our code is released at https://github.com/ghrua/NgramRes
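One plausible way to realize the residual combination, shown as a sketch: the neural logits are added to the n-gram log-probabilities before the cross-entropy, so the neural part only has to model what the n-gram LM gets wrong. The exact formulation used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def residual_lm_loss(neural_logits, ngram_probs, targets, eps=1e-8):
    """Residual learning on top of an n-gram LM: the combined next-token
    distribution is softmax(log p_ngram + neural residual logits)."""
    combined_logits = torch.log(ngram_probs + eps) + neural_logits
    return F.cross_entropy(combined_logits, targets)

# neural_logits: (batch, vocab) residual logits from the neural LM
# ngram_probs:  (batch, vocab) next-token probabilities from an n-gram model
# targets:      (batch,) gold next-token ids
```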
Look to the Right: Mitigating Relative Position Bias in Extractive Question Answering ; Extractive question answering (QA) models tend to exploit spurious correlations to make predictions when a training set has unintended biases. This tendency results in models not being generalizable to examples where the correlations do not hold. Determining the spurious correlations QA models can exploit is crucial in building generalizable QA models in real-world applications; moreover, a method needs to be developed that prevents these models from learning the spurious correlations even when a training set is biased. In this study, we discovered that the relative position of an answer, which is defined as the relative distance from an answer span to the closest question-context overlap word, can be exploited by QA models as superficial cues for making predictions. Specifically, we find that when the relative positions in a training set are biased, the performance on examples with relative positions unseen during training is significantly degraded. To mitigate the performance degradation for unseen relative positions, we propose an ensemble-based debiasing method that does not require prior knowledge about the distribution of relative positions. We demonstrate that the proposed method mitigates the models' reliance on relative positions using the biased and full SQuAD datasets. We hope that this study can help enhance the generalization ability of QA models in real-world applications.
Do Pretrained Models Benefit Equally in Continual Learning? ; Existing work on continual learning (CL) is primarily devoted to developing algorithms for models trained from scratch. Despite their encouraging performance on contrived benchmarks, these algorithms show dramatic performance drops in real-world scenarios. Therefore, this paper advocates the systematic introduction of pretraining to CL, which is a general recipe for transferring knowledge to downstream tasks but is substantially missing in the CL community. Our investigation reveals the multifaceted complexity of exploiting pretrained models for CL along three different axes: pretrained models, CL algorithms, and CL scenarios. Perhaps most intriguingly, improvements in CL algorithms from pretraining are very inconsistent: an underperforming algorithm could become competitive and even state-of-the-art when all algorithms start from a pretrained model. This indicates that the current paradigm, where all CL methods are compared in from-scratch training, does not well reflect the true CL objective and desired progress. In addition, we make several other important observations, including that CL algorithms that exert less regularization benefit more from a pretrained model, and that a stronger pretrained model such as CLIP does not guarantee a better improvement. Based on these findings, we introduce a simple yet effective baseline that employs minimum regularization and leverages the more beneficial pretrained model, coupled with a two-stage training pipeline. We recommend including this strong baseline in the future development of CL algorithms, due to its demonstrated state-of-the-art performance.
An Approach for Noisy, Crowdsourced Datasets Utilizing Ensemble Modeling, Normalized Distributions of Annotations, and Entropic Measures of Uncertainty ; Performing classification on noisy, crowdsourced image datasets can prove challenging even for the best neural networks. Two issues which complicate the problem on such datasets are class imbalance and ground-truth uncertainty in labeling. The AL-ALL and AL-PUB datasets, consisting of tightly cropped individual characters from images of ancient Greek papyri, are strongly affected by both issues. The application of ensemble modeling to such datasets can help identify images where the ground truth is questionable and quantify the trustworthiness of those samples. As such, we apply stacked generalization consisting of nearly identical ResNets with different loss functions: one utilizing sparse cross-entropy (CXE) and the other Kullback-Leibler divergence (KLD). Both networks use labels drawn from the crowdsourced consensus. For the second network, the KLD is calculated with respect to the proposed Normalized Distribution of Annotations (NDA). For our ensemble model, we apply a k-nearest neighbors model to the outputs of the CXE and KLD networks. Individually, the ResNet models have approximately 93% accuracy, while the ensemble model achieves an accuracy of 95%. We also perform an analysis of the Shannon entropy of the various models' output distributions to measure classification uncertainty. Our results suggest that entropy is useful for predicting model misclassifications.
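A brief sketch of the two ingredients that are easiest to make concrete, under the stated assumptions that the NDA is a per-image normalization of annotation counts and that stacking concatenates the two networks' softmax outputs; neither detail is spelled out here beyond the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def normalized_distribution_of_annotations(counts):
    """Turn per-image crowdsourced label counts into a probability
    distribution (one plausible reading of the proposed NDA)."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def stack_with_knn(cxe_probs, kld_probs, consensus_labels, k=5):
    """Stacked generalization: a k-NN meta-model fitted on the concatenated
    softmax outputs of the CXE- and KLD-trained ResNets."""
    meta_features = np.concatenate([cxe_probs, kld_probs], axis=1)
    meta_model = KNeighborsClassifier(n_neighbors=k)
    meta_model.fit(meta_features, consensus_labels)
    return meta_model
```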
Combined space-time reduced-order model with 3D deep convolution for extrapolating fluid dynamics ; There is a critical need for efficient and reliable active flow control strategies to reduce drag and noise in aerospace and marine engineering applications. While traditional full-order models based on the Navier-Stokes equations are not feasible, advanced model reduction techniques can be inefficient for active control tasks, especially with strong nonlinearity and convection-dominated phenomena. Using convolutional recurrent autoencoder network architectures, deep learning-based reduced-order models have recently been shown to be effective while performing several orders of magnitude faster than full-order simulations. However, these models encounter significant challenges outside the training data, limiting their effectiveness for active control and optimization tasks. In this study, we aim to improve the extrapolation capability by modifying the network architecture and integrating coupled space-time physics as an implicit bias. Reduced-order models via deep learning generally employ decoupling in spatial and temporal dimensions, which can introduce modeling and approximation errors. To alleviate these errors, we propose a novel technique for learning coupled spatial-temporal correlation using a 3D convolution network. We assess the proposed technique against a standard encoder-propagator-decoder model and demonstrate a superior extrapolation performance. To demonstrate the effectiveness of the 3D convolution network, we consider a benchmark problem of the flow past a circular cylinder at laminar flow conditions and use the spatio-temporal snapshots from the full-order simulations. Our proposed 3D convolution architecture accurately captures the velocity and pressure fields for varying Reynolds numbers. Compared to the standard encoder-propagator-decoder network, the spatio-temporal 3D convolution network improves the prediction range of Reynolds numbers outside of the training data.
Reconfigurable Intelligent Surface: Power Consumption Modeling and Practical Measurement Validation ; The reconfigurable intelligent surface (RIS) has received a lot of interest because of its capacity to reconfigure the wireless communication environment in a cost- and energy-efficient way. However, the realistic power consumption modeling and measurement validation of RIS has received far too little attention. Therefore, in this work, we model the power consumption of RIS and conduct measurement validations using various RISs to fill this vacancy. Firstly, we propose a practical power consumption model of RIS. The RIS hardware is divided into three basic parts: the FPGA control board, the drive circuits, and the RIS unit cells. The power consumption of the first two parts is modeled as P_static and that of the last part is modeled as P_units. The expressions of P_static and P_units vary among different types of RISs. Secondly, we conduct measurements on various RISs to validate the proposed model. Five different RISs, including the PIN-diode, varactor-diode, and RF-switch types, are measured, and the measurement results validate the generality and applicability of the proposed power consumption model of RIS. Finally, we summarize the measurement results and discuss approaches to achieve a low-power-consumption design of RIS-assisted wireless communication systems.
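The decomposition can be written as a one-line model; the identical-unit-cell assumption below is a simplification, since the paper derives different expressions of P_static and P_units for different RIS types.

```python
def ris_power_consumption(p_static, p_unit_on, n_units_on):
    """Total RIS power following the decomposition above: a static term
    (FPGA control board + drive circuits) plus the unit-cell term, here
    assuming identical 'on' cells with a common per-cell draw."""
    return p_static + n_units_on * p_unit_on

# Example with hypothetical numbers: 2 W static consumption,
# 5 mW per activated PIN-diode cell, 256 cells switched on.
total_w = ris_power_consumption(p_static=2.0, p_unit_on=0.005, n_units_on=256)
print(total_w)  # 3.28 W
```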
A Bayesian Framework on Asymmetric Mixture of Factor Analysers ; The mixture of factor analyzers (MFA) model is an efficient model for the analysis of high-dimensional data, in which the factor-analyzer technique reduces the number of free parameters through the structure imposed on the covariance matrices. The model also provides an important methodology to determine latent groups in data. There are several pieces of research extending the model to asymmetrical and/or outlier-prone datasets, with some known computational limitations that have been examined in frequentist settings. In this paper, an MFA model with a rich and flexible class of skew normal unrestricted generalized hyperbolic (SUN-GH) distributions, along with a Bayesian structure offering several computational benefits, is introduced. The SUN-GH family provides considerable flexibility to model skewness in different directions as well as allowing for heavy-tailed data. The structure of the SUN-GH family has several desirable properties, including an analytically flexible density, which eases the computations required for parameter estimation. Considering factor analysis models, the SUN-GH family also allows for skewness and heavy tails in both the error component and the factor scores. In the present study, the advantages of using this family of distributions are discussed, and the suitable efficiency of the introduced MFA model is demonstrated using real data examples and simulation.
Review of Gamow-Teller and Fermi Transition Strength Functions ; We studied the temperature effect on isospin-singlet pairings in Gamow-Teller excitations. Working within hole-particle theories of the mean-field shell model, we studied decay transitions using the one-particle-one-hole model for the beta decay of odd-even isotopes and the two-particle-hole model for the beta decay of even-even and/or odd-odd isotopes. Our reference isotopes for the one-particle-one-hole model are ^15O, ^15N, ^17F, and ^41Sc, whereas for the two-particle-hole model we use ^16N for beta decay and ^56Ni and ^40Sc for beta/EC decay. The calculations involve evaluating the matrix elements of the Gamow-Teller and Fermi transitions and then calculating the reduced transition probabilities of Gamow-Teller and Fermi, from which we evaluate the half-lives and the strength function (ft). The results are compared with the available experimental data. For the one-particle-one-hole model we found a deviation from the experimental values, which indicates that the model is not valid for the beta decay of even-even nuclei in the ground state due to the residual nucleon-nucleon interaction. As for the two-particle-hole model, we calculated the transition amplitude, from which we calculated the strength of the transition (log ft values). We found an excellent agreement between experimental and theoretical results. By plotting temperature against the log ft values, we found that the general trend is for the strength function values to decrease slowly as the temperature increases. There are fluctuations in log ft due to the strong dependence of log ft on the shell configuration of the valence nucleons.
Structured Mixture of Continuation-ratio Logits Models for Ordinal Regression ; We develop a nonparametric Bayesian modeling approach to ordinal regression based on priors placed directly on the discrete distribution of the ordinal responses. The prior probability models are built from a structured mixture of multinomial distributions. We leverage a continuation-ratio logits representation to formulate the mixture kernel, with mixture weights defined through the logit stick-breaking process that incorporates the covariates through a linear function. The implied regression functions for the response probabilities can be expressed as weighted sums of parametric regression functions, with covariate-dependent weights. Thus, the modeling approach achieves flexible ordinal regression relationships, avoiding linearity or additivity assumptions in the covariate effects. A key model feature is that the parameters for both the mixture kernel and the mixture weights can be associated with a continuation-ratio logits regression structure. Hence, an efficient and relatively easy to implement posterior simulation method can be designed, using Pólya-Gamma data augmentation. Moreover, the model is built from a conditional independence structure for category-specific parameters, which results in additional computational efficiency gains through partial parallel sampling. In addition to the general mixture structure, we study simplified model versions that incorporate covariate dependence only in the mixture kernel parameters or only in the mixture weights. For all proposed models, we discuss approaches to prior specification and develop Markov chain Monte Carlo methods for posterior simulation. The methodology is illustrated with several synthetic and real data examples.
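For readers unfamiliar with the kernel, a small sketch of how continuation-ratio logits map to ordinal category probabilities; the mixture and covariate structure described above are omitted.

```python
import numpy as np

def continuation_ratio_probs(logits):
    """Map continuation-ratio logits to ordinal category probabilities.
    logit_j parameterizes Pr(Y = j | Y >= j); the final category receives
    the remaining probability mass."""
    hazards = 1.0 / (1.0 + np.exp(-np.asarray(logits)))  # Pr(Y=j | Y>=j)
    probs, survive = [], 1.0
    for h in hazards:
        probs.append(survive * h)
        survive *= (1.0 - h)
    probs.append(survive)  # last category
    return np.array(probs)

print(continuation_ratio_probs([0.2, -0.5, 1.0]))  # probabilities over 4 ordinal levels
```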
Non-Stationary Large-Scale Statistics of Precipitation Extremes in Central Europe ; Extreme precipitation shows non-stationary behavior over time, but also with respect to other large-scale variables. While this effect is often neglected, we propose a model including the influence of the North Atlantic Oscillation, time, surface temperature and a blocking index. The model features the flexibility to use annual maxima as well as seasonal maxima to be fitted in a generalized extreme value setting. To further increase the efficiency of data usage, maxima from different accumulation durations are aggregated so that information for extremes on different time scales can be provided. Our model is trained on individual station data with temporal resolutions ranging from one minute to one day across Germany. The models are selected with a stepwise BIC model selection and verified with a cross-validated quantile skill index. The verification shows that the new model performs better than a reference model without large-scale information. Also, the new model enables insights into the effect of large-scale variables on extreme precipitation. Results suggest that the probability of extreme precipitation increases with time since 1950 in all seasons. High probabilities of extremes are positively correlated with blocking situations in summer and with temperature in winter. However, they are negatively correlated with blocking situations in winter and temperature in summer.
Reduced Order Probabilistic Emulation for Physics-Based Thermosphere Models ; The geospace environment is volatile and highly driven. Space weather has effects on Earth's magnetosphere that cause a dynamic and enigmatic response in the thermosphere, particularly in the evolution of neutral mass density. Many models exist that use space weather drivers to produce a density response, but these models are typically computationally expensive or inaccurate for certain space weather conditions. In response, this work aims to employ a probabilistic machine learning (ML) method to create an efficient surrogate for the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIE-GCM), a physics-based thermosphere model. Our method leverages principal component analysis to reduce the dimensionality of TIE-GCM and recurrent neural networks to model the dynamic behavior of the thermosphere much more quickly than the numerical model. The newly developed reduced order probabilistic emulator (ROPE) uses Long Short-Term Memory neural networks to perform time-series forecasting in the reduced state and provide distributions for future density. We show that across the available data, TIE-GCM ROPE has similar error to previous linear approaches while improving storm-time modeling. We also conduct a satellite propagation study for the significant November 2003 storm, which shows that TIE-GCM ROPE can capture the position resulting from TIE-GCM density with a 5 km bias. Meanwhile, linear approaches provide point estimates that can result in biases of 7-18 km.
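A minimal sketch of a ROPE-style emulator pipeline under the assumptions that PCA provides the reduced state and an LSTM head outputs a mean and log-variance per mode; the layer sizes and stand-in data are placeholders, not the actual TIE-GCM configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

density_snapshots = np.random.rand(1000, 5000)   # (time, flattened grid) -- stand-in data
pca = PCA(n_components=10)
reduced = pca.fit_transform(density_snapshots)   # (time, 10) reduced state

class ReducedStateForecaster(nn.Module):
    def __init__(self, n_modes=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_modes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_modes)  # mean and log-variance per mode

    def forward(self, x):                            # x: (batch, window, n_modes)
        out, _ = self.lstm(x)
        mean, logvar = self.head(out[:, -1]).chunk(2, dim=-1)
        return mean, logvar                          # distribution over the next reduced state

model = ReducedStateForecaster()
```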
Flexible Basis Representations for Modeling High-Dimensional Hierarchical Spatial Data ; Nonstationary and non-Gaussian spatial data are prevalent across many fields (e.g., counts of animal species, disease incidences in susceptible regions, and remotely-sensed satellite imagery). Due to modern data collection methods, the size of these datasets has grown considerably. Spatial generalized linear mixed models (SGLMMs) are a flexible class of models used to model nonstationary and non-Gaussian datasets. Despite their utility, SGLMMs can be computationally prohibitive for even moderately large datasets. To circumvent this issue, past studies have embedded nested radial basis functions into the SGLMM. However, two crucial specifications, knot placement and bandwidth parameters, which directly affect model performance, are typically fixed prior to model-fitting. We propose a novel approach to model large nonstationary and non-Gaussian spatial datasets using adaptive radial basis functions. Our approach (1) partitions the spatial domain into subregions; (2) employs reversible-jump Markov chain Monte Carlo (RJMCMC) to infer the number and location of the knots within each partition; and (3) models the latent spatial surface using partition-varying and adaptive basis functions. Through an extensive simulation study, we show that our approach provides more accurate predictions than competing methods while preserving computational efficiency. We demonstrate our approach on two environmental datasets: incidences of plant species and counts of bird species in the United States.
Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models ; Diffusion models have experienced a surge of interest as highly expressive yet efficiently trainable probabilistic models. We show that these models are an excellent fit for synthesising human motion that co-occurs with audio, e.g., dancing and co-speech gesticulation, since motion is complex and highly ambiguous given audio, calling for a probabilistic description. Specifically, we adapt the DiffWave architecture to model 3D pose sequences, putting Conformers in place of dilated convolutions for improved modelling power. We also demonstrate control over motion style, using classifier-free guidance to adjust the strength of the stylistic expression. Experiments on gesture and dance generation confirm that the proposed method achieves top-of-the-line motion quality, with distinctive styles whose expression can be made more or less pronounced. We also synthesise path-driven locomotion using the same model architecture. Finally, we generalise the guidance procedure to obtain product-of-expert ensembles of diffusion models and demonstrate how these may be used for, e.g., style interpolation, a contribution we believe is of independent interest. See https://www.speech.kth.se/research/listendenoiseaction for video examples, data, and code.
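Classifier-free guidance, used above to modulate stylistic strength, reduces to a simple combination of two denoiser evaluations; the sketch below shows only that combination, not the full sampling loop.

```python
import torch

def classifier_free_guidance(eps_cond, eps_uncond, guidance_scale=1.0):
    """Classifier-free guidance: extrapolate from the unconditional toward the
    conditional denoiser output. guidance_scale > 1 strengthens the stylistic
    expression, values below 1 weaken it."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# At each reverse-diffusion step the model is evaluated twice (with and
# without the style/audio conditioning) and the two predictions are combined.
```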
Bayesian Hierarchical Models for Multi-type Survey Data Using Spatially Correlated Covariates Measured with Error ; We introduce Bayesian hierarchical models for predicting high-dimensional tabular survey data whose responses can follow one or multiple classes of distributions (e.g., Gaussian, Poisson, Binomial, etc.). We adopt a Bayesian implementation of a Hierarchical Generalized Transformation (HGT) model to deal with the non-conjugacy of non-Gaussian data models when estimated using a Latent Gaussian Process (LGP) model. Survey data are usually prone to a high degree of sampling error, and we use covariates that are prone to measurement error as well as those free of any such error. A classical measurement error component is defined to deal with the sampling error in the covariates. The proposed models can be high-dimensional, and we employ the notion of basis function expansions to provide an effective approach to dimension reduction. The HGT component lends flexibility to our model to incorporate multi-type response datasets under a unified latent process model framework. To demonstrate the applicability of our methodology, we provide results from simulation studies and data applications arising from a dataset consisting of the U.S. Census Bureau's American Community Survey (ACS) 5-year period estimates of the total population count under the poverty threshold and the ACS 5-year period estimates of median housing costs at the county level across multiple states in the USA.
VeriCompress: A Tool to Streamline the Synthesis of Verified Robust Compressed Neural Networks from Scratch ; AI's widespread integration has led to neural networks (NNs) being deployed on edge and similar limited-resource platforms for safety-critical scenarios. Yet, NNs' fragility raises concerns about reliable inference. Moreover, constrained platforms demand compact networks. This study introduces VeriCompress, a tool that automates the search and training of compressed models with robustness guarantees. These models are well-suited for safety-critical applications and adhere to predefined architecture and size limitations, making them deployable on resource-restricted platforms. The method trains models 2-3 times faster than the state-of-the-art approaches, surpassing relevant baseline approaches by average accuracy and robustness gains of 15.1 and 9.8 percentage points, respectively. When deployed on a resource-restricted generic platform, these models require 5-8 times less memory and 2-4 times less inference time than models used in the verified robustness literature. Our comprehensive evaluation across various model architectures and datasets, including MNIST, CIFAR, SVHN, and a relevant pedestrian detection dataset, showcases VeriCompress's capacity to identify compressed verified robust models with reduced computation overhead compared to current standards. This underscores its potential as a valuable tool for end users, such as developers of safety-critical applications on edge or Internet of Things platforms, empowering them to create suitable models for safety-critical, resource-constrained platforms in their respective domains.
Melding Wildlife Surveys to Improve Conservation Inference ; Integrated models are a popular tool for analyzing species of conservation concern. Species of conservation concern are often monitored by multiple entities that generate several datasets. Individually, these datasets may be insufficient for guiding management due to low spatiotemporal resolution, biased sampling, or large observational uncertainty. Integrated models provide an approach for assimilating multiple datasets in a coherent framework that can compensate for these deficiencies. While conventional integrated models have been used to assimilate count data with surveys of survival, fecundity, and harvest, they can also assimilate ecological surveys that have differing spatiotemporal regions and observational uncertainties. Motivated by independent aerial and ground surveys of the lesser prairie-chicken, we developed an integrated modeling approach that assimilates density estimates derived from surveys with distinct sources of observational error into a joint framework that provides shared inference on spatiotemporal trends. We model these data using a Bayesian Markov melding approach and apply several data augmentation strategies for efficient sampling. In a simulation study, we show that our integrated model improved predictive performance relative to models that analyzed the surveys independently. We use the integrated model to facilitate prediction of lesser prairie-chicken density at unsampled regions and perform a sensitivity analysis to quantify the inferential cost associated with reduced survey effort.
Word-Level Representation From Bytes For Language Modeling ; Modern language models mostly take subwords as input, a design that balances the trade-off between vocabulary size, number of parameters, and performance. However, subword tokenization still has disadvantages, such as not being robust to noise and being difficult to generalize to new languages. Also, the current trend of scaling up models reveals that larger models require larger embeddings, but that makes parallelization hard. Previous work on image classification proves that splitting raw input into a sequence of chunks is a strong, model-agnostic inductive bias. Based on this observation, we rethink the existing character-aware method that takes character-level inputs but makes word-level sequence modeling and prediction. We overhaul this method by introducing a cross-attention network that builds word-level representation directly from bytes, and a subword-level prediction based on word-level hidden states to avoid the time and space requirement of word-level prediction. With these two improvements combined, we have a token-free model with slim input embeddings for downstream tasks. We name our method Byte2Word and perform evaluations on language modeling and text classification. Experiments show that Byte2Word is on par with the strong subword baseline BERT but only takes up 10% of the embedding size. We further test our method on synthetic noise and cross-lingual transfer and find it competitive with baseline methods on both settings.
DGEKT: A Dual Graph Ensemble Learning Method for Knowledge Tracing ; Knowledge tracing aims to trace students' evolving knowledge states by predicting their future performance on concept-related exercises. Recently, some graph-based models have been developed to incorporate the relationships between exercises to improve knowledge tracing, but only a single type of relationship information is generally explored. In this paper, we present a novel Dual Graph Ensemble learning method for Knowledge Tracing (DGEKT), which establishes a dual graph structure of students' learning interactions to capture the heterogeneous exercise-concept associations and interaction transitions by hypergraph modeling and directed graph modeling, respectively. To ensemble the dual graph models, we introduce the technique of online knowledge distillation, due to the fact that, although the knowledge tracing model is expected to predict students' responses to exercises related to different concepts, it is optimized merely with respect to the prediction accuracy on a single exercise at each step. With online knowledge distillation, the dual graph models are adaptively combined to form a stronger teacher model, which in turn provides its predictions on all exercises as extra supervision for better modeling ability. In the experiments, we compare DGEKT against eight knowledge tracing baselines on three benchmark datasets, and the results demonstrate that DGEKT achieves state-of-the-art performance.
Reinforcement Learning Agent Design and Optimization with Bandwidth Allocation Model ; Reinforcement learning (RL) is currently used in various real-life applications. RL-based solutions have the potential to generically address problems, including those that are difficult to solve with heuristics and meta-heuristics and, in addition, the set of problems and issues where some intelligent or cognitive approach is required. However, reinforcement learning agents require a design that is far from straightforward and raise important design issues. RL agent design issues include the target problem modeling, state-space explosion, the training process, and agent efficiency. Current research addresses these issues with the aim of fostering RL dissemination. A Bandwidth Allocation Model (BAM), in summary, allocates and shares resources with users. There are three basic BAM models and several hybrids that differ in how they allocate and share resources among users. This paper addresses the issue of RL agent design and efficiency. The RL agent's objective is to allocate and share resources among users. The paper investigates how a BAM model can contribute to the RL agent's design and efficiency. The AllocTC-Sharing (ATCS) model is analytically described and simulated to evaluate how it mimics the RL agent operation and how the ATCS can offload computational tasks from the RL agent. The essential argument investigated is whether algorithms integrated with the RL agent design and operation have the potential to facilitate agent design and optimize its execution. The ATCS analytical model and simulation presented demonstrate that a BAM model offloads agent tasks and assists in the agent's design and optimization.
Interpretability Analysis of Deep Models for COVID-19 Detection ; During the outbreak of the COVID-19 pandemic, several research areas joined efforts to mitigate the damage caused by SARS-CoV-2. In this paper we present an interpretability analysis of a convolutional neural network based model for COVID-19 detection in audio. We investigate which features are important for the model's decision process, examining spectrograms, F0, F0 standard deviation, sex and age. We then analyse model decisions by generating heat maps for the trained models to capture their attention during the decision process. Focusing on an explainable artificial intelligence approach, we show that the studied models can take unbiased decisions even in the presence of spurious data in the training set, given adequate preprocessing steps. Our best model achieves 94.44% detection accuracy, with results indicating that the models favor spectrograms in the decision process, particularly high-energy areas of the spectrogram related to prosodic domains, while F0 also leads to efficient COVID-19 detection.
Easy Begun is Half Done: Spatial-Temporal Graph Modeling with ST-Curriculum Dropout ; Spatial-temporal (ST) graph modeling, such as traffic speed forecasting and taxi demand prediction, is an important task in the deep learning area. However, for the nodes in a graph, their ST patterns can vary greatly in difficulty for modeling, owing to the heterogeneous nature of ST data. We argue that unveiling the nodes to the model in a meaningful order, from easy to complex, can provide performance improvements over the traditional training procedure. The idea has its root in Curriculum Learning, which suggests that in the early stage of training, models can be sensitive to noise and difficult samples. In this paper, we propose ST-Curriculum Dropout, a novel and easy-to-implement strategy for spatial-temporal graph modeling. Specifically, we evaluate the learning difficulty of each node in high-level feature space and drop the difficult ones out to ensure the model only needs to handle fundamental ST relations at the beginning, before gradually moving to hard ones. Our strategy can be applied to any canonical deep learning architecture without extra trainable parameters, and extensive experiments on a wide range of datasets are conducted to illustrate that, by controlling the difficulty level of ST relations as the training progresses, the model is able to capture a better representation of the data and thus yields better generalization.
Understanding and Enhancing Robustness of Concept-based Models ; The rising usage of deep neural networks to perform decision making in critical applications like medical diagnosis and financial analysis has raised concerns regarding their reliability and trustworthiness. As automated systems become more mainstream, it is important that their decisions be transparent, reliable and understandable by humans for better trust and confidence. To this effect, concept-based models such as Concept Bottleneck Models (CBMs) and Self-Explaining Neural Networks (SENN) have been proposed, which constrain the latent space of a model to represent high-level concepts easily understood by domain experts in the field. Although concept-based models promise a good approach to both increasing explainability and reliability, it is yet to be shown whether they demonstrate robustness and output consistent concepts under systematic perturbations to their inputs. To better understand the performance of concept-based models on curated malicious samples, in this paper we aim to study their robustness to adversarial perturbations, i.e., the imperceptible changes to the input data that are crafted by an attacker to fool a well-learned concept-based model. Specifically, we first propose and analyze different malicious attacks to evaluate the security vulnerability of concept-based models. Subsequently, we propose a potential general adversarial training-based defense mechanism to increase the robustness of these systems to the proposed malicious attacks. Extensive experiments on one synthetic and two real-world datasets demonstrate the effectiveness of the proposed attacks and the defense approach.
Exploring Stochastic Autoregressive Image Modeling for Visual Representation ; Autoregressive language modeling (ALM) has been successfully used in self-supervised pretraining in natural language processing (NLP). However, this paradigm has not achieved results comparable to other self-supervised approaches in computer vision (e.g., contrastive learning, masked image modeling). In this paper, we try to find the reason why autoregressive modeling does not work well on vision tasks. To tackle this problem, we fully analyze the limitations of visual autoregressive methods and propose a novel stochastic autoregressive image modeling approach, named SAIM, built on two simple designs. First, we employ a stochastic permutation strategy to generate effective and robust image context, which is critical for vision tasks. Second, we create a parallel encoder-decoder training process in which the encoder serves a similar role to the standard vision transformer, focusing on learning the whole contextual information, while the decoder predicts the content of the current position, so that the encoder and decoder can reinforce each other. By introducing stochastic prediction and the parallel encoder-decoder, SAIM significantly improves the performance of autoregressive image modeling. Our method achieves the best accuracy (83.9%) on the vanilla ViT-Base model among methods using only ImageNet-1K data. Transfer performance on downstream tasks also shows that our model achieves competitive performance.
Images Speak in Images: A Generalist Painter for In-Context Visual Learning ; In-context learning, as a new paradigm in NLP, allows the model to rapidly adapt to various tasks with only a handful of prompts and examples. But in computer vision, the difficulties for in-context learning lie in that tasks vary significantly in the output representations, thus it is unclear how to define the general-purpose task prompts that the vision model can understand and transfer to out-of-domain tasks. In this work, we present Painter, a generalist model which addresses these obstacles with an image-centric solution, that is, to redefine the output of core vision tasks as images, and specify task prompts as also images. With this idea, our training process is extremely simple, which performs standard masked image modeling on the stitch of input and output image pairs. This makes the model capable of performing tasks conditioned on visible image patches. Thus, during inference, we can adopt a pair of input and output images from the same task as the input condition, to indicate which task to perform. Without bells and whistles, our generalist Painter can achieve competitive performance compared to well-established task-specific models, on seven representative vision tasks ranging from high-level visual understanding to low-level image processing. In addition, Painter significantly outperforms recent generalist models on several challenging tasks.
Mitigation of Spatial Nonstationarity with Vision Transformers ; Spatial nonstationarity, the location variance of features' statistical distributions, is ubiquitous in many natural settings. For example, in geological reservoirs rock matrix porosity varies vertically due to geomechanical compaction trends, in mineral deposits grades vary due to sedimentation and concentration processes, in hydrology rainfall varies due to the atmosphere and topography interactions, and in metallurgy crystalline structures vary due to differential cooling. Conventional geostatistical modeling workflows rely on the assumption of stationarity to be able to model spatial features for geostatistical inference. Nevertheless, this is often not a realistic assumption when dealing with nonstationary spatial data, and this has motivated a variety of nonstationary spatial modeling workflows such as trend and residual decomposition, co-simulation with secondary features, and spatial segmentation and independent modeling over stationary subdomains. The advent of deep learning technologies has enabled new workflows for modeling spatial relationships. However, there is a paucity of demonstrated best practice and general guidance on the mitigation of spatial nonstationarity with deep learning in the geospatial context. We demonstrate the impact of two common types of geostatistical spatial nonstationarity on deep learning model prediction performance and propose the mitigation of such impacts using self-attention vision transformer models. We demonstrate the utility of vision transformers for the mitigation of nonstationarity with relative errors as low as 10%, exceeding the performance of alternative deep learning methods such as convolutional neural networks. We establish best practice by demonstrating the ability of self-attention networks to model large-scale spatial relationships in the presence of commonly observed geospatial nonstationarity.
Nonparametric estimation of mixed discrete choice models ; In this paper, different strands of literature are combined in order to obtain algorithms for the semiparametric estimation of discrete choice models that include the modelling of unobserved heterogeneity by using mixing distributions for the parameters defining the preferences. The models use the theory on nonparametric maximum likelihood estimation (NPMLE) that has been developed for general mixing models. The expectation-maximization (EM) techniques used in the NPMLE literature are combined with strategies for choosing appropriate approximating models using adaptive grid techniques. Jointly, this leads to techniques for specification and estimation that can be used to obtain a consistent specification of the mixing distribution. Additionally, algorithms for the estimation are developed that help to decrease problems due to the curse of dimensionality. The proposed algorithms are demonstrated in a small-scale simulation study to be useful for the specification and estimation of mixture models in the discrete choice context, providing some information on the specification of the mixing distribution. The simulations document that some aspects of the mixing distribution, such as the expectation, can be estimated reliably. They also demonstrate, however, that typically different approximations to the mixing distribution lead to similar values of the likelihood and hence are hard to discriminate. Therefore, it does not appear to be possible to reliably infer the most appropriate parametric form for the estimated mixing distribution.
Accuracy and precision of triaxial orbit models I: SMBH mass, stellar mass and dark-matter halo ; We investigate the accuracy and precision of triaxial dynamical orbit models by fitting two-dimensional mock observations of a realistic N-body merger simulation resembling a massive early-type galaxy with a supermassive black hole (SMBH). We show that we can reproduce the triaxial N-body merger remnant's correct black hole mass, stellar mass-to-light ratio and total enclosed mass inside the half-light radius for several different tested orientations with an unprecedented accuracy of 5-10%. Our dynamical models use the entire non-parametric line-of-sight velocity distribution (LOSVD) rather than parametric LOSVDs or velocity moments as constraints. Our results strongly suggest that state-of-the-art integral-field projected kinematic data contain only minor degeneracies with respect to the mass and anisotropy recovery. Moreover, this also demonstrates the strength of the Schwarzschild method in general. We achieve the proven high recovery accuracy and precision with our newly developed modeling machinery by combining several advancements: (i) our new semi-parametric deprojection code probes degeneracies and allows us to constrain the viewing angles of a triaxial galaxy; (ii) our new orbit modeling code SMART uses a 5-dim orbital starting space to representatively sample in particular near-Keplerian orbits in galaxy centers; (iii) we use a generalised information criterion (AICp) to optimise the smoothing and to compare different mass models to avoid biases that occur in chi-squared-based models with varying model flexibilities.
Establishing a stronger baseline for lightweight contrastive models ; Recent research has reported a performance degradation in self-supervised contrastive learning for specially designed efficient networks, such as MobileNet and EfficientNet. A common practice to address this problem is to introduce a pretrained contrastive teacher model and train the lightweight networks with distillation signals generated by the teacher. However, it is time and resource consuming to pretrain a teacher model when it is not available. In this work, we aim to establish a stronger baseline for lightweight contrastive models without using a pretrained teacher model. Specifically, we show that the optimal recipe for efficient models is different from that of larger models, and using the same training settings as ResNet-50, as previous research does, is inappropriate. Additionally, we observe a common issue in contrastive learning where either the positive or negative views can be noisy, and propose a smoothed version of the InfoNCE loss to alleviate this problem. As a result, we successfully improve the linear evaluation results from 36.3% to 62.3% for MobileNetV3-Large and from 42.2% to 65.8% for EfficientNet-B0 on ImageNet, closing the accuracy gap to ResNet-50 with 5x fewer parameters. We hope our research will facilitate the usage of lightweight contrastive models.
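One simple way to smooth InfoNCE, sketched below with label smoothing applied over the similarity logits; the paper's exact smoothing formulation may differ, and the temperature value is a placeholder.

```python
import torch
import torch.nn.functional as F

def smoothed_info_nce(z1, z2, temperature=0.2, smoothing=0.1):
    """InfoNCE with label smoothing over the similarity logits, which softens
    the target when positive/negative views are noisy. z1, z2: (N, D) paired
    embeddings of two augmented views; the diagonal entries are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets, label_smoothing=smoothing)
```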
Simulating Road Spray Effects in Automotive Lidar Sensor Models ; Modeling perception sensors is key for simulation-based testing of automated driving functions. Beyond weather conditions themselves, sensors are also subjected to object-dependent environmental influences like tire spray caused by vehicles moving on wet pavement. In this work, a novel modeling approach for spray in lidar data is introduced. The model conforms to the Open Simulation Interface (OSI) standard and is based on the formation of detection clusters within a spray plume. The detections are rendered with a simple custom ray casting algorithm without the need of a fluid dynamics simulation or physics engine. The model is subsequently used to generate training data for object detection algorithms. It is shown that the model helps to improve detection in real-world spray scenarios significantly. Furthermore, a systematic real-world data set is recorded and published for analysis, model calibration and validation of spray effects in active perception sensors. Experiments are conducted on a test track by driving over artificially watered pavement with varying vehicle speeds, vehicle types and levels of pavement wetness. All models and data of this work are available open source.
Kaleidoscopes of Hofstadter Butterflies and Aharonov-Bohm caging from 2^n-root topology in decorated square lattices ; Square-root topology describes models whose topological properties can be revealed upon squaring the Hamiltonian, which produces their respective parent topological insulators. This concept has recently been generalized to 2^n-root topology, characterizing models where n squaring operations must be applied to the Hamiltonian in order to arrive at the topological source of the model. In this paper, we analyze the Hofstadter regime of quasi-one-dimensional (quasi-1D) and two-dimensional (2D) 2^n-root models, the latter of which has the square lattice (SL), known for the Hofstadter Butterfly, as the source model. We show that upon increasing the root-degree of the model, there appear multiple magnetic flux insensitive flat bands, and we analytically determine the corresponding eigenstates. These can be recast as compact localized states (CLSs) occupying a finite region of the lattice. For a finite flux, these CLSs correspond to different harmonics contained within the same Aharonov-Bohm (AB) cage. Furthermore, as the root-degree increases, a kaleidoscope of butterflies is seen to appear in the Hofstadter diagram, with each butterfly constituting a topologically equivalent replica of the original one of the SL. As such, the index n, which uniquely identifies the root-degree of the model, can be seen as an additional fractal dimension of the 2^n-root model present in its Hofstadter diagram. We discuss how these dynamics could be realized in experiments with ultracold atoms, and measured by Bragg spectroscopy or through observing the dynamics of initially localized atoms in a quantum gas microscope.
DeepGlow: an efficient neural-network emulator of physical afterglow models for gamma-ray bursts and gravitational-wave events ; Gamma-ray bursts (GRBs) and double neutron-star merger gravitational wave events are followed by afterglows that shine from X-rays to radio, and these broadband transients are generally interpreted using analytical models. Such models are relatively fast to execute, and thus easily allow estimates of the energy and geometry parameters of the blast wave, through many trial-and-error model calculations. One problem, however, is that such analytical models do not capture the underlying physical processes as well as more realistic relativistic numerical hydrodynamic (RHD) simulations do. Ideally, those simulations are used for parameter estimation instead, but their computational cost makes this intractable. To this end, we present DeepGlow, a highly efficient neural network architecture trained to emulate a computationally costly RHD-based model of GRB afterglows, to within a few percent accuracy. As a first scientific application, we compare both the emulator and a different analytical model calibrated to RHD simulations, to estimate the parameters of a broadband GRB afterglow. We find consistent results between these two models, and also give further evidence for a stellar wind progenitor environment around this GRB source. DeepGlow fuses simulations that are otherwise too complex to execute over all parameters, to real broadband data of current and future GRB afterglows.
Modeling Time-Series and Spatial Data for Recommendations and Other Applications ; With the research directions described in this thesis, we seek to address the critical challenges in designing recommender systems that can understand the dynamics of continuous-time event sequences (CTES). We follow a ground-up approach, i.e., first, we address the problems that may arise due to the poor quality of CTES data being fed into a recommender system. Later, we handle the task of designing accurate recommender systems. To improve the quality of the CTES data, we address a fundamental problem of overcoming missing events in temporal sequences. Moreover, to provide accurate sequence modeling frameworks, we design solutions for points-of-interest recommendation, i.e., models that can handle spatial mobility data of users to various POI check-ins and recommend candidate locations for the next check-in. Lastly, we highlight that the capabilities of the proposed models can have applications beyond recommender systems, and we extend their abilities to design solutions for large-scale CTES retrieval and human activity prediction. A significant part of this thesis uses the idea of modeling the underlying distribution of CTES via neural marked temporal point processes (MTPP). Traditional MTPP models are stochastic processes that utilize a fixed formulation to capture the generative mechanism of a sequence of discrete events localized in continuous time. In contrast, neural MTPP combine the underlying ideas from the point process literature with modern deep learning architectures. The ability of deep-learning models as accurate function approximators has led to a significant gain in the predictive prowess of neural MTPP models. In this thesis, we utilize and present several neural network-based enhancements for the current MTPP frameworks for the aforementioned real-world applications.
Choosing statistical models to assess biological interaction as a departure from additivity of effects ; VanderWeele and Knol define biological interaction as an instance wherein two exposures physically interact to bring about the outcome. A hallmark of biological interaction is that the total effect produced when the factors act together differs from the sum of effects when the factors operate independently. Epidemiologists construct statistical models to assess biological interaction. The form of the statistical model determines whether it is suited to detecting departures from additivity of effects or to detecting departures from multiplicativity of effects. A consensus exists that biological interaction should be assessed as a departure from additivity of effects. This paper compares three statistical models' assessment of a data example that appears in several epidemiology textbooks to illustrate biological interaction in a binomial outcome. A linear binomial model quantifies departure from additivity in the data example in terms of differences in probabilities. It generates directly interpretable estimates and 95% confidence intervals for parameters including the interaction contrast (IC). Log-binomial and logistic regression models detect no departure from multiplicativity in the data example. However, their estimates contribute to the calculation of the Relative Excess Risk Due to Interaction (RERI), a measure of departure from additivity on a relative risk scale. The linear binomial model directly produces interpretable assessments of departures from additivity of effects and deserves wider use in research and in the teaching of epidemiology. Strategies exist to address the model's limitations.
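The two additive-scale measures mentioned above are short formulas, sketched below; the risks in the example are hypothetical numbers, not the textbook data analyzed in the paper.

```python
def interaction_contrast(p11, p10, p01, p00):
    """Departure from additivity on the risk-difference scale:
    IC = p11 - p10 - p01 + p00."""
    return p11 - p10 - p01 + p00

def reri(rr11, rr10, rr01):
    """Relative Excess Risk Due to Interaction, the additive-scale measure
    built from relative risks: RERI = RR11 - RR10 - RR01 + 1."""
    return rr11 - rr10 - rr01 + 1.0

# Hypothetical risks for the four exposure combinations (neither, B only, A only, both):
p00, p01, p10, p11 = 0.05, 0.10, 0.15, 0.30
print(interaction_contrast(p11, p10, p01, p00))      # 0.10 -> super-additive on the risk scale
print(reri(p11 / p00, p10 / p00, p01 / p00))         # 2.0  -> super-additive on the RR scale
```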
A Model for Gradual Phase Heating Driven by MHD Turbulence in Solar Flares ; Coronal flare emission is commonly observed to decay on timescales longer than those predicted by impulsively-driven, one-dimensional flare loop models. This discrepancy is most apparent during the gradual phase, where emission from these models decays over minutes, in contrast to the hour or more often observed. Magnetic reconnection is invoked as the energy source of a flare, but should deposit energy into a given loop within a matter of seconds. Models which supplement this impulsive energization with a long, persistent ad hoc heating have successfully reproduced long-duration emission, but without providing a clear physical justification. Here we propose a model for extended flare heating by the slow dissipation of turbulent Alfvén waves initiated during the retraction of newly-reconnected flux tubes through a current sheet. Using one-dimensional simulations, we track the production and evolution of MHD wave turbulence trapped by reflection from high-density gradients in the transition region. Turbulent energy dissipates through nonlinear interaction between counter-propagating waves, modeled here using a phenomenological one-point closure model. AIA EUV light curves synthesized from the simulation were able to reproduce emission decay on the order of tens of minutes. We find this simple model offers a possible mechanism for generating the extended heating demanded by observed coronal flare emissions self-consistently from reconnection-powered flare energy release.
Choosing observation operators to mitigate model error in Bayesian inverse problems ; In statistical inference, a discrepancy between the parameter-to-observable map that generates the data and the parameter-to-observable map that is used for inference can lead to misspecified likelihoods and thus to incorrect estimates. In many inverse problems, the parameter-to-observable map is the composition of a linear state-to-observable map, called the 'observation operator', and a possibly nonlinear parameter-to-state map, called the 'model'. We consider such Bayesian inverse problems where the discrepancy in the parameter-to-observable map is due to the use of an approximate model that differs from the best model, i.e. to nonzero 'model error'. Multiple approaches have been proposed to address such discrepancies, each leading to a specific posterior. We show how to use local Lipschitz stability estimates of posteriors with respect to likelihood perturbations to bound the Kullback-Leibler divergence of the posterior of each approach with respect to the posterior associated to the best model. Our bounds lead to criteria for choosing observation operators that mitigate the effect of model error for Bayesian inverse problems of this type. We illustrate the feasibility of one such criterion on an advection-diffusion-reaction PDE inverse problem, and use this example to discuss the importance and challenges of model error-aware inference.
Multifidelity surrogate modeling for temperature field prediction using deep convolution neural network ; Temperature field prediction is of great importance in the thermal design of systems engineering, and building the surrogate model is an effective way for the task. Generally, large amounts of labeled data are required to guarantee a good prediction performance of the surrogate model, especially the deep learning model, which has more parameters and better representational ability. However, labeled data, especially highfidelity labeled data, are usually expensive to obtain and sometimes even impossible. To solve this problem, this paper proposes a pithy deep multifidelity model DMFM for temperature field prediction, which takes advantage of lowfidelity data to boost the performance with less highfidelity data. First, a pretrain and finetune paradigm is developed in DMFM to train on the lowfidelity and highfidelity data, which significantly reduces the complexity of the deep surrogate model. Then, a selfsupervised learning method for training the physicsdriven deep multifidelity model PDDMFM is proposed, which fully utilizes the physics characteristics of the engineering systems and reduces the dependence on large amounts of labeled lowfidelity data in the training process. Two diverse temperature field prediction problems are constructed to validate the effectiveness of DMFM and PDDMFM, and the results show that the proposed method can greatly reduce the dependence of the model on highfidelity data.
Opportunities and Challenges in Neural Dialog Tutoring ; Designing dialog tutors has been challenging as it involves modeling the diverse and complex pedagogical strategies employed by human tutors. Although there have been significant recent advances in neural conversational systems using large language models LLMs and growth in available dialog corpora, dialog tutoring has largely remained unaffected by these advances. In this paper, we rigorously analyze various generative language models on two dialog tutoring datasets for language learning using automatic and human evaluations to understand the new opportunities brought by these advances as well as the challenges we must overcome to build models that would be usable in real educational settings. We find that although current approaches can model tutoring in constrained learning scenarios when the number of concepts to be taught and possible teacher strategies are small, they perform poorly in less constrained scenarios. Our human quality evaluation shows that both models and groundtruth annotations exhibit low performance in terms of equitable tutoring, which measures learning opportunities for students and how engaging the dialog is. To understand the behavior of our models in a real tutoring setting, we conduct a user study using expert annotators and find a significantly large number of model reasoning errors in 45% of conversations. Finally, we connect our findings to outline future work.
Oncology clinical trial design planning based on a multistate model that jointly models progressionfree and overall survival endpoints ; When planning an oncology clinical trial, the usual approach is to assume an exponential distribution for the timetoevent endpoints. Often, besides the goldstandard endpoint overall survival OS, progressionfree survival PFS is considered as a second confirmatory endpoint. We use a survival multistate model to jointly model these two endpoints and find that neither exponential distribution nor proportional hazards will typically hold for both endpoints simultaneously. The multistate model approach allows us to consider the joint distribution of the two endpoints and to derive quantities of interest such as the correlation between overall survival and progressionfree survival. In this paper, we use the multistate model framework to simulate clinical trials with endpoints OS and PFS and show how design planning questions can be answered using this approach. In addition to the major advantage that we can model nonproportional hazards quite naturally with this approach, the correlation between the two endpoints can be exploited to determine sample size and type I error. We consider an oncology trial on nonsmallcell lung cancer as a motivating example from which we derive relevant trial design questions. We then illustrate how clinical trial design can be based on simulations from a multistate model. Key applications are coprimary endpoints and groupsequential designs. Simulations for these applications show that the standard simplifying approach often leads to underpowered or overpowered clinical trials. Our approach is quite general and can be extended to more complex trial designs, further endpoints, and other therapeutic areas.
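A minimal simulation of the joint PFS/OS behaviour can be built from an illness-death multistate model with constant transition hazards; the hazard values below are arbitrary illustrations (per month, say), not the parametrisation used in the paper.

```python
import numpy as np

def simulate_illness_death(n, h01, h02, h12, rng):
    """Simulate an illness-death multistate model with constant transition hazards:
    state 0 (on study) -> 1 (progression) with hazard h01, 0 -> 2 (death) with h02,
    and 1 -> 2 with h12. Returns PFS (time to progression or death) and OS (time to death)."""
    t_prog  = rng.exponential(1.0 / h01, n)   # latent time to progression
    t_death = rng.exponential(1.0 / h02, n)   # latent time to death without progression
    pfs = np.minimum(t_prog, t_death)
    progressed = t_prog < t_death
    # after progression, residual survival has hazard h12
    os_time = np.where(progressed, t_prog + rng.exponential(1.0 / h12, n), t_death)
    return pfs, os_time

rng = np.random.default_rng(1)
pfs, os_time = simulate_illness_death(100_000, h01=0.08, h02=0.02, h12=0.10, rng=rng)
print("P(PFS == OS):", np.mean(pfs == os_time))
print("corr(PFS, OS):", np.corrcoef(pfs, os_time)[0, 1])
```

Because death can only occur at or after the PFS event, the two endpoints are positively correlated by construction, which is precisely the dependence that trial simulations of this kind can exploit when sizing a study with both endpoints.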
Reconstruction of threedimensional turbulent flow structures using surface measurements for freesurface flows based on a convolutional neural network ; A model based on a convolutional neural network CNN is designed to reconstruct the threedimensional turbulent flows beneath a free surface using surface measurements, including the surface elevation and surface velocity. Trained on datasets obtained from the direct numerical simulation DNS of turbulent openchannel flows with a deformable free surface, the proposed model can accurately reconstruct the nearsurface flow field and capture the characteristic largescale flow structures away from the surface. The reconstruction performance of the model, measured by metrics such as the normalised mean squared reconstruction errors and scalespecific errors, is considerably better than that of the traditional linear stochastic estimation LSE method. We further analyse the saliency maps of the CNN model and the kernels of the LSE model and obtain insights into how the two models utilise surface features to reconstruct subsurface flows. The importance of different surface variables is analysed based on the saliency map of the CNN, which reveals knowledge about the surfacesubsurface relations. The CNN is also shown to have a good generalization capability with respect to the Froude number if a model trained for a flow with a high Froude number is applied to predict flows with lower Froude numbers. The results presented in this work indicate that the CNN is effective regarding the detection of subsurface flow structures and by interpreting the surfacesubsurface relations underlying the reconstruction model, the CNN can be a promising tool for assisting with the physical understanding of freesurface turbulence.
Improved analytical modeling of the nonlinear power spectrum in modified gravity cosmologies ; Reliable analytical modeling of the nonlinear power spectrum PS of matter perturbations is among the chief prerequisites for cosmological analyses from the largest sky surveys. This is especially true for the models that extend the standard generalrelativity paradigm by adding the fifth force, where numerical simulations can be prohibitively expensive. Here we present a method for building accurate PS models for two modified gravity MG variants namely the Hu-Sawicki $f(R)$, and the normal branch of the Dvali-Gabadadze-Porrati nDGP braneworld. We start by modifying the standard halo model HM with respect to the baseline Lambda-Cold-Dark-Matter $\Lambda$CDM scenario, by using the HM components with specific MG extensions. We find that our $P(k)_{\text{HM}}$ retains 5% accuracy only up to mildly nonlinear scales $k \lesssim 0.3\,h\,\mathrm{Mpc}^{-1}$ when compared to PS from numerical simulations. At the same time, our HM prescription much more accurately captures the ratio $\Upsilon(k) = P(k)_{\text{MG}}/P(k)_{\Lambda\text{CDM}}$ up to nonlinear scales. We show that using the HM-derived $\Upsilon(k)$ together with a viable nonlinear $\Lambda$CDM $P(k)$ prescription such as HALOFIT, we obtain much better and more accurate PS predictions in MG. The new approach yields considerably improved performance, with the modeled $P(k)_{\text{MG}}$ now being accurate to within 5% all the way to nonlinear scales of $k \lesssim 2.5$-$3\,h\,\mathrm{Mpc}^{-1}$. The magnitude of deviations from GR as fostered by these MG models is typically $\mathcal{O}(10)\%$ in these regimes. Therefore reaching 5% PS modeling accuracy is enough for forecasting constraints on modernera cosmological observables.
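In notation reconstructed from the abstract, the prescription combines the halo-model ratio with a nonlinear $\Lambda$CDM baseline; a schematic summary, assuming this reading of the garbled source, is:

```latex
% Schematic of the ratio-based prescription described above (notation reconstructed
% from the abstract; the halo-model ingredients themselves are not reproduced here).
\begin{align}
  \Upsilon(k) &= \frac{P^{\rm HM}_{\rm MG}(k)}{P^{\rm HM}_{\Lambda{\rm CDM}}(k)}, \\
  P_{\rm MG}(k) &\simeq \Upsilon(k)\, P^{\rm HALOFIT}_{\Lambda{\rm CDM}}(k).
\end{align}
```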
Efficient Scopeformer Towards Scalable and Rich Feature Extraction for Intracranial Hemorrhage Detection ; The quality and richness of feature maps extracted by convolution neural networks CNNs and vision Transformers ViTs directly relate to the robust model performance. In medical computer vision, these informationrich features are crucial for detecting rare cases within large datasets. This work presents the Scopeformer, a novel multiCNNViT model for intracranial hemorrhage classification in computed tomography CT images. The Scopeformer architecture is scalable and modular, which allows utilizing various CNN architectures as the backbone with diversified output features and pretraining strategies. We propose effective feature projection methods to reduce redundancies among CNNgenerated features and to control the input size of ViTs. Extensive experiments with various Scopeformer models show that the model performance is proportional to the number of convolutional blocks employed in the feature extractor. Using multiple strategies, including diversifying the pretraining paradigms for CNNs, different pretraining datasets, and style transfer techniques, we demonstrate an overall improvement in the model performance at various computational budgets. Later, we propose smaller computeefficient Scopeformer versions with three different types of input and output ViT configurations. Efficient Scopeformers use four different pretrained CNN architectures as feature extractors to increase feature richness. Our best Efficient Scopeformer model achieved an accuracy of 96.94% and a weighted logarithmic loss of 0.083 with an eight times reduction in the number of trainable parameters compared to the base Scopeformer. Another version of the Efficient Scopeformer model further reduced the parameter space by almost 17 times with negligible performance reduction. Hybrid CNNs and ViTs might provide the desired feature richness for developing accurate medical computer vision models.
GRANDE a neural model over directed multigraphs with application to antimoney laundering ; The application of graph representation learning techniques to the area of financial risk management FRM has attracted significant attention recently. However, directly modeling transaction networks using graph neural models remains challenging. Firstly, transaction networks are directed multigraphs by nature, which cannot be properly handled with most of the current offtheshelf graph neural networks GNN. Secondly, a crucial problem in FRM scenarios like antimoney laundering AML is to identify risky transactions and is most naturally cast into an edge classification problem with rich edgelevel features, which are not fully exploited by the prevailing GNN design that follows nodecentric message passing protocols. In this paper, we present a systematic investigation of design aspects of neural models over directed multigraphs and develop a novel GNN protocol that overcomes the above challenges by efficiently incorporating directional information, as well as by proposing an enhancement that targets edgerelated tasks using a novel message passing scheme over an extension of the edgetonode dual graph. A concrete GNN architecture called GRANDE is derived using the proposed protocol, with several further improvements and generalizations to temporal dynamic graphs. We apply the GRANDE model to both a realworld antimoney laundering task and public datasets. Experimental evaluations show the superiority of the proposed GRANDE architecture over recent stateoftheart models on dynamic graph modeling and directed graph modeling.
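The edge-to-node dual graph idea can be sketched generically: every directed edge of the original multigraph becomes a node of the dual graph, and two dual nodes are linked when the head of one edge is the tail of the other, so edge classification turns into node classification. The construction below is a generic directed line-graph sketch and is not claimed to match GRANDE's exact dual-graph definition.

```python
from collections import defaultdict

def edge_to_node_dual(edges):
    """Generic directed line-graph construction: each original directed edge becomes a
    dual node (parallel edges stay distinct via their index), and dual node i -> j exists
    when edge j leaves the head node of edge i, so direction is respected."""
    out_by_src = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        out_by_src[u].append(idx)
    dual_edges = []
    for idx, (u, v) in enumerate(edges):
        for nxt in out_by_src[v]:          # edges leaving the head node v
            if nxt != idx:
                dual_edges.append((idx, nxt))
    return dual_edges

# toy transaction multigraph: repeated (sender, receiver) pairs are allowed
edges = [("a", "b"), ("a", "b"), ("b", "c"), ("c", "a")]
print(edge_to_node_dual(edges))   # [(0, 2), (1, 2), (2, 3), (3, 0), (3, 1)]
```

Running any node-centric message passing scheme on this dual graph lets edge-level features (here, one per original transaction) participate directly in the aggregation, which is the motivation stated in the abstract.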
NearField Terahertz Communications ModelBased and ModelFree Channel Estimation ; Terahertz THz band is expected to be one of the key enabling technologies of the sixth generation 6G wireless networks because of its abundant available bandwidth and very narrow beam width. Due to high frequency operations, electrically small array apertures are employed, and the signal wavefront becomes spherical in the nearfield. Therefore, the nearfield signal model should be considered for channel acquisition in THz systems. Unlike prior works which mostly ignore the impact of nearfield beamsplit NB and consider either the narrowband scenario or farfield models, this paper introduces both a modelbased and a modelfree technique for wideband THz channel estimation in the presence of NB. The modelbased approach is based on the orthogonal matching pursuit OMP algorithm, for which we design an NBaware dictionary. The key idea is to exploit the angular and range deviations due to the NB. We then employ the OMP algorithm, which accounts for the deviations, thereby ipso facto mitigating the effect of NB. We further introduce a federated learning FLbased approach as a modelfree solution for channel estimation in a multiuser scenario to achieve reduced complexity and training overhead. Through numerical simulations, we demonstrate the effectiveness of the proposed channel estimation techniques for wideband THz systems in comparison with the existing stateoftheart techniques.
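For readers unfamiliar with the model-based ingredient, a generic orthogonal matching pursuit routine is sketched below; the NB-aware dictionary design is the paper's contribution and is not reproduced here, so the dictionary A is just a random placeholder.

```python
import numpy as np

def omp(A, y, sparsity):
    """Generic orthogonal matching pursuit: greedily pick the dictionary column of A
    most correlated with the residual, then re-fit least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        corr = np.abs(A.conj().T @ residual)
        support.append(int(np.argmax(corr)))
        A_s = A[:, support]
        x_s, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        residual = y - A_s @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))  # placeholder dictionary
x_true = np.zeros(256, dtype=complex)
x_true[[10, 70, 200]] = [1 + 1j, -2, 0.5j]                                # sparse channel gains
y = A @ x_true + 0.01 * rng.standard_normal(64)
print(np.flatnonzero(np.round(np.abs(omp(A, y, 3)), 2)))   # typically recovers {10, 70, 200}
```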
TextDefense Adversarial Text Detection based on Word Importance Entropy ; Currently, natural language processing NLP models are widely used in various scenarios. However, NLP models, like all deep models, are vulnerable to adversarially generated text. Numerous works have focused on mitigating this vulnerability to adversarial attacks. Nevertheless, existing works offer no comprehensive defense: each targets a specific attack category or suffers from limitations such as computation overhead or susceptibility to adaptive attacks. In this paper, we exhaustively investigate the adversarial attack algorithms in NLP, and our empirical studies have discovered that the attack algorithms mainly disrupt the importance distribution of words in a text. A welltrained model can distinguish subtle importance distribution differences between clean and adversarial texts. Based on this intuition, we propose TextDefense, a new adversarial example detection framework that utilizes the target model's capability to defend against adversarial attacks while requiring no prior knowledge. TextDefense differs from previous approaches in that it utilizes the target model for detection and thus is attack type agnostic. Our extensive experiments show that TextDefense can be applied to different architectures, datasets, and attack methods and outperforms existing methods. We also discover that the leading factor influencing the performance of TextDefense is the target model's generalizability. By analyzing the property of the target model and the property of the adversarial example, we provide our insights into the adversarial attacks in NLP and the principles of our defense method.
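The notion of a word-importance distribution can be made concrete with a small sketch: score each word by the confidence drop the target model suffers when that word is removed, normalise the scores, and take the entropy of the resulting distribution. This is a simplified stand-in for illustration; the exact importance measure and decision rule used by TextDefense may differ.

```python
import math

def importance_entropy(words, prob_fn):
    """Entropy of the word-importance distribution for one input text.
    prob_fn(list_of_words) -> target model's confidence in its predicted class.
    Importance of a word = confidence drop when that word is removed (floored near 0).
    Simplified stand-in for the paper's scoring, for illustration only."""
    base = prob_fn(words)
    drops = []
    for i in range(len(words)):
        masked = words[:i] + words[i + 1:]
        drops.append(max(base - prob_fn(masked), 1e-12))
    total = sum(drops)
    probs = [d / total for d in drops]
    return -sum(p * math.log(p) for p in probs)

# toy stand-in model: "confidence" depends on a few keyword weights (hypothetical)
weights = {"great": 0.4, "movie": 0.1, "boring": 0.3}
toy_prob = lambda ws: min(0.99, 0.5 + sum(weights.get(w, 0.0) for w in ws) / 2)
print(importance_entropy("a great movie".split(), toy_prob))
```

A detector along these lines would compare such entropy (or related importance statistics) between inputs, flagging texts whose importance distribution looks unusually disrupted.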
Models to support forest inventory and small area estimation using sparsely sampled LiDAR A case study involving GLiHT LiDAR in Tanana, Alaska ; A twostage hierarchical Bayesian model is proposed to estimate forest biomass density and total given sparsely sampled LiDAR and georeferenced forest inventory plot measurements. The model is motivated by the United States Department of Agriculture USDA Forest Service Forest Inventory and Analysis FIA objective to provide biomass estimates for the remote Tanana Inventory Unit TIU in interior Alaska. The proposed model yields stratumlevel biomass estimates for arbitrarily sized areas. Modelbased estimates are compared with the TIU FIA designbased poststratified estimates. Modelbased small area estimates SAEs for two experimental forests within the TIU are compared with each forest's designbased estimates generated using a dense network of independent inventory plots. Model parameter estimates and biomass predictions are informed using FIA plot measurements, LiDAR data that is spatially aligned with a subset of the FIA plots, and complete coverage remotely sensed data used to define landuselandcover stratum and percent forest canopy cover. Results support a modelbased approach to estimating forest variables when inventory data are sparse or resources limit collection of enough data to achieve desired accuracy and precision using designbased methods.
Gravitational waves from ${\rm SU}(N)/{\rm Sp}(N)$ composite Higgs models ; We study possible strong firstorder electroweak phase transitions in Composite Higgs models and we quantify the part of parameter space that can be probed with future gravitational wave experiments. We focus on models where the Composite Higgs sector arises from underlying fourdimensional strongly interacting gauge theories with fermions, and where the Standard Model fermion masses are induced via linear mixing terms with composite fermions the socalled fermion partial compositeness framework. We perform our analysis for the general class of Composite Higgs models arising from $N$ Weyl fermions in a pseudoreal representation of the new strongly interacting gauge group that dynamically triggers the global chiral symmetry breaking pattern ${\rm SU}(N) \rightarrow {\rm Sp}(N)$. The minimal model has $N=4$ and for $N>4$ the models feature complex scalar dark matter candidates arising as pseudoNambuGoldstone bosons. We find a large number of points in the models' parameter space which yield strong firstorder electroweak phase transitions and identify the most important operators characterizing the strength of the phase transition. Almost all of these points are testable with future GW detectors such as LISA, Taiji, Tianqin, BBO, DECIGO and UltimateDECIGO.
Angular correlations on causallycoherent inflationary horizons ; We develop a model for correlations of cosmic microwave background anisotropy on the largest angular scales, based on standard causal geometrical relationships in slowroll inflation. Unlike standard models based on quantized field modes, it describes perturbations with nonlocal directional coherence on spherical boundaries of causal diamonds. Causal constraints reduce the number of independent degrees of freedom, impose new angular symmetries, and eliminate cosmic variance for purely angular 2point correlations. Distortions of causal structure from vacuum fluctuations are modeled as gravitational memory from randomly oriented outgoing and incoming gravitational null shocks, with nonlocally coherent directional displacements on curved surfaces of causal diamonds formed by standard inflationary horizons. The angular distribution is determined by axially symmetric shock displacements on circular intersections of the comoving sphere that represents the CMB photosphere with other inflationary horizons. Displacements on thin spheres at the end of inflation have a unique angular power spectrum $C_\ell$ that approximates the standard expectation on small angular scales, but differs substantially at large angular scales due to horizon curvature. For a thin sphere, the model predicts a universal angular correlation function $C(\Theta)$ with an exact "causal shadow" symmetry, $C(\pi/4 < \Theta < 3\pi/4) = 0$, and significant largeangle parity violation. We apply a rank statistic to compare models with WMAP and Planck satellite data, and find that a causallycoherent model with no shape parameters or cosmic variance agrees with the measured $C(\Theta)$ better than a large fraction 0.9999 of standard model realizations. Modelindependent tests of holographic causal symmetries are proposed.
Clientspecific Property Inference against Secure Aggregation in Federated Learning ; Federated learning has become a widely used paradigm for collaboratively training a common model among different participants with the help of a central server that coordinates the training. Although only the model parameters or other model updates are exchanged during the federated training instead of the participant's data, many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data. Although differential privacy is considered an effective solution to protect against privacy attacks, it is also criticized for its negative effect on utility. Another possible defense is to use secure aggregation which allows the server to only access the aggregated update instead of each individual one, and it is often more appealing because it does not degrade model quality. However, combining only the aggregated updates, which are generated by a different composition of clients in every round, may still allow the inference of some clientspecific information. In this paper, we show that simple linear models can effectively capture clientspecific properties only from the aggregated model updates due to the linearity of aggregation. We formulate an optimization problem across different rounds in order to infer a tested property of every client from the output of the linear models, for example, whether they have a specific sample in their training data membership inference or whether they misbehave and attempt to degrade the performance of the common model by poisoning attacks. Our reconstruction technique is completely passive and undetectable. We demonstrate the efficacy of our approach on several scenarios which shows that secure aggregation provides very limited privacy guarantees in practice. The source code will be released upon publication.
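The linearity argument can be illustrated with a small synthetic sketch: a linear model maps each round's aggregated update to an estimate of how many property-holding clients contributed, and a least-squares step across rounds, using the known participation pattern, disentangles per-client scores. The data, the supervision signal, and the two-step decomposition below are all invented for illustration and are much simpler than the attack actually proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
rounds, dim, clients = 300, 40, 12

# Synthetic stand-in data; the attacker only ever observes the per-round aggregates.
prop = rng.integers(0, 2, size=clients)                        # hidden per-client property
P = rng.integers(0, 2, size=(rounds, clients)).astype(float)   # participation matrix (assumed known)
signal = rng.normal(0, 1, dim)                                 # update direction correlated with property
agg = np.zeros((rounds, dim))
for t in range(rounds):
    for c in np.flatnonzero(P[t]):
        agg[t] += rng.normal(0, 0.3, dim) + prop[c] * 0.5 * signal   # unobserved client update

# Step 1: linear model mapping an aggregate to the number of property holders in it.
# (Here the true counts supervise the fit; a real attacker would use shadow/auxiliary data.)
counts = P @ prop
w, *_ = np.linalg.lstsq(agg, counts, rcond=None)

# Step 2: least squares across rounds against the participation pattern recovers
# per-client scores; thresholding yields the inferred per-client property.
scores, *_ = np.linalg.lstsq(P, agg @ w, rcond=None)
print("true    :", prop)
print("inferred:", (scores > 0.5).astype(int))
```

Even in this crude form, the per-client scores tend to separate the two groups, which is the core point of the abstract: aggregation is linear, so per-client signal is not destroyed by it.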
BOSS Bones, Organs and Skin Shape Model ; Objective: A digital twin of a patient can be a valuable tool for enhancing clinical tasks such as workflow automation, patientspecific Xray dose optimization, markerless tracking, positioning, and navigation assistance in imageguided interventions. However, it is crucial that the patient's surface and internal organs are of high quality for any pose and shape estimates. At present, the majority of statistical shape models SSMs are restricted to a small number of organs or bones or do not adequately represent the general population. Method: To address this, we propose a deformable human shape and pose model that combines skin, internal organs, and bones, learned from CT images. By modeling the statistical variations in a posenormalized space using probabilistic PCA while also preserving joint kinematics, our approach offers a holistic representation of the body that can benefit various medical applications. Results: We assessed our model's performance on a registered dataset, utilizing the unified shape space, and noted an average error of 3.6 mm for bones and 8.8 mm for organs. To further verify our findings, we conducted additional tests on publicly available datasets with multipart segmentations, which confirmed the effectiveness of our model. Conclusion: This work shows that anatomically parameterized statistical shape models can be created accurately and in a computationally efficient manner. Significance: The proposed approach enables the construction of shape models that can be directly applied to various medical applications, including biomechanics and reconstruction.
Tag2Text Guiding VisionLanguage Model via Image Tagging ; This paper presents Tag2Text, a vision language pretraining VLP framework, which introduces image tagging into visionlanguage models to guide the learning of visuallinguistic features. In contrast to prior works which utilize object tags either manually labeled or automatically detected with an offtheshelf detector with limited performance, our approach explicitly learns an image tagger using tags parsed from imagepaired text and thus provides a strong semantic guidance to visionlanguage models. In this way, Tag2Text can utilize largescale annotationfree image tags in accordance with imagetext pairs, and provides more diverse tag categories beyond objects. As a result, Tag2Text demonstrates the ability of a foundational image tagging model, with superior zeroshot performance even comparable to fully supervised models. Moreover, by leveraging the tagging guidance, Tag2Text effectively enhances the performance of visionlanguage models on both generationbased and alignmentbased tasks. Across a wide range of downstream benchmarks, Tag2Text achieves stateoftheart results with similar model sizes and data scales, demonstrating the efficacy of the proposed tagging guidance. Code, demo and pretrained models are available at https://github.com/xinyu1205/recognize-anything.
Boosting Adversarial Attacks by Leveraging Decision Boundary Information ; Due to the gap between a substitute model and a victim model, the gradientbased noise generated from a substitute model may have low transferability for a victim model since their gradients are different. Inspired by the fact that the decision boundaries of different models do not differ much, we conduct experiments and discover that the gradients of different models are more similar on the decision boundary than in the original position. Moreover, since the decision boundary in the vicinity of an input image is flat along most directions, we conjecture that the boundary gradients can help find an effective direction to cross the decision boundary of the victim models. Based on it, we propose a Boundary Fitting Attack to improve transferability. Specifically, we introduce a method to obtain a set of boundary points and leverage the gradient information of these points to update the adversarial examples. Notably, our method can be combined with existing gradientbased methods. Extensive experiments prove the effectiveness of our method, i.e., improving the success rate by 5.6% against normally trained CNNs and 14.9% against defense CNNs on average compared to stateoftheart transferbased attacks. Further, we compare transformers with CNNs; the results indicate that transformers are more robust than CNNs. However, our method still outperforms existing methods when attacking transformers. Specifically, when using CNNs as substitute models, our method obtains an average attack success rate of 58.2%, which is 10.8% higher than other stateoftheart transferbased attacks.
Adaptive Modeling of Uncertainties for Traffic Forecasting ; Deep neural networks DNNs have emerged as a dominant approach for developing traffic forecasting models. These models are typically trained to minimize error on averaged test cases and produce a singlepoint prediction, such as a scalar value for traffic speed or travel time. However, singlepoint predictions fail to account for prediction uncertainty that is critical for many transportation management scenarios, such as determining the best or worstcase arrival time. We present QuanTraffic, a generic framework to enhance the capability of an arbitrary DNN model for uncertainty modeling. QuanTraffic requires little human involvement and does not change the base DNN architecture during deployment. Instead, it automatically learns a standard quantile function during the DNN model training to produce a prediction interval for the singlepoint prediction. The prediction interval defines a range where the true value of the traffic prediction is likely to fall. Furthermore, QuanTraffic develops an adaptive scheme that dynamically adjusts the prediction interval based on the location and prediction window of the test input. We evaluated QuanTraffic by applying it to five representative DNN models for traffic forecasting across seven public datasets. We then compared QuanTraffic against five uncertainty quantification methods. Compared to the baseline uncertainty modeling techniques, QuanTraffic with base DNN architectures delivers consistently better and more robust performance than the existing ones on the reported datasets.
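The prediction-interval idea rests on quantile estimation, which the pinball (quantile) loss makes concrete; the sketch below fits two constant quantiles to synthetic speeds purely to illustrate the loss, and is not QuanTraffic's learned quantile function or adaptive scheme.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: minimising it makes y_pred an estimate of the tau-quantile."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Toy illustration: choose the constant prediction that minimises the pinball loss
# for tau = 0.1 and tau = 0.9; together the two quantiles form an 80% prediction interval.
rng = np.random.default_rng(0)
speeds = rng.normal(60, 8, size=5000)              # hypothetical traffic speeds
grid = np.linspace(30, 90, 601)
lo = grid[np.argmin([pinball_loss(speeds, g, 0.1) for g in grid])]
hi = grid[np.argmin([pinball_loss(speeds, g, 0.9) for g in grid])]
print(f"80% prediction interval: [{lo:.1f}, {hi:.1f}]")   # roughly [49.7, 70.3] here
```

In a DNN setting, the same loss is simply attached to extra quantile outputs of the base model, so the single-point forecast gains a calibrated range around it.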
An electrochemicalelectrothermal coupled computational framework to simulate the performance of Liion batteries at celllevel: Analysis on the thermal effects ; Accurately predicting the performance of Liion batteries is of great importance for the global electric vehicle and energy storage industries. In this research, we propose a computational framework that integrates the electrochemical DFN model, ECM parametrisation, and the 3D distributed ECN model to simulate the performance of Liion cells. Using a Kokam 7.5 Ah pouch cell Model SLPB75106100 as an example, we demonstrate the threestep workflow of the framework that consists of the characterisation data acquisition, parametrisation with BatPar, and 3D ECNsimulation with PyECN. With this framework, we simulate constant current discharge experiments in the literature and compare the simulations with the DFN model coupled with a classical lumped thermal model. With a better consideration of the thermal process and its coupling effects with electrochemistry, the computational model outperforms the DFN model, especially in lowtemperature and/or high Crate scenarios. The largest predicting error of the framework at 3 Crate Tam 25oC and at 1 Crate Tam 0 oC is approximately 13 of that for the DFN model. At 3 Crate Tam 5oC, the difference between these two can rise to 377 mV. Further analysis reveals that the lumped DFN thermal model is unsuitable to simulate the performance of Liion batteries at a scale larger than cell level, due to significant internal heat generation and large Biot number. By integrating DFN and 3Ddistributed ECN together, this proposed computational framework is electrochemicalelectrothermal coupled and can be used as a toolset by cell manufacturers and pack designers to predict, analyse, and optimise the performance of Libased energy storage systems.
CarbonOxygen Phase Separation in MESA White Dwarf Models ; We enhance the treatment of crystallization for models of white dwarfs WDs in the stellar evolution software MESA by implementing carbonoxygen CO phase separation. The phase separation process during crystallization leads to transport of oxygen toward the center of WDs, resulting in a more compact structure that liberates gravitational energy as additional heating that modestly slows WD cooling timescales. We quantify this cooling delay in MESA CO WD models over the mass range 0.5-1.0 $M_\odot$, finding delays of 0.5-0.8 Gyr for typical CO interior profiles. MESA WD cooling timescales including this effect are generally comparable to other WD evolution models that make similar assumptions about input physics. When considering phase separation alongside $^{22}$Ne sedimentation, however, we find that both MESA and BaSTI WD cooling models predict a more modest sedimentation delay than the latest LPCODE models, and this may therefore require a reevaluation of previously proposed solutions to some WD cooling anomalies that were based on LPCODE models of $^{22}$Ne sedimentation. Our implementation of CO phase separation in the opensource stellar evolution software MESA provides an important tool for building realistic grids of WD cooling models, as well as a framework for expanding on our implementation to explore additional physical processes related to phase transitions and associated fluid motions in WD interiors.
A generalpurpose AI assistant embedded in an opensource radiology information system ; Radiology AI models have made significant progress, achieving nearhuman performance or surpassing it. However, the AI model's partnership with the human radiologist remains an unexplored challenge due to the lack of health information standards, contextual and workflow differences, and data labeling variations. To overcome these challenges, we integrated an AI model service that uses DICOM standard SR annotations into the OHIF viewer in the opensource LibreHealth Radiology Information Systems RIS. In this paper, we describe the novel HumanAI partnership capabilities of the platform, including fewshot learning and swarm learning approaches to retrain the AI models continuously. Building on the concept of machine teaching, we developed an active learning strategy within the RIS, so that the human radiologist can enable or disable AI annotations as well as fix and relabel the AI annotations. These annotations are then used to retrain the models. This helps establish a partnership between the radiologist user and a userspecific AI model. The weights of these userspecific models are then shared among multiple models in a swarm learning approach.
CurvatureBalanced Feature Manifold Learning for LongTailed Classification ; To address the challenges of longtailed classification, researchers have proposed several approaches to reduce model bias, most of which assume that classes with few samples are weak classes. However, recent studies have shown that tail classes are not always hard to learn, and model bias has been observed on samplebalanced datasets, suggesting the existence of other factors that affect model bias. In this work, we systematically propose a series of geometric measurements for perceptual manifolds in deep neural networks, and then explore the effect of the geometric characteristics of perceptual manifolds on classification difficulty and how learning shapes the geometric characteristics of perceptual manifolds. An unanticipated finding is that the correlation between the class accuracy and the separation degree of perceptual manifolds gradually decreases during training, while the negative correlation with the curvature gradually increases, implying that curvature imbalance leads to model bias. Therefore, we propose curvature regularization to facilitate the model to learn curvaturebalanced and flatter perceptual manifolds. Evaluations on multiple longtailed and nonlongtailed datasets show the excellent performance and exciting generality of our approach, especially in achieving significant performance improvements based on current stateoftheart techniques. Our work opens up a geometric analysis perspective on model bias and reminds researchers to pay attention to model bias on nonlongtailed and even samplebalanced datasets. The code and model will be made public.
A spatial measurevalued model for radiationinduced DNA damage kinetics and repair under protracted irradiation condition ; In the present work, we develop a general spatial stochastic model to describe the formation and repair of radiationinduced DNA damage. The model is described mathematically as a measurevalued particlebased stochastic system and extends in several directions the model developed in Cordoni et al. 2021, Cordoni et al. 2022a, Cordoni et al. 2022b. In this new spatial formulation, radiationinduced DNA damage in the cell nucleus can undergo different pathways to either repair or lead to cell inactivation. The main novelty of the work is to rigorously define a spatial model that considers the pairwise interaction of lesions and continuous protracted irradiation. The former is relevant from a biological point of view, as clustered lesions are less likely to be repaired, leading thus to cell inactivation. The latter instead describes the effects of a continuous radiation field on biological tissue. We prove the existence and uniqueness of a solution to the above stochastic systems, characterizing its probabilistic properties. We further couple the model describing the biological system to a set of reactiondiffusion equations with random discontinuity that model the chemical environment. Finally, we study the large system limit of the process. The developed model can be applied to different contexts, with radiotherapy and space radioprotection being the most relevant. Further, the biochemical system derived can play a crucial role in understanding an extremely promising novel radiotherapy treatment modality, known in the community as FLASH radiotherapy, whose mechanism is today largely unknown.
Gazeformer Scalable, Effective and Fast Prediction of GoalDirected Human Attention ; Predicting human gaze is important in HumanComputer Interaction HCI. However, to practically serve HCI applications, gaze prediction models must be scalable, fast, and accurate in their spatial and temporal gaze predictions. Recent scanpath prediction models focus on goaldirected attention search. Such models are limited in their application because they commonly rely on trained target detectors for all possible objects and on the availability of human gaze data for their training, neither of which is scalable. In response, we pose a new task called ZeroGaze, a variant of zeroshot learning where gaze is predicted for neverbeforesearched objects, and we develop a novel model, Gazeformer, to solve the ZeroGaze problem. In contrast to existing methods using object detector modules, Gazeformer encodes the target using a natural language model, thus leveraging semantic similarities in scanpath prediction. We use a transformerbased encoderdecoder architecture because transformers are particularly useful for generating contextual representations. Gazeformer surpasses other models by a large margin on the ZeroGaze setting. It also outperforms existing targetdetection models on standard gaze prediction for both targetpresent and targetabsent search tasks. In addition to its improved performance, Gazeformer is more than five times faster than the stateoftheart targetpresent visual search model.
Optimal MessagePassing with Noisy Beeps ; Beeping models are models for networks of weak devices, such as sensor networks or biological networks. In these networks, nodes are allowed to communicate only via emitting beeps, unary pulses of energy. Listening nodes have only the capability of carrier sensing: they can only distinguish between the presence or absence of a beep, but receive no other information. The noisy beeping model further assumes listening nodes may be disrupted by random noise. Despite this extremely restrictive communication model, it transpires that complex distributed tasks can still be performed by such networks. In this paper we provide an optimal procedure for simulating general message passing in the beeping and noisy beeping models. We show that a round of Broadcast CONGEST can be simulated in $O(\Delta\log n)$ rounds of the noisy or noiseless beeping model, and a round of CONGEST can be simulated in $O(\Delta^2\log n)$ rounds, where $\Delta$ is the maximum degree of the network. We also prove lower bounds demonstrating that no simulation can use asymptotically fewer rounds. This allows a host of graph algorithms to be efficiently implemented in beeping models. As an example, we present an $O(\log n)$-round Broadcast CONGEST algorithm for maximal matching, which, when simulated using our method, immediately implies a nearoptimal $O(\Delta\log^2 n)$-round maximal matching algorithm in the noisy beeping model.
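The role of noise can be illustrated with the standard repetition-and-majority amplification trick: to make one bit survive a channel that flips each listening slot with constant probability, repeat it over Θ(log n) slots. This is only the textbook amplification idea, not the paper's (more refined) optimal simulation.

```python
import random, math

def noisy_listen(beep_sent, eps):
    """One slot of the noisy beeping channel: the listener's reading flips w.p. eps."""
    heard = beep_sent
    if random.random() < eps:
        heard = not heard
    return heard

def decode_bit(beep_sent, n, eps=0.2, c=6):
    """Amplify one bit over Theta(log n) slots and decide by majority vote.
    Standard repetition/majority argument only; the paper's simulation of full
    CONGEST messages is considerably more involved."""
    slots = c * max(1, math.ceil(math.log2(n)))
    votes = sum(noisy_listen(beep_sent, eps) for _ in range(slots))
    return votes > slots / 2

random.seed(0)
n, trials = 1024, 10_000
errors = sum(decode_bit(True, n) is not True for _ in range(trials))
print(f"empirical error rate over {trials} trials: {errors / trials:.4f}")
```

By a Chernoff bound, choosing the constant c large enough drives the per-bit failure probability below 1/poly(n), which is what lets whole messages survive union bounds over the network.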
EMShepherd Detecting Adversarial Samples via Sidechannel Leakage ; Deep Neural Networks DNN are vulnerable to adversarial perturbations, small changes crafted deliberately on the input to mislead the model into wrong predictions. Adversarial attacks have disastrous consequences for deep learningempowered critical applications. Existing defense and detection techniques both require extensive knowledge of the model, testing inputs, and even execution details. They are not viable for general deep learning implementations where the model internals are unknown, a common 'blackbox' scenario for model users. Inspired by the fact that electromagnetic EM emanations of a model inference are dependent on both operations and data and may contain footprints of different input classes, we propose a framework, EMShepherd, to capture EM traces of model execution, perform processing on traces and exploit them for adversarial detection. Only benign samples and their EM traces are used to train the adversarial detector, a set of EM classifiers and classspecific unsupervised anomaly detectors. When the victim model system is under attack by an adversarial example, the model execution will be different from executions for the known classes, and the EM trace will be different. We demonstrate that our airgapped EMShepherd can effectively detect different adversarial attacks on a commonly used FPGA deep learning accelerator for both Fashion MNIST and CIFAR10 datasets. It achieves a 100% detection rate on most types of adversarial samples, which is comparable to the stateoftheart 'whitebox' softwarebased detectors.
Propagation and Fluxes of Ultra High Energy Cosmic Rays in $f(R)$ Gravity Theory ; In this work we study the effect of diffusion of ultra high energy UHE particles in presence of turbulent magnetic fields TMFs in the light of the $f(R)$ theory of gravity. The $f(R)$ theory of gravity is a successful modified theory of gravity in explaining the various aspects of the observable Universe including its current state of expansion. For this work we consider two most studied $f(R)$ gravity models, viz., the powerlaw model and the Starobinsky model. With these two models we study the diffusive character of propagation of UHE cosmic ray UHECR protons in terms of their density enhancement. The GreisenZatsepinKuzmin GZK cutoff, the dip and the bump are all spectrum characteristics that UHE extragalactic protons acquire when they propagate through the cosmic microwave background CMB radiation in presence of TMFs. We analyse all these characteristics through the diffusive flux as well as its modification factor. Model dependence of the modification factor is minimal compared to the diffusive flux. We compare the UHECR proton spectra that are calculated for the considered $f(R)$ gravity models with the available data of the AKENOAGASA, HiRes, AUGER and YAKUTSK experiments of UHECRs. We see that both the models of $f(R)$ gravity provide the energy spectra of UHECRs with all experimentally observed features, which lie well within the range of the combined data of all experiments throughout the energy range of concern, in contrast to the case of the $\Lambda$CDM model.
Computationally efficient sampling methods for sparsity promoting hierarchical Bayesian models ; Bayesian hierarchical models have been demonstrated to provide efficient algorithms for finding sparse solutions to illposed inverse problems. The models comprise typically a conditionally Gaussian prior model for the unknown, augmented by a hyperprior model for the variances. A widely used choice for the hyperprior is a member of the family of generalized gamma distributions. Most of the work in the literature has concentrated on numerical approximation of the maximum a posteriori MAP estimates, and less attention has been paid to sampling methods or other means for uncertainty quantification. Sampling from the hierarchical models is challenging mainly for two reasons: the hierarchical models are typically highdimensional, thus suffering from the curse of dimensionality, and the strong correlation between the unknown of interest and its variance can make sampling rather inefficient. This work addresses mainly the first of these obstacles. By using a novel reparametrization, it is shown how the posterior distribution can be transformed into one dominated by a Gaussian white noise, allowing sampling by using the preconditioned Crank-Nicolson pCN scheme that has been shown to be efficient for sampling from distributions dominated by a Gaussian component. Furthermore, a novel idea for speeding up the pCN in a special case is developed, and the question of how strongly the hierarchical models are concentrated on sparse solutions is addressed in light of a computed example.
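For reference, a generic preconditioned Crank-Nicolson sampler in the whitened setting (standard Gaussian prior) looks as follows; the toy likelihood is invented for illustration and the paper's reparametrized hierarchical posterior is not reproduced.

```python
import numpy as np

def pcn_sampler(log_like, dim, n_samples, beta=0.2, rng=None):
    """Preconditioned Crank-Nicolson sampler for a posterior with a standard
    Gaussian N(0, I) prior (the 'whitened' setting referred to above).
    Proposal: u' = sqrt(1 - beta^2) * u + beta * xi, with xi ~ N(0, I).
    The prior is invariant under this proposal, so the Metropolis-Hastings
    acceptance ratio involves only the (log-)likelihood."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(dim)
    ll = log_like(u)
    samples = np.empty((n_samples, dim))
    accepted = 0
    for i in range(n_samples):
        prop = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(dim)
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            u, ll = prop, ll_prop
            accepted += 1
        samples[i] = u
    return samples, accepted / n_samples

# toy likelihood: observe y = u[0] + u[1] with Gaussian noise (illustrative only)
y_obs, sigma = 1.5, 0.3
log_like = lambda u: -0.5 * ((u[0] + u[1] - y_obs) / sigma) ** 2
samples, acc = pcn_sampler(log_like, dim=10, n_samples=20_000, beta=0.2,
                           rng=np.random.default_rng(0))
print("acceptance rate:", round(acc, 2),
      " posterior mean of u0+u1:", round((samples[:, 0] + samples[:, 1]).mean(), 2))
```

A key practical property of pCN, and the reason it pairs well with the reparametrization described above, is that its acceptance rate does not collapse as the discretization dimension grows, unlike a plain random-walk Metropolis step.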