Small area estimation of general finite-population parameters based on grouped data ; This paper proposes a new model-based approach to small area estimation of general finite-population parameters based on grouped data or frequency data, which are often available from sample surveys. Grouped data contain information on the frequencies of some pre-specified groups in each area, for example the numbers of households in the income classes, and thus provide more detailed insight about small areas than area-level aggregated data. A direct application of the widely used small area methods, such as the Fay-Herriot model for area-level data and the nested error regression model for unit-level data, is not appropriate since they are not designed for grouped data. The newly proposed method adopts the multinomial likelihood function for the grouped data. In order to connect the group probabilities of the multinomial likelihood and the auxiliary variables within the framework of small area estimation, we introduce the unobserved unit-level quantities of interest, which follow a linear mixed model with random intercepts and dispersions after some transformation. Then the probabilities that a unit belongs to the groups can be derived and are used to construct the likelihood function for the grouped data given the random effects. The unknown model parameters (hyperparameters) are estimated by a newly developed Monte Carlo EM algorithm using efficient importance sampling. The empirical best predictors (empirical Bayes estimates) of small area parameters can be calculated by a simple Gibbs sampling algorithm. The numerical performance of the proposed method is illustrated based on model-based and design-based simulations. In an application to city-level grouped income data for Japan, we complete the patchy maps of the Gini coefficient as well as mean income across the country.
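A worked equation may make the grouped-data likelihood concrete. The following sketch assumes, purely for illustration, that the transformed unit-level quantity in area $i$ is normal with a random intercept $v_i$, so that group probabilities are differences of normal CDFs at known group boundaries; the notation ($c_k$, $p_{ik}$, $n_{ik}$) is ours, not the paper's:

$$ p_{ik} = \Phi\!\left(\frac{c_k - x_i^{\top}\beta - v_i}{\sigma_i}\right) - \Phi\!\left(\frac{c_{k-1} - x_i^{\top}\beta - v_i}{\sigma_i}\right), \qquad L_i \propto \prod_{k=1}^{K} p_{ik}^{\,n_{ik}}, $$

where $c_0 < c_1 < \dots < c_K$ are the group boundaries and $n_{ik}$ is the observed frequency of group $k$ in area $i$; the multinomial likelihood $L_i$ is then integrated over the random effects.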
On the structure of the emergent 3d expanding space in the Lorentzian type IIB matrix model ; The emergence of (3+1)-dimensional expanding space-time in the Lorentzian type IIB matrix model is an intriguing phenomenon which was observed in Monte Carlo studies of this model. In particular, this may be taken as support for the conjecture that the model is a nonperturbative formulation of superstring theory in (9+1) dimensions. In this paper we investigate the space-time structure of the matrices generated by simulating this model and its simplified versions, and find that the expanding part of the space is described essentially by the Pauli matrices. We argue that this is due to an approximation used in the simulation to avoid the sign problem, which actually amounts to replacing $e^{iS_{\rm b}}$ by $e^{\beta S_{\rm b}}$ ($\beta > 0$) in the partition function, where $S_{\rm b}$ is the bosonic part of the action. We also discuss the possibility of obtaining a regular space-time with the (3+1)-dimensional expanding behavior in the original model with the correct $e^{iS_{\rm b}}$ factor.
Minimal Dirac Neutrino Mass Models from $U(1)_R$ Gauge Symmetry and Left-Right Asymmetry at Colliders ; In this work, we propose minimal realizations for generating Dirac neutrino masses in the context of a right-handed abelian gauge extension of the Standard Model. Utilizing only a $U(1)_R$ symmetry, we address and analyze the possibilities of Dirac neutrino mass generation via (a) a tree-level seesaw and (b) radiative correction at the one-loop level. One of the presented radiative models implements the attractive scotogenic model that links neutrino mass with dark matter (DM), where the stability of the DM is guaranteed by a residual discrete symmetry emerging from $U(1)_R$. Since only the right-handed fermions carry nonzero charges under the $U(1)_R$, this framework leads to sizable and distinctive Left-Right asymmetry as well as Forward-Backward asymmetry, discriminating it from $U(1)_{B-L}$ models, and can be tested at the colliders. We analyze the current experimental bounds and present the discovery reach limits for the new heavy gauge boson $Z'$ at the LHC and ILC. Furthermore, we also study the associated charged lepton flavor violating processes, dark matter phenomenology and cosmological constraints of these models.
Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers ; In image restoration tasks, like denoising and super-resolution, continual modulation of restoration levels is of great importance for real-world applications, but has eluded most existing deep-learning-based image restoration methods. Learning from discrete and fixed restoration levels, deep models cannot be easily generalized to data of continuous and unseen levels. This topic is rarely touched in the literature, due to the difficulty of modulating well-trained models with certain hyper-parameters. We make a step forward by proposing a unified CNN framework that adds few parameters compared with a single-level model yet could handle arbitrary restoration levels between a start and an end level. The additional module, namely the AdaFM layer, performs channel-wise feature modification, and can adapt a model to another restoration level with high accuracy. By simply tweaking an interpolation coefficient, the intermediate model AdaFM-Net could generate smooth and continuous restoration effects without artifacts. Extensive experiments on three image restoration tasks demonstrate the effectiveness of both model training and modulation testing. Besides, we carefully investigate the properties of the AdaFM layers, providing a detailed guidance on the usage of the proposed method.
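A minimal sketch of the idea in Python, assuming (as a simplification of the paper's AdaFM layer) that the modification is a per-channel affine transform and that the modulation interpolates between the identity and the learned end-level parameters; the class and variable names are illustrative, not the authors' code.

import torch
import torch.nn as nn

class AdaFMLike(nn.Module):
    """Channel-wise feature modification: y = gamma * x + beta, per channel."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x, lam=1.0):
        # lam = 0 leaves the base (start-level) model untouched;
        # lam = 1 applies the full learned end-level modification;
        # intermediate lam interpolates between the two restoration levels.
        gamma = 1.0 + lam * (self.gamma - 1.0)
        beta = lam * self.beta
        return gamma * x + beta

# Usage: insert after a convolution of the pre-trained start-level model,
# then train only the AdaFM parameters on end-level data.
feat = torch.randn(4, 64, 32, 32)   # a batch of feature maps
layer = AdaFMLike(64)
out = layer(feat, lam=0.5)          # an intermediate restoration level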
X-ray Lightcurves from Realistic Polar Cap Models: Inclined Pulsar Magnetospheres and Multipole Fields ; Thermal X-ray emission from rotation-powered pulsars is believed to originate from localized hotspots on the stellar surface occurring where large-scale currents from the magnetosphere return to heat the atmosphere. Lightcurve modeling has primarily been limited to simple models, such as circular antipodal emitting regions with constant temperature. We calculate more realistic temperature distributions within the polar caps, taking advantage of recent advances in magnetospheric theory, and we consider their effect on the predicted lightcurves. The emitting regions are non-circular even for a pure dipole magnetic field, and the inclusion of an aligned magnetic quadrupole moment introduces a north-south asymmetry. As the aligned quadrupole moment is increased, one hotspot grows in size before becoming a thin ring surrounding the star. For the pure dipole case, moving to the more realistic model changes the lightcurves by 5-10% for millisecond pulsars, helping to quantify the systematic uncertainty present in current dipolar models. Including the quadrupole gives considerable freedom in generating more complex lightcurves. We explore whether these simple dipole-quadrupole models can account for the qualitative features of the lightcurve of PSR J0437-4715.
Question-Agnostic Attention for Visual Question Answering ; Visual Question Answering (VQA) models employ attention mechanisms to discover image locations that are most relevant for answering a specific question. For this purpose, several multimodal fusion strategies have been proposed, ranging from relatively simple operations (e.g., linear sum) to more complex ones (e.g., Block). The resulting multimodal representations define an intermediate feature space for capturing the interplay between visual and semantic features, which is helpful in selectively focusing on image content. In this paper, we propose a question-agnostic attention mechanism that is complementary to the existing question-dependent attention mechanisms. Our proposed model parses object instances to obtain an 'object map' and applies this map on the visual features to generate Question-Agnostic Attention (QAA) features. In contrast to question-dependent attention approaches that are learned end-to-end, the proposed QAA does not involve question-specific training, and can be easily included in almost any existing VQA model as a generic lightweight pre-processing step, thereby adding minimal computation overhead for training. Further, when used in complement with the question-dependent attention, the QAA allows the model to focus on the regions containing objects that might have been overlooked by the learned attention representation. Through extensive evaluation on the VQAv1, VQAv2 and TDIUC datasets, we show that incorporating complementary QAA allows state-of-the-art VQA models to perform better, and provides a significant boost to simplistic VQA models, enabling them to perform on par with highly sophisticated fusion strategies.
F2F, a model-independent method to determine the mass and width of a particle in the presence of interference ; It is generally believed that any particle to be discovered will have a TeV-order mass. Given its great mass, it must have a large decay width. Therefore, the interference effect will be very common if the new particle and the Standard-Model (SM) particles contribute to the same final state. However, the interference effect could make a new particle show up not like a resonance, and it is then difficult to search for it and measure its properties. In this work, a model-independent method, F2F (Fit To Fourier coefficients), is proposed to search for an unknown resonance and to determine its mass $M$ and width $\Gamma$ in the presence of interference. Basically we express the sum of the resonant signal and the interference as a cosine Fourier series and relate the Fourier coefficients with the mass and width. The relation is based on the general propagator form, $1/(x^2 - M^2 + iM\Gamma)$. Thus it does not need any signal model. Toy experiments show that the obtained mass and width agree well with the inputs, with a precision similar to that obtained using an explicit signal model. We also show that we can apply this method to measure the Standard-Model Higgs width and to make statistical interpretations in searching for a new resonance allowing for interference.
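A short worked form of the idea, written in our own notation as an illustration (the functions $A$, $B$ and the window $[x_{\min}, x_{\max}]$ are assumptions, not the paper's): near a resonance of mass $M$ and width $\Gamma$, the signal-plus-interference contribution to the spectrum inherits its shape from the propagator and can be expanded in cosine Fourier modes whose coefficients carry the $(M,\Gamma)$ dependence,

$$ \frac{d\sigma_{S+I}}{dx} \;\propto\; \left|\frac{1}{x^{2}-M^{2}+iM\Gamma}\right|^{2}\! A(x) \;+\; 2\,\mathrm{Re}\!\left[\frac{B(x)}{x^{2}-M^{2}+iM\Gamma}\right] \;\approx\; \sum_{n=0}^{N} a_n(M,\Gamma)\,\cos\!\left(\frac{n\pi (x-x_{\min})}{x_{\max}-x_{\min}}\right), $$

where $A(x)$ and $B(x)$ are smooth functions; fitting the measured coefficients $a_n$ then constrains $M$ and $\Gamma$ without an explicit signal model.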
Local Score Dependent Model Explanation for Time Dependent Covariates ; The use of deep neural networks to make high-risk decisions creates a need for global and local explanations so that users and experts have confidence in the modeling algorithms. We introduce a novel technique to find global and local explanations for time series data used in binary classification machine learning systems. We identify the most salient of the original features used by a black box model to distinguish between classes. The explanation can be made on categorical, continuous, and time series data and can be generalized to any binary classification model. The analysis is conducted on time series data to train a long short-term memory deep neural network and uses the time-dependent structure of the underlying features in the explanation. The proposed technique attributes weights to features to explain an observation's risk of belonging to a class as a multiplicative factor of a base hazard rate. We use a variation of the Cox Proportional Hazards regression, a Generalized Additive Model, to explain the effect of variables upon the probability of an in-class response for a score output from the black box model. The covariates incorporate time-dependence structure in the features so the explanation is inclusive of the underlying time series data structure.
The Haldane model and its localization dichotomy ; Gapped periodic quantum systems exhibit an interesting Localization Dichotomy, which emerges when one looks at the localization of the optimally localized Wannier functions associated to the Bloch bands below the gap. As recently proved, either these Wannier functions are exponentially localized, as happens whenever the Hamiltonian operator is time-reversal symmetric, or they are delocalized in the sense that the expectation value of $\mathbf{x}^2$ diverges. Intermediate regimes are forbidden. Following the lesson of our Maestro, to whom this contribution is gratefully dedicated, we find it useful to explain this subtle mathematical phenomenon in the simplest possible model, namely the discrete model proposed by Haldane [Phys. Rev. Lett. 61, 2017 (1988)]. We include a pedagogical introduction to the model and we explain its Localization Dichotomy by explicit analytical arguments. We then introduce the reader to the more general, model-independent version of the dichotomy proved in [Commun. Math. Phys. 359, 61-100 (2018)], and finally we announce further generalizations to non-periodic models.
CMB targets after the latest Planck data release ; We show that a combination of the simplest $\alpha$-attractors and KKLTI models related to Dp-brane inflation covers most of the area in the $(n_s, r)$ space favored by Planck 2018. For $\alpha$-attractor models, there are discrete targets $3\alpha = 1, 2, ..., 7$, predicting 7 different values of $r = 12\alpha/N^2$ in the range $10^{-2} \gtrsim r \gtrsim 10^{-3}$. In the small-$r$ limit, $\alpha$-attractors and Dp-brane inflation models describe vertical $\beta$-stripes in the $(n_s, r)$ space, with $n_s = 1 - \beta/N$, $\beta = 2, 5/3, 8/5, 3/2, 4/3$. A phenomenological description of these models and their generalizations can be achieved in the context of pole inflation. Most of the $1\sigma$ area in the $(n_s, r)$ space favored by Planck 2018 can be covered by models with $\beta = 2$ and $\beta = 5/3$. Future precision data on $n_s$ may help to discriminate between these models even if the precision of the measurement of $r$ is insufficient for the discovery of gravitational waves produced during inflation.
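As a quick worked example of the quoted targets (with $N$ the number of e-folds, here taken to be $N = 55$ purely for illustration), the smallest discrete $\alpha$-attractor target $3\alpha = 1$ gives

$$ r \;=\; \frac{12\alpha}{N^{2}} \;=\; \frac{12\,(1/3)}{55^{2}} \;\approx\; 1.3\times 10^{-3}, \qquad n_s \;\approx\; 1-\frac{2}{N} \;\approx\; 0.964, $$

consistent with the quoted $10^{-2} \gtrsim r \gtrsim 10^{-3}$ range and the $\beta = 2$ stripe.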
An Integrable Model for the Dynamics of Planetary Mean Motion Resonances ; I consider the dynamics of mean motion resonances (MMRs) between pairs of coplanar planets and derive a new integrable Hamiltonian model for planets' resonant motion. The new model generalizes previously derived integrable Hamiltonians for first-order resonances to treat higher-order resonances by exploiting a surprising near-symmetry of the full, non-integrable Hamiltonians of higher-order resonances. Whereas past works have frequently relied on truncated disturbing function expansions to derive integrable approximations to resonant motion, I show that no such expansion is necessary, thus enabling the new model to accurately capture the dynamics of both first- and higher-order resonances for eccentricities up to orbit-crossing. I demonstrate that predictions of the new integrable model agree well with numerical integrations of resonant planet pairs. Finally, I explore the secular evolution of resonant planets' eccentricities. I show that the secular dynamics are governed by conservation of an AMD-like quantity. I also demonstrate that secular frequencies depend on planets' resonant libration amplitude, and this generally gives rise to a secular resonance inside the mean motion resonance at large libration amplitudes. Outside of the secular resonance the long-term dynamics are characterized by small adiabatic modulations of the resonant motion, while inside the secular resonance planets can experience large variations of the resonant trajectory over secular timescales. The integrable model derived in this work can serve as a framework for analyzing the dynamics of planetary MMRs in a wide variety of contexts.
What is the amplitude of the Gravitational Waves background expected in the Starobinsky model? ; The inflationary model proposed by Starobinsky in 1979 predicts an amplitude of the spectrum of primordial gravitational waves, parametrized by the tensor-to-scalar ratio, of $r = 0.0037$ in case of a scalar spectral index of $n_S = 0.965$. This amplitude is currently used as a target value in the design of future CMB experiments with the ultimate goal of measuring it at more than five standard deviations. Here we evaluate how stable the predictions of the Starobinsky model on $r$ are, considering the experimental uncertainties on $n_S$ and the assumption of $\Lambda$CDM. We also consider inflationary models where the $R^2$ term in the Starobinsky action is generalized to a $R^{2p}$ term with index $p$ close to unity. We found that current data place a lower limit of $r > 0.0013$ at 95% C.L. for the classic Starobinsky model, and also predict a running of the scalar index different from zero at more than three standard deviations, in the range $dn/d\ln k = -0.0006^{+0.0001}_{-0.0002}$. A level of gravitational waves of $r \sim 0.001$ is therefore possible in the Starobinsky scenario and it will not be clearly detectable by future CMB missions such as LiteBIRD and CMB-S4. When assuming a more general $R^{2p}$ inflation we found no expected lower limit on $r$, and a running consistent with zero. We found that current data are able to place a tight constraint on the index of $R^{2p}$ models at 95% C.L., i.e. $p = 0.99^{+0.02}_{-0.03}$.
FreeLB: Enhanced Adversarial Training for Natural Language Understanding ; Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of the BERT-base model from 78.3 to 79.4, and of the RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge. Experiments on the CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of the RoBERTa-large model on other tasks as well. Code is available at https://github.com/zhuchen03/FreeLB .
Demand forecasting in supply chain: The impact of demand volatility in the presence of promotion ; The demand for a particular product or service is typically associated with different uncertainties that can make it volatile and challenging to predict. Demand unpredictability is one of the managers' concerns in the supply chain that can cause large forecasting errors, issues in the upstream supply chain and impose unnecessary costs. We investigate 843 real demand time series with different values of the coefficient of variation (CoV), where promotion causes volatility over the entire demand series. In such a case, forecasting demand for different CoV values requires different models to capture the underlying behavior of the demand series, and poses significant challenges due to very different and diverse demand behavior. We decompose demand into baseline and promotional demand and propose a hybrid model to forecast demand. Our results indicate that our proposed hybrid model generates robust and accurate forecasts across series with different levels of volatility. We stress the necessity of decomposition for volatile demand series. We also model the demand series with a number of well-known statistical and machine learning (ML) models to investigate their performance empirically. We found that ARIMA with covariates (ARIMAX) works well to forecast volatile demand series, but exponential smoothing with covariates (ETSX) has a poor performance. Support vector regression (SVR) and dynamic linear regression (DLR) models generate robust forecasts across different categories of demands with different CoV values.
Neural Word Decomposition Models for Abusive Language Detection ; User-generated text on social media often suffers from a lot of undesired characteristics including hate speech, abusive language, insults etc. that are targeted to attack or abuse a specific group of people. Often such text is written differently compared to traditional text such as news, involving either explicit mention of abusive words, obfuscated words and typographical errors, or implicit abuse, i.e., indicating or targeting negative stereotypes. Thus, processing this text poses several robustness challenges when we apply natural language processing techniques developed for traditional text. For example, using word or token based models to process such text can treat two spelling variants of a word as two different words. Following recent work, we analyze how character, subword and byte pair encoding (BPE) models can aid with some of the challenges posed by user-generated text. In our work, we analyze the effectiveness of each of the above techniques, and compare and contrast various word decomposition techniques when used in combination with others. We experiment with finetuning large pretrained language models, and demonstrate their robustness to domain shift by studying Wikipedia attack, toxicity and Twitter hate-speech datasets.
Action rigidity for free products of hyperbolic manifold groups ; Two groups have a common model geometry if they act properly and cocompactly by isometries on the same proper geodesic metric space. The Milnor-Schwarz lemma implies that groups with a common model geometry are quasi-isometric; however, the converse is false in general. We consider free products of uniform lattices in isometry groups of rank-1 symmetric spaces and prove that, within each quasi-isometry class, residually finite groups that have a common model geometry are abstractly commensurable. Our result gives the first examples of hyperbolic groups that are quasi-isometric but do not virtually have a common model geometry. Indeed, each quasi-isometry class contains infinitely many abstract commensurability classes. We prove that two free products of closed hyperbolic surface groups have a common model geometry if and only if the groups are isomorphic. This result combined with a commensurability classification of Whyte yields the first examples of torsion-free abstractly commensurable hyperbolic groups that do not have a common model geometry. An important component of the proof is a generalization of Leighton's graph covering theorem. The main theorem depends on residual finiteness, and we show that finite extensions of uniform lattices in rank-1 symmetric spaces that are not residually finite would give counterexamples.
Tractable Minor-free Generalization of Planar Zero-field Ising Models ; We present a new family of zero-field Ising models over $N$ binary variables (spins) obtained by consecutive gluing of planar and $O(1)$-sized components and subsets of at most three vertices into a tree. The polynomial-time algorithm of the dynamic programming type for solving exact inference (computing the partition function) and exact sampling (generating i.i.d. samples) consists in a sequential application of an efficient (for planar) or brute-force (for $O(1)$-sized) inference and sampling to the components as a black box. To illustrate the utility of the new family of tractable graphical models, we first build a polynomial algorithm for inference and sampling of zero-field Ising models over $K_{3,3}$-minor-free topologies and over $K_5$-minor-free topologies; both are extensions of the planar zero-field Ising models, which are neither genus- nor treewidth-bounded. Second, we demonstrate empirically an improvement in the approximation quality of the NP-hard problem of inference over the square-grid Ising model in a node-dependent non-zero magnetic field.
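For reference, the exact-inference target discussed above is the zero-field Ising partition function; in standard notation (ours, following the usual convention),

$$ Z \;=\; \sum_{\sigma \in \{-1,+1\}^{N}} \exp\!\Big(\sum_{(i,j)\in E} J_{ij}\,\sigma_i \sigma_j\Big), $$

where $E$ is the edge set of the model's graph and $J_{ij}$ are pairwise couplings; "zero-field" means there is no single-spin term $\sum_i h_i \sigma_i$ in the exponent.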
Feedback Linearization for Unknown Systems via Reinforcement Learning ; We present a novel approach to control design for nonlinear systems which leverages model-free policy optimization techniques to learn a linearizing controller for a physical plant with unknown dynamics. Feedback linearization is a technique from nonlinear control which renders the input-output dynamics of a nonlinear plant linear under application of an appropriate feedback controller. Once a linearizing controller has been constructed, desired output trajectories for the nonlinear plant can be tracked using a variety of linear control techniques. However, the calculation of a linearizing controller requires a precise dynamics model for the system. As a result, model-based approaches for learning exact linearizing controllers generally require a simple, highly structured model of the system with easily identifiable parameters. In contrast, the model-free approach presented in this paper is able to approximate the linearizing controller for the plant using general function approximation architectures. Specifically, we formulate a continuous-time optimization problem over the parameters of a learned linearizing controller whose optima are the set of parameters which best linearize the plant. We derive conditions under which the learning problem is strongly convex and provide guarantees which ensure the true linearizing controller for the plant is recovered. We then discuss how model-free policy optimization algorithms can be used to solve a discrete-time approximation to the problem using data collected from the real-world plant. The utility of the framework is demonstrated in simulation and on a real-world robotic platform.
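To make the feedback-linearization step concrete, here is a textbook single-input example in standard control notation (not taken from the paper): for a control-affine plant $\dot{x} = f(x) + g(x)u$, $y = h(x)$ with relative degree one, the control law

$$ u \;=\; \frac{v - L_f h(x)}{L_g h(x)}, \qquad L_f h(x) = \nabla h(x)\cdot f(x), \quad L_g h(x) = \nabla h(x)\cdot g(x), $$

renders the input-output dynamics exactly linear, $\dot{y} = v$, so the new input $v$ can be chosen with any linear tracking controller. Computing $L_f h$ and $L_g h$ requires knowing $f$ and $g$, which is precisely the model knowledge the learned controller is meant to replace.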
One Network to Segment Them All: A General, Lightweight System for Accurate 3D Medical Image Segmentation ; Many recent medical segmentation systems rely on powerful deep learning models to solve highly specific tasks. To maximize performance, it is standard practice to evaluate numerous pipelines with varying model topologies, optimization parameters, pre- and post-processing steps, and even model cascades. It is often not clear how the resulting pipeline transfers to different tasks. We propose a simple and thoroughly evaluated deep learning framework for segmentation of arbitrary medical image volumes. The system requires no task-specific information, no human interaction and is based on a fixed model topology and a fixed hyperparameter set, eliminating the process of model selection and its inherent tendency to cause method-level overfitting. The system is available in open source and does not require deep learning expertise to use. Without task-specific modifications, the system performed better than or similar to highly specialized deep learning methods across 3 separate segmentation tasks. In addition, it ranked 5th and 6th in the first and second round of the 2018 Medical Segmentation Decathlon comprising another 10 tasks. The system relies on multi-planar data augmentation which facilitates the application of a single 2D architecture based on the familiar U-Net. Multi-planar training combines the parameter efficiency of a 2D fully convolutional neural network with a systematic train- and test-time augmentation scheme, which allows the 2D model to learn a representation of the 3D image volume that fosters generalization.
Quasiperiodic dynamical quantum phase transitions in multiband topological insulators and connections with entanglement entropy and fidelity susceptibility ; We investigate the Loschmidt amplitude and dynamical quantum phase transitions in multiband one-dimensional topological insulators. For this purpose we introduce a new solvable multiband model based on the Su-Schrieffer-Heeger model, generalized to unit cells containing many atoms but with the same symmetry properties. Such models have a richer structure of dynamical quantum phase transitions than the simple two-band topological insulator models typically considered previously, with both quasiperiodic and aperiodic dynamical quantum phase transitions present. Moreover, the aperiodic transitions can still occur for quenches within a single topological phase. We also investigate the boundary contributions from the presence of the topologically protected edge states of this model. Plateaus in the boundary return rate are related to the topology of the time-evolving Hamiltonian, and hence to a dynamical bulk-boundary correspondence. We go on to consider the dynamics of the entanglement entropy generated after a quench, and its potential relation to the critical times of the dynamical quantum phase transitions. Finally, we investigate the fidelity susceptibility as an indicator of the topological phase transitions, and find a simple scaling law as a function of the number of bands of our multiband model, which is found to be the same for both bulk and boundary fidelity susceptibilities.
TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection ; We propose TANDA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it with a large and high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, which is a well-known inference task in Question Answering. We built a large-scale dataset to enable the transfer step, exploiting the Natural Questions dataset. Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving MAP scores of 92% and 94.3%, respectively, which largely outperform the previous highest scores of 83.4% and 87.5%, obtained in very recent work. We empirically show that TANDA generates more stable and robust models, reducing the effort required for selecting optimal hyper-parameters. Additionally, we show that the transfer step of TANDA makes the adaptation step more robust to noise. This enables a more effective use of noisy datasets for fine-tuning. Finally, we also confirm the positive impact of TANDA in an industrial setting, using domain-specific datasets subject to different types of noise.
Self-training with Noisy Student improves ImageNet classification ; We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo-labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher. Models are available at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet. Code is available at https://github.com/google-research/noisystudent.
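A self-contained toy sketch of the loop described above, on synthetic data with scikit-learn stand-ins for the EfficientNet teacher/student (input noise stands in for dropout, stochastic depth and RandAugment); this only illustrates the control flow, not the released code.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Toy stand-in data: a small labeled set and a larger unlabeled pool.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:200], y[:200], X[200:]

teacher = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
teacher.fit(X_lab, y_lab)                                   # clean (un-noised) teacher

for _ in range(3):                                          # iterate: student becomes teacher
    pseudo = teacher.predict(X_unlab)                       # pseudo labels from the teacher
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    X_noisy = X_all + 0.1 * np.random.randn(*X_all.shape)   # noise applied only to the student
    student = MLPClassifier(hidden_layer_sizes=(64, 64),    # equal-or-larger student
                            max_iter=500, random_state=0)
    student.fit(X_noisy, y_all)
    teacher = student                                       # put the student back as teacher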
Feedback Motion Planning for Long-Range Autonomous Underwater Vehicles ; Ocean ecosystems have spatiotemporal variability and dynamic complexity that require a long-term deployment of an autonomous underwater vehicle for data collection. A new long-range autonomous underwater vehicle called Tethys is adapted to study different oceanic phenomena. Additionally, an ocean environment has external forces and moments along with changing water currents which are generally not considered in a vehicle kinematic model. In this scenario, it is not enough to generate a simple trajectory from an initial location to a goal location in an uncertain ocean, as the vehicle can deviate from its intended trajectory. As such, we propose to compute a feedback plan that adapts the vehicle trajectory in the presence of any modeled or unmodeled uncertainties. In this work, we present a feedback motion planning method for the Tethys vehicle by combining a predictive ocean model and its kinematic modeling. Given a goal location, the Tethys kinematic model, and the water flow pattern, our method computes a feedback plan for the vehicle in a dynamic ocean environment that reduces its energy consumption. The computed feedback plan provides the optimal action for the Tethys vehicle to take from any location of the environment to reach the goal location considering its orientation. Our results based on actual ocean model prediction data demonstrate the applicability of our method.
Higher-order topological insulators and semimetals in generalized Aubry-André-Harper models ; Higher-order topological phases of matter have been extensively studied in various areas of physics. While the Aubry-André-Harper model provides a paradigmatic example to study topological phases, it has not been explored whether a generalized Aubry-André-Harper model can exhibit a higher-order topological phenomenon. Here, we construct a two-dimensional higher-order topological insulator with chiral symmetry based on the Aubry-André-Harper model. We find the coexistence of zero-energy and nonzero-energy corner-localized modes. The former is protected by the quantized quadrupole moment, while the latter by the first Chern number of the Wannier band. The nonzero-energy mode can also be viewed as the consequence of a Chern insulator localized on a surface. More interestingly, the nonzero-energy corner mode can lie in the continuum of extended bulk states and form a bound state in the continuum of higher-order topological systems. We finally propose an experimental scheme to realize our model in electric circuits. Our study opens a door to further study of higher-order topological phases based on the Aubry-André-Harper model.
Reheating in $R^2$ Palatini inflationary models ; We consider $R^2$ inflation in the Palatini gravity assuming the existence of scalar fields, coupled to gravity in the most general manner. These theories, in the Einstein frame, and for one scalar field $h$, share common features with K-inflation models. We apply this formalism to the study of popular inflationary models whose potentials are monomials, $V \sim h^n$, with $n$ a positive even integer. We also study the Higgs model non-minimally coupled to gravity. Although these have been recently studied in the framework of the Palatini approach, we show that the scalar power spectrum severely constrains these models. Although we do not propose a particular reheating mechanism, we show that the quadratic ($\sim h^2$) and the Higgs model can survive these constraints with a maximum reheating temperature as large as $\sim 10^{15}\,\mathrm{GeV}$, when reheating is instantaneous. However, this can be only attained at the cost of a delicate fine-tuning of couplings. Deviations from these fine-tuned values can still yield predictions compatible with the cosmological data, for couplings that lie in a very tight range, giving lower reheating temperatures.
FairyTED: A Fair Rating Predictor for TED Talk Data ; With the recent trend of applying machine learning in every aspect of human life, it is important to incorporate fairness into the core of the predictive algorithms. We address the problem of predicting the quality of public speeches while being fair with respect to sensitive attributes of the speakers, e.g. gender and race. We use the TED talks as an input repository of public speeches because it consists of speakers from a diverse community and has a wide outreach. Utilizing the theories of Causal Models, Counterfactual Fairness and state-of-the-art neural language models, we propose a mathematical framework for fair prediction of the public speaking quality. We employ grounded assumptions to construct a causal model capturing how different attributes affect public speaking quality. This causal model contributes in generating counterfactual data to train a fair predictive model. Our framework is general enough to utilize any assumption within the causal model. Experimental results show that while prediction accuracy is comparable to recent work on this dataset, our predictions are counterfactually fair with respect to a novel metric when compared to true data labels. The FairyTED setup not only allows organizers to make informed and diverse selection of speakers from the unobserved counterfactual possibilities but it also ensures that viewers and new users are not influenced by unfair and unbalanced ratings from arbitrary visitors to the www.ted.com website when deciding to view a talk.
Universal spectral features of different classes of random diffusivity processes ; Stochastic models based on random diffusivities, such as the diffusing-diffusivity approach, are popular concepts for the description of non-Gaussian diffusion in heterogeneous media. Studies of these models typically focus on the moments and the displacement probability density function. Here we develop the complementary power spectral description for a broad class of random diffusivity processes. In our approach we cater for typical single particle tracking data in which a small number of trajectories with finite duration are garnered. Apart from the diffusing-diffusivity model we study a range of previously unconsidered random diffusivity processes, for which we obtain exact forms of the probability density function. These new processes are different versions of jump processes as well as functionals of Brownian motion. The resulting behaviour subtly depends on the specific model details. Thus, the central part of the probability density function may be Gaussian or non-Gaussian, and the tails may assume Gaussian, exponential, log-normal or even power-law forms. For all these models we derive analytically the moment-generating function for the single-trajectory power spectral density. We establish the generic $1/f^2$ scaling of the power spectral density as a function of frequency in all cases. Moreover, we establish the probability density for the amplitudes of the random power spectral density of individual trajectories. The latter functions reflect the very specific properties of the different random diffusivity models considered here. Our exact results are in excellent agreement with extensive numerical simulations.
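For readers unfamiliar with the single-trajectory quantity being discussed, a standard definition (in conventional notation, not necessarily the paper's exact convention) is

$$ S(f,T) \;=\; \frac{1}{T}\left|\int_{0}^{T} e^{i f t}\, x(t)\, dt\right|^{2}, $$

computed from one finite trajectory $x(t)$ of duration $T$; the claimed universal behaviour is that this random spectral density, and its mean, scale as $1/f^{2}$ in frequency, while the distribution of its amplitude distinguishes the different diffusivity models.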
Interference effects in dilepton resonance searches for Z' bosons and dark matter mediators ; New Z' gauge bosons arise in many extensions of the Standard Model and predict resonances in the dilepton invariant mass spectrum. Searches for such resonances therefore provide important constraints on many models of new physics, but the resulting bounds are often calculated without interference effects. In this work we show that the effect of interference is significant and cannot be neglected whenever the Z' width is large, for example because of an invisible contribution. To illustrate this point, we implement and validate the most recent 139 fb$^{-1}$ dilepton search from ATLAS and obtain exclusion limits on general Z' models as well as on simplified dark matter models with spin-1 mediators. We find that interference can substantially strengthen the bound on the Z' couplings and push exclusion limits for dark matter simplified models to higher values of the Z' mass. Together with this study we release the open-source code ZPEED, which provides fast likelihoods and exclusion bounds for general Z' models.
SPIN: A High Speed, High Resolution Vision Dataset for Tracking and Action Recognition in Ping Pong ; We introduce a new high resolution, high frame rate stereo video dataset, which we call SPIN, for tracking and action recognition in the game of ping pong. The corpus consists of ping pong play with three main annotation streams that can be used to learn tracking and action recognition models: tracking of the ping pong ball, poses of humans in the videos, and the spin of the ball being hit by humans. The training corpus consists of 53 hours of data with labels derived from previous models in a semi-supervised method. The testing corpus contains 1 hour of data with the same information, except that crowd compute was used to obtain human annotations of the ball position, from which ball spin has been derived. Along with the dataset we introduce several baseline models that were trained on this data. The models were specifically chosen to be able to perform inference at the same rate as the images are generated, specifically 150 fps. We explore the advantages of multi-task training on this data, and also show interesting properties of ping pong ball trajectories that are derived from our observational data, rather than from prior physics models. To our knowledge this is the first large scale dataset of ping pong; we offer it to the community as a rich dataset that can be used for a large variety of machine learning and vision tasks such as tracking, pose estimation, semi-supervised and unsupervised learning and generative modeling.
Constraining domain wall dark matter with a network of superconducting gravimeters and LIGO ; There is strong astrophysical evidence that dark matter (DM) makes up some 27% of all mass in the universe. Yet, beyond gravitational interactions, little is known about its properties or how it may connect to the Standard Model. Multiple frameworks have been proposed, and precision measurements at low energy have proven useful to help restrict the parameter space for many of these models. One set of models predicts that DM is a scalar field that clumps into regions of high local density, rather than being uniformly distributed throughout the galaxy. If this DM field couples to the Standard Model, its interaction with matter can be thought of as changing the effective values of fundamental constants. One generic consequence of time variation of fundamental constants, or their spatial variation as the Earth passes through regions of varying density, is the presence of an anomalous, composition-dependent acceleration. Here we show how this anomalous acceleration can be measured using superconducting accelerometers, and demonstrate that 20 years of archival data from the International Geodynamics and Earth Tide Services (IGETS) network can be utilized to set new bounds on these models. Furthermore, we show how LIGO and other gravitational wave detectors can be used as exquisitely sensitive probes for narrow ranges of the parameter space. While limited to DM models that feature spatial gradients, these two techniques complement the networks of precision measurement devices already in use for direct detection and identification of dark matter.
Boltzmann Exploration Expectation-Maximisation ; We present a general method for fitting finite mixture models (FMM). Learning in a mixture model consists of finding the most likely cluster assignment for each datapoint, as well as finding the parameters of the clusters themselves. In many mixture models, this is difficult with current learning methods, where the most common approach is to employ monotone learning algorithms, e.g. the conventional expectation-maximisation algorithm. While effective, the success of any monotone algorithm is crucially dependent on good parameter initialisation, where a common choice is K-means initialisation, commonly employed for Gaussian mixture models. For other types of mixture models, the path to good initialisation parameters is often unclear and may require a problem-specific solution. To this end, we propose a general heuristic learning algorithm that utilises Boltzmann exploration to assign each observation to a specific base distribution within the mixture model, which we call Boltzmann exploration expectation-maximisation (BEEM). With BEEM, hard assignments allow straightforward parameter learning for each base distribution by conditioning only on its assigned observations. Consequently, it can be applied to mixtures of any base distribution where single component parameter learning is tractable. The stochastic learning procedure is able to escape local optima and is thus insensitive to parameter initialisation. We show competitive performance on a number of synthetic benchmark cases as well as on real-world datasets.
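A minimal, self-contained sketch of how such a Boltzmann-exploration assignment step could look for a 1D Gaussian mixture; the temperature schedule and all names are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])  # toy data
K = 2
mu, sigma = rng.normal(size=K), np.ones(K)                            # crude initialisation

for it in range(50):
    temperature = max(0.05, 0.9 ** it)                # annealed exploration temperature
    # E-like step: Boltzmann exploration over per-component log-likelihoods,
    # followed by a *hard* sampled assignment for each observation.
    loglik = np.stack([norm.logpdf(x, mu[k], sigma[k]) for k in range(K)], axis=1)
    probs = np.exp((loglik - loglik.max(axis=1, keepdims=True)) / temperature)
    probs /= probs.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=p) for p in probs])
    # M-like step: per-component maximum likelihood on the assigned observations only.
    for k in range(K):
        xk = x[z == k]
        if len(xk) > 1:
            mu[k], sigma[k] = xk.mean(), max(xk.std(), 1e-3)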
A New Framework for Query Efficient Active Imitation Learning ; We seek to align agent policy with human expert behavior in a reinforcement learning (RL) setting, without any prior knowledge about dynamics, reward function, and unsafe states. There is a human expert knowing the rewards and unsafe states based on his preference and objective, but querying that human expert is expensive. To address this challenge, we propose a new framework for imitation learning (IL) that actively and interactively learns a model of the user's reward function with efficient queries. We build an adversarial generative model of states and a successor feature (SR) model trained over the transition experience collected by the learning policy. Our method uses these models to select state-action pairs, asking the user to comment on their optimality or safety, and trains an adversarial neural network to predict the rewards. Different from previous papers, which are almost all based on uncertainty sampling, the key idea is to actively and efficiently select state-action pairs from both on-policy and off-policy experience, by discriminating the queried expert and unqueried generated data and maximizing the efficiency of value function learning. We call this method adversarial reward query with successor representation. We evaluate the proposed method with a simulated human on a state-based 2D navigation task, robotic control tasks and image-based video games, which have high-dimensional observations and complex state dynamics. The results show that the proposed method significantly outperforms uncertainty-based methods on learning reward models, achieving better query efficiency, where the adversarial discriminator can make the agent learn human behavior more efficiently and the SR can select states which have a stronger impact on the value function. Moreover, the proposed method can also learn to avoid unsafe states when training the reward model.
Deep learning reveals hidden interactions in complex systems ; Rich phenomena from complex systems have long intrigued researchers, and yet modeling system micro-dynamics and inferring the forms of interaction remain challenging for conventional data-driven approaches, being generally established by human scientists. In this study, we propose AgentNet, a model-free data-driven framework consisting of deep neural networks to reveal and analyze the hidden interactions in complex systems from observed data alone. AgentNet utilizes a graph attention network with novel variable-wise attention to model the interaction between individual agents, and employs various encoders and decoders that can be selectively applied to any desired system. Our model successfully captured a wide variety of simulated complex systems, namely cellular automata (discrete), the Vicsek model (continuous), and active Ornstein-Uhlenbeck particles (non-Markovian), in which, notably, AgentNet's visualized attention values coincided with the true interaction strength and exhibited collective behavior that was absent in the training data. A demonstration with empirical data from a flock of birds showed that AgentNet could identify hidden interaction ranges exhibited by real birds, which cannot be detected by conventional velocity correlation analysis. We expect our framework to open a novel path to investigating complex systems and to provide insight into general process-driven modeling.
Generalized Linear Models for Longitudinal Data with Biased Sampling Designs: A Sequential Offsetted Regressions Approach ; Biased sampling designs can be highly efficient when studying rare binary or low-variability continuous endpoints. We consider longitudinal data settings in which the probability of being sampled depends on a repeatedly measured response through an outcome-related, auxiliary variable. Such auxiliary variable- or outcome-dependent sampling improves observed response and possibly exposure variability over random sampling, even though the auxiliary variable is not of scientific interest. For analysis, we propose a generalized linear model based approach using a sequence of two offsetted regressions. The first estimates the relationship of the auxiliary variable to response and covariate data using an offsetted logistic regression model. The offset hinges on the assumed known ratio of sampling probabilities for different values of the auxiliary variable. Results from the auxiliary model are used to estimate observation-specific probabilities of being sampled conditional on the response and covariates, and these probabilities are then used to account for bias in the second, target population model. We provide asymptotic standard errors accounting for uncertainty in the estimation of the auxiliary model, and perform simulation studies demonstrating substantial bias reduction, correct coverage probability, and improved design efficiency over simple random sampling designs. We illustrate the approaches with two examples.
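A minimal sketch of the first-stage offsetted logistic regression using statsmodels, purely to illustrate where the known sampling-ratio offset enters; the data frame columns, the ratio value, and the sign convention of the offset are made-up placeholders, not taken from the paper.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis data: auxiliary variable Z (0/1), response Y, covariate X,
# and an assumed known ratio of sampling probabilities for Z=1 versus Z=0.
df = pd.DataFrame({
    "Z": np.random.binomial(1, 0.3, 500),
    "Y": np.random.binomial(1, 0.4, 500),
    "X": np.random.normal(size=500),
})
sampling_ratio = 4.0   # pi(sampled | Z=1) / pi(sampled | Z=0), assumed known by design

# Stage 1: model Z given (Y, X) in the sample, with log(sampling_ratio) entering as a
# fixed offset so the fitted coefficients refer back to the target population.
design = sm.add_constant(df[["Y", "X"]])
offset = np.full(len(df), np.log(sampling_ratio))
aux_fit = sm.GLM(df["Z"], design, family=sm.families.Binomial(), offset=offset).fit()

# Stage 2 (not shown): use aux_fit to form observation-specific sampling probabilities
# given (Y, X), and carry them as an offset into the second, target-population model.
print(aux_fit.summary())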
Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent ; Machine learning has made tremendous progress in recent years, with models matching or even surpassing humans on a series of specialized tasks. One key element behind the progress of machine learning in recent years has been the ability to train machine learning models in large-scale distributed shared-memory and message-passing environments. Many of these models are trained employing variants of stochastic gradient descent (SGD) based optimization. In this paper, we introduce a general consistency condition covering communication-reduced and asynchronous distributed SGD implementations. Our framework, called elastic consistency, enables us to derive convergence bounds for a variety of distributed SGD methods used in practice to train large-scale machine learning models. The proposed framework declutters the implementation-specific convergence analysis and provides an abstraction to derive convergence bounds. We utilize the framework to analyze a sparsification scheme for distributed SGD methods in an asynchronous setting for convex and non-convex objectives. We implement the distributed SGD variant to train deep CNN models in an asynchronous shared-memory setting. Empirical results show that error feedback may not necessarily help in improving the convergence of sparsified asynchronous distributed SGD, which corroborates an insight suggested by our convergence analysis.
G2MF-WA: Geometric Multi-Model Fitting with Weakly Annotated Data ; In this paper we attempt to address the problem of geometric multi-model fitting by resorting to a few weakly annotated (WA) data points, which has been sparsely studied so far. In weak annotating, most of the manual annotations are supposed to be correct yet inevitably mixed with incorrect ones. The WA data can be naturally obtained in an interactive way for specific tasks; for example, in the case of homography estimation, one can easily annotate points on the same plane/object with a single label by observing the image. Motivated by this, we propose a novel method to make full use of the WA data to boost the multi-model fitting performance. Specifically, a graph for model proposal sampling is first constructed using the WA data, given the prior that WA data annotated with the same weak label have a high probability of being assigned to the same model. By incorporating this prior knowledge into the calculation of edge probabilities, vertices (i.e., data points) lying on/near the latent model are likely to connect together and further form a subset (cluster) for effective proposal generation. With the proposals generated, the alpha-expansion is adopted for labeling, and our method in return updates the proposals. This works in an iterative way. Extensive experiments validate our method and show that the proposed method produces noticeably better results than state-of-the-art techniques in most cases.
Adaptation of a deep learning malignancy model from full-field digital mammography to digital breast tomosynthesis ; Mammography-based screening has helped reduce the breast cancer mortality rate, but has also been associated with potential harms due to low specificity, leading to unnecessary exams or procedures, and low sensitivity. Digital breast tomosynthesis (DBT) improves on conventional mammography by increasing both sensitivity and specificity and is becoming common in clinical settings. However, deep learning (DL) models have been developed mainly on conventional 2D full-field digital mammography (FFDM) or scanned film images. Due to a lack of large annotated DBT datasets, it is difficult to train a model on DBT from scratch. In this work, we present methods to generalize a model trained on FFDM images to DBT images. In particular, we use average histogram matching (HM) and DL fine-tuning methods to generalize a FFDM model to the 2D maximum intensity projection (MIP) of DBT images. In the proposed approach, the differences between the FFDM and DBT domains are reduced via HM and then the base model, which was trained on abundant FFDM images, is fine-tuned. When evaluating on image patches extracted around identified findings, we are able to achieve similar areas under the receiver operating characteristic curve (ROC AUC) of $\sim 0.9$ for FFDM and $\sim 0.85$ for MIP images, as compared to a ROC AUC of $\sim 0.75$ when tested directly on MIP images.
Stability analysis and constraints on interacting viscous cosmology ; In this work we study the evolution of a spatially flat Universe by considering a viscous dark matter and perfect fluids for dark energy and radiation, including an interaction term between dark matter and dark energy. In the first part, we analyse the general properties of the Universe by performing a stability analysis, and then we constrain the free parameters of the model using the latest, cosmology-independent measurements of the Hubble parameter. We find consistency between the viscosity coefficient and the condition imposed by the second law of thermodynamics. The second part is dedicated to constraining the free parameters of the interacting viscous model (IVM) for three particular cases: the viscous model (VM), the interacting model (IM), and the perfect fluid case (the concordance model). We report the deceleration parameter to be $q_0 = -0.54^{+0.06}_{-0.05}$, $-0.58^{+0.05}_{-0.04}$, $-0.58^{+0.05}_{-0.05}$, $-0.63^{+0.02}_{-0.02}$, together with the jerk parameter $j_0 = 0.87^{+0.06}_{-0.09}$, $0.94^{+0.04}_{-0.06}$, $0.91^{+0.06}_{-0.10}$, $1.0$, for the IVM, VM, IM, and $\Lambda$CDM respectively, where the uncertainties correspond to 68% CL. It is worth mentioning that all the particular cases are in good agreement with $\Lambda$CDM, in some cases producing even better fits, with the advantage of eliminating some problems that afflict the standard cosmological model.
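For context, the two kinematic parameters quoted above have the standard definitions, in terms of the scale factor $a(t)$ and its derivatives evaluated today:

$$ q_0 \;=\; -\left.\frac{\ddot{a}\,a}{\dot{a}^{2}}\right|_{t_0}, \qquad j_0 \;=\; \left.\frac{\dddot{a}\,a^{2}}{\dot{a}^{3}}\right|_{t_0}, $$

so that an accelerating expansion corresponds to $q_0 < 0$, and $\Lambda$CDM predicts $j_0 = 1$ exactly.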
Global Sensitivity Analysis on Numerical Solver Parameters of Particle-In-Cell Models in Particle Accelerator Systems ; Every computer model depends on numerical input parameters that are chosen according to mostly conservative but rigorous numerical or empirical estimates. These parameters could for example be the step size for time integrators, a seed for pseudo-random number generators, a threshold or the number of grid points to discretize a computational domain. In case a numerical model is enhanced with new algorithms and modelling techniques, the numerical influence on the quantities of interest, the running time as well as the accuracy is often initially unknown. Usually parameters are chosen on a trial-and-error basis, neglecting the computational cost versus accuracy aspects. As a consequence the cost per simulation might be unnecessarily high, which wastes computing resources. Hence, it is essential to identify the most critical numerical parameters and to analyze systematically their effect on the result in order to minimize the time-to-solution without losing significantly on accuracy. Relevant parameters are identified by global sensitivity studies where Sobol' indices are common measures. These sensitivities are obtained by surrogate models based on polynomial chaos expansion. In this paper, we first introduce the general methods for uncertainty quantification. We then demonstrate their use on numerical solver parameters to reduce the computational costs and discuss further model improvements based on the sensitivity analysis. The sensitivities are evaluated for neighbouring bunch simulations of the existing PSI Injector II and PSI Ring as well as the proposed Daedalus Injector cyclotron, and for simulations of the rf electron gun of the Argonne Wakefield Accelerator.
A faithful analytical effective one body waveform model for spin-aligned, moderately eccentric, coalescing black hole binaries ; We present a new effective-one-body (EOB) model for eccentric binary coalescences. The model stems from the state-of-the-art model TEOBResumSSM for circularized coalescing black-hole binaries, which is modified to explicitly incorporate eccentricity effects both in the radiation reaction and in the waveform. Using Regge-Wheeler-Zerilli type calculations of the gravitational wave losses as benchmarks, we find that a rather accurate ($\sim 1\%$) expression for the radiation reaction along mildly eccentric orbits ($e \sim 0.3$) is given by dressing the current, EOB-resummed, circularized angular momentum flux with a leading-order Newtonian-like prefactor valid along general orbits. An analogous approach is implemented for the waveform multipoles. The model is then completed by the usual merger-ringdown part informed by circularized numerical relativity (NR) simulations. The model is validated against the 22 publicly available NR simulations calculated by the Simulating eXtreme Spacetimes (SXS) collaboration, with mild eccentricities, mass ratios between 1 and 3, and up to rather large dimensionless spin values ($\pm 0.7$). The maximum EOB/NR unfaithfulness, calculated with the Advanced LIGO noise, is at most of order 3%. The analytical framework presented here should be seen as a promising starting point for developing highly faithful waveform templates driven by eccentric dynamics for present, and possibly future, gravitational wave detectors.
WeaklySupervised Action Localization with ExpectationMaximization MultiInstance Learning ; Weaklysupervised action localization requires training a model to localize the action segments in a video given only the videolevel action label. It can be solved under the Multiple Instance Learning MIL framework, where a bag video contains multiple instances action segments. Since only the bag's label is known, the main challenge is to identify which key instances within the bag trigger the bag's label. Most previous models use attentionbased approaches, applying attention to generate the bag's representation from instances and then training it via the bag's classification. These models, however, implicitly violate the MIL assumption that instances in negative bags should be uniformly negative. In this work, we explicitly model the key instance assignment as a hidden variable and adopt an ExpectationMaximization EM framework. We derive two pseudolabel generation schemes to model the E and M steps and iteratively optimize the likelihood lower bound. We show that our EMMIL approach more accurately models both the learning objective and the MIL assumptions. It achieves stateoftheart performance on two standard benchmarks, THUMOS14 and ActivityNet1.2.
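A minimal numerical sketch of the EM alternation described above, using a linear instance scorer and a top-k pseudo-labeling rule in the E step; the scorer, the top-k rule, and the toy bags are illustrative assumptions, not the paper's actual pseudo-label generation schemes or network.

```python
import numpy as np

rng = np.random.default_rng(0)

def scores(X, w):
    """Instance-level scores from a linear scorer (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-X @ w))

def e_step(X, w, bag_label, k=2):
    """Pseudo-label generation: in a positive bag, mark the top-k scoring
    instances as positive; in a negative bag, all instances stay negative."""
    y = np.zeros(len(X))
    if bag_label == 1:
        y[np.argsort(-scores(X, w))[:k]] = 1.0
    return y

def m_step(X, y, w, lr=0.1, steps=50):
    """Maximize the pseudo-labelled likelihood with a few gradient steps."""
    for _ in range(steps):
        p = scores(X, w)
        w = w + lr * X.T @ (y - p) / len(X)
    return w

# Toy bags of 5-dim instance features; bag 0 is positive, bag 1 is negative.
bags = [rng.normal(size=(8, 5)) + np.array([2.0, 0, 0, 0, 0]),
        rng.normal(size=(8, 5))]
bag_labels = [1, 0]

w = np.zeros(5)
for _ in range(10):                      # EM iterations
    for X, lab in zip(bags, bag_labels):
        y = e_step(X, w, lab)            # E: assign key instances
        w = m_step(X, y, w)              # M: refit the scorer

print("instance scores, positive bag:", np.round(scores(bags[0], w), 2))
print("instance scores, negative bag:", np.round(scores(bags[1], w), 2))
```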
EndToEnd Convolutional Neural Network for 3D Reconstruction of Knee Bones From BiPlanar XRay Images ; We present an endtoend Convolutional Neural Network CNN approach for 3D reconstruction of knee bones directly from two biplanar Xray images. Clinically, capturing the 3D models of the bones is crucial for surgical planning, implant fitting, and postoperative evaluation. Xray imaging significantly reduces the exposure of patients to ionizing radiation compared to Computed Tomography CT imaging, and is much more common and inexpensive compared to Magnetic Resonance Imaging MRI scanners. However, retrieving 3D models from such 2D scans is extremely challenging. In contrast to the common approach of statistically modeling the shape of each bone, our deep network learns the distribution of the bones' shapes directly from the training images. We train our model with both supervised and unsupervised losses using Digitally Reconstructed Radiograph DRR images generated from CT scans. To apply our model to Xray data, we use style transfer to transform between Xray and DRR modalities. As a result, at test time, without further optimization, our solution directly outputs a 3D reconstruction from a pair of biplanar Xray images, while preserving geometric constraints. Our results indicate that our deep learning model is very efficient, generalizes well and produces high quality reconstructions.
Band topology, Hubbard model, Heisenberg model, and DzyaloshinskiiMoriya interaction in twisted bilayer WSe2 ; We present a theoretical study of singleparticle and manybody properties of twisted bilayer WSe2. For singleparticle physics, we calculate the band topological phase diagram and electron local density of states LDOS, which are found to be correlated. By comparing our theoretical LDOS with those measured by scanning tunneling microscopy, we comment on the possible topological nature of the first moiré valence band. For manybody physics, we construct a generalized Hubbard model on a triangular lattice based on the calculated singleparticle moiré bands. We show that a layer potential difference, arising, for example, from an applied electric field, can drastically change the noninteracting moiré bands, tune the spinorbit coupling in the Hubbard model, control the charge excitation gap of the Mott insulator at half filling, and generate an effective DzyaloshinskiiMoriya interaction in the effective Heisenberg model for the Mott insulator. Our theoretical results agree with transport experiments on the same system in several key aspects, and establish twisted bilayer WSe2 as a highly tunable system for studying and simulating strongly correlated phenomena in the Hubbard model.
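As a schematic illustration of the kind of generalized Hubbard model referred to above, a triangular-lattice Hamiltonian with spin-dependent hopping phases (standing in for the field-tunable spinorbit coupling) and on-site repulsion can be written as follows; the symbols t_{ij}, phi_{ij}, and U are placeholders, not the fitted moiré parameters of the paper.

```latex
% Schematic generalized Hubbard model on the moire triangular lattice.
% t_{ij}: hopping amplitudes, \phi_{ij}: spin-dependent phases standing in
% for the tunable spin-orbit coupling, U: on-site repulsion (all illustrative).
H = \sum_{\langle ij\rangle,\sigma}
      \left( t_{ij}\, e^{\,i\sigma\phi_{ij}}\, c^{\dagger}_{i\sigma} c_{j\sigma}
             + \mathrm{h.c.} \right)
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```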
Classifying Constructive Comments ; We introduce the Constructive Comments Corpus C3, comprising 12,000 annotated news comments, intended to help build new tools for online communities to improve the quality of their discussions. We define constructive comments as highquality comments that make a contribution to the conversation. We explain the crowd worker annotation scheme and define a taxonomy of subcharacteristics of constructiveness. The quality of the annotation scheme and the resulting dataset is evaluated using measurements of interannotator agreement, expert assessment of a sample, and by the constructiveness subcharacteristics, which we show provide a proxy for the general constructiveness concept. We provide models for constructiveness trained on C3 using both featurebased and a variety of deep learning approaches and demonstrate that these models capture general rather than topic or domainspecific characteristics of constructiveness, through domain adaptation experiments. We examine the role that length plays in our models, as comment length could be easily gamed if models depend heavily upon this feature. By examining the errors made by each model and their distribution by length, we show that the best performing models are less correlated with comment length. The constructiveness corpus and our experiments pave the way for a moderation tool focused on promoting comments that make a contribution, rather than only filtering out undesirable content.
Learning Furniture Compatibility with Graph Neural Networks ; We propose a graph neural network GNN approach to the problem of predicting the stylistic compatibility of a set of furniture items from images. While most existing results are based on siamese networks which evaluate pairwise compatibility between items, the proposed GNN architecture exploits relational information among groups of items. We present two GNN models, both of which comprise a deep CNN that extracts a feature representation for each image, a gated recurrent unit GRU network that models interactions between the furniture items in a set, and an aggregation function that calculates the compatibility score. In the first model, a generalized contrastive loss function that promotes the generation of clustered embeddings for items belonging to the same furniture set is introduced. Also, in the first model, the edge function between nodes in the GRU and the aggregation function are fixed in order to limit model complexity and allow training on smaller datasets; in the second model, the edge function and aggregation function are learned directly from the data. We demonstrate stateoftheart accuracy for compatibility prediction and fillintheblank tasks on the Bonn and Singapore furniture datasets. We further introduce a new dataset, called the Target Furniture Collections dataset, which contains over 6000 furniture items that have been handcurated by stylists to make up 1632 compatible sets. We also demonstrate superior prediction accuracy on this dataset.
Deep Generation of Coq Lemma Names Using Elaborated Terms ; Coding conventions for naming, spacing, and other essentially stylistic properties are necessary for developers to effectively understand, review, and modify source code in large software projects. Consistent conventions in verification projects based on proof assistants, such as Coq, increase in importance as projects grow in size and scope. While conventions can be documented and enforced manually at high cost, emerging approaches automatically learn and suggest idiomatic names in Javalike languages by applying statistical language models on large code corpora. However, due to its powerful language extension facilities and fusion of type checking and computation, Coq is a challenging target for automated learning techniques. We present novel generation models for learning and suggesting lemma names for Coq projects. Our models, based on multiinput neural networks, are the first to leverage syntactic and semantic information from Coq's lexer tokens in lemma statements, parser syntax trees, and kernel elaborated terms for naming; the key insight is that learning from elaborated terms can substantially boost model performance. We implemented our models in a toolchain, dubbed Roosterize, and applied it on a large corpus of code derived from the Mathematical Components family of projects, known for its stringent coding conventions. Our results show that Roosterize substantially outperforms baselines for suggesting lemma names, highlighting the importance of using multiinput models and elaborated terms.
Neural Additive Models Interpretable Machine Learning with Neural Nets ; Deep neural networks DNNs are powerful blackbox predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high stakes decisionmaking domains such as healthcare. We propose Neural Additive Models NAMs which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing stateoftheart generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees. To demonstrate this, we show how NAMs can be used for multitask learning on synthetic data and on the COMPAS recidivism data due to their composability, and demonstrate that the differentiability of NAMs allows them to train more complex interpretable models for COVID19.
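A minimal PyTorch sketch of the additive structure described above: one small network per input feature, with predictions formed by summing the per-feature contributions. The hidden sizes, toy data, and plain ReLU subnetworks are simplifying assumptions; the published NAMs use specialized units and regularization not shown here.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small MLP applied to one scalar input feature."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):               # x: (batch, 1)
        return self.net(x)

class NAM(nn.Module):
    """Neural additive model: prediction is a sum of per-feature networks."""
    def __init__(self, num_features):
        super().__init__()
        self.feature_nets = nn.ModuleList([FeatureNet() for _ in range(num_features)])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):               # x: (batch, num_features)
        contributions = [f(x[:, j:j + 1]) for j, f in enumerate(self.feature_nets)]
        return torch.cat(contributions, dim=1).sum(dim=1) + self.bias

# Toy regression: y = 2*x0 + sin(3*x1) + noise.
x = torch.rand(512, 2) * 2 - 1
y = 2 * x[:, 0] + torch.sin(3 * x[:, 1]) + 0.05 * torch.randn(512)

model = NAM(num_features=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("final MSE:", float(loss))
```

Because each feature's contribution is a one-dimensional function, it can be plotted directly, which is what makes the model interpretable.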
Selecting Informative Contexts Improves Language Model Finetuning ; Language model finetuning is essential for modern natural language processing, but is computationally expensive and timeconsuming. Further, the effectiveness of finetuning is limited by the inclusion of training examples that negatively affect performance. Here we present a general finetuning method that we call information gain filtration for improving the overall training efficiency and final performance of language model finetuning. We define the information gain of an example as the improvement on a test metric after training on that example. A secondary learner is then trained to approximate this quantity. During finetuning, this learner selects informative examples and skips uninformative ones. We show that our method has consistent improvement across datasets, finetuning tasks, and language model architectures. For example, we achieve a median perplexity of 54.0 on a books dataset compared to 57.3 for standard finetuning. We present statistical evidence that offers insight into the improvements of our method over standard finetuning. The generality of our method leads us to propose a new paradigm for language model finetuning we encourage researchers to release pretrained secondary learners on common corpora to promote efficient and effective finetuning, thereby improving the performance and reducing the overall energy footprint of language model finetuning.
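A toy sketch of the three stages implied above: measure per-example information gain as the improvement of a held-out metric after one training step, fit a secondary learner to predict that gain from the example itself, then skip examples predicted to be uninformative. A linear regression model stands in for the language model, and the gain definition, ridge secondary learner, and threshold rule are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Toy stand-in for "fine-tuning": a linear model trained one example at a time.
true_w = rng.normal(size=10)
X_val = rng.normal(size=(200, 10))
y_val = X_val @ true_w
w = np.zeros(10)

def val_loss(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

def one_step(w, x, y, lr=0.05):
    return w + lr * (y - x @ w) * x      # single SGD step on one example

# Phase 1: measure "information gain" of a pool of candidate examples.
pool_X = rng.normal(size=(500, 10))
pool_y = pool_X @ true_w + rng.normal(scale=2.0, size=500)   # some noisy, unhelpful examples
gains = np.array([val_loss(w) - val_loss(one_step(w, x, y))
                  for x, y in zip(pool_X, pool_y)])

# Phase 2: a secondary learner approximates the gain from the example itself.
secondary = Ridge(alpha=1.0).fit(pool_X, gains)

# Phase 3: filtered fine-tuning -- skip examples predicted to be uninformative.
stream_X = rng.normal(size=(2000, 10))
stream_y = stream_X @ true_w + rng.normal(scale=2.0, size=2000)
keep = secondary.predict(stream_X) >= np.quantile(secondary.predict(stream_X), 0.5)
for x, y, k in zip(stream_X, stream_y, keep):
    if k:
        w = one_step(w, x, y)
print("validation loss after filtered training:", round(val_loss(w), 4))
```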
Ensemble Forecasting for Intraday Electricity Prices Simulating Trajectories ; Recent studies concerning the point electricity price forecasting have shown evidence that the hourly German Intraday Continuous Market is weakform efficient. Therefore, we take a novel, advanced approach to the problem. A probabilistic forecast of the hourly intraday electricity prices is performed by simulating trajectories in every trading window to obtain a realistic ensemble that allows for more efficient intraday trading and redispatch. A generalized additive model is fitted to the price differences with the assumption that they follow a zeroinflated distribution, precisely a mixture of the Dirac and the Student's tdistributions. Moreover, the mixing term is estimated using a highdimensional logistic regression with lasso penalty. We model the expected value and volatility of the series using i.a. autoregressive and notrade effects or load, wind and solar generation forecasts and accounting for the nonlinearities in e.g. time to maturity. Both the insample characteristics and forecasting performance are analysed using a rolling window forecasting study. Multiple versions of the model are compared to several benchmark models and evaluated using probabilistic forecasting measures and significance tests. The study aims to forecast the price distribution in the German Intraday Continuous Market in the last 3 hours of trading, but the approach allows for application to other continuous markets, especially in Europe. The results demonstrate the superiority of the mixture model over the benchmarks, gaining the most from the modelling of the volatility. They also indicate that the introduction of XBID reduced the market volatility.
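A minimal sketch of the trajectory-simulation idea behind the ensemble forecast: price increments are drawn from a zero-inflated mixture of a Dirac mass at zero (no trade) and a Student's t distribution, then accumulated into paths. The parameters below are purely illustrative; in the paper they would come from the fitted GAM and the lasso-penalised logistic model for the mixing probability.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_price_diff_paths(p_zero, df, loc, scale, horizon=12, n_paths=1000):
    """Simulate trajectories of price differences from a zero-inflated mixture:
    with probability p_zero the increment is exactly 0 (Dirac part), otherwise
    it is drawn from a Student's t distribution."""
    is_zero = rng.random((n_paths, horizon)) < p_zero
    t_draws = stats.t.rvs(df, loc=loc, scale=scale,
                          size=(n_paths, horizon), random_state=rng)
    increments = np.where(is_zero, 0.0, t_draws)
    return np.cumsum(increments, axis=1)          # cumulative price changes

# Illustrative parameters (placeholders, not fitted values).
paths = simulate_price_diff_paths(p_zero=0.35, df=4, loc=0.0, scale=1.5)
last_price = 42.0
ensemble = last_price + paths                      # ensemble of price trajectories
print("5%/50%/95% quantiles at the last step:",
      np.percentile(ensemble[:, -1], [5, 50, 95]).round(2))
```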
Modeling Urban Growth and Form with Spatial Entropy ; Entropy is one of the physical bases for the definition of fractal dimension, and the generalized fractal dimension is defined in terms of Rényi entropy. Using fractal dimension, we can describe urban growth and form and characterize spatial complexity. A number of fractal models and measurements have been proposed for urban studies. However, the precondition for applying fractal dimension is to find scaling relations in cities. In the absence of scaling properties, we can make use of entropy functions and measurements. This paper is devoted to researching how to describe urban growth by using spatial entropy. By analogy with fractal dimension growth models of cities, a pair of entropy increase models can be derived and a set of entropybased measurements can be constructed to describe the urban growing process and patterns. First, logistic function and Boltzmann equation are utilized to model the entropy increase curves of urban growth. Second, a series of indexes based on spatial entropy are used to characterize urban form. Further, multifractal dimension spectrums are generalized to spatial entropy spectrums. Conclusions are drawn as follows. Entropy and fractal dimension have both an intersection and different spheres of application in urban research. Thus, for a given spatial measurement scale, fractal dimension can often be replaced by spatial entropy for simplicity. The models and measurements presented in this work are significant for integrating entropy and fractal dimension into the same framework of urban spatial analysis and understanding spatial complexity of cities.
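Two of the ingredients named above are easy to state concretely: the generalized (Rényi) entropy of a discrete spatial distribution and a logistic curve for entropy growth over time. The sketch below is a generic illustration with made-up zone data and parameters, not the paper's calibrated indexes.

```python
import numpy as np

def renyi_entropy(p, q):
    """Generalized (Renyi) entropy of a discrete spatial distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit as q -> 1
    return np.log(np.sum(p ** q)) / (1.0 - q)

def logistic_entropy_growth(t, S_max, S0, r):
    """Logistic curve for the growth of spatial entropy over time t."""
    return S_max / (1.0 + (S_max / S0 - 1.0) * np.exp(-r * t))

# Example: entropy of population shares over 16 zones, and a growth curve.
zones = np.random.default_rng(0).pareto(2.0, size=16) + 1.0
print("Shannon:", round(renyi_entropy(zones, 1.0), 3),
      " Renyi(q=2):", round(renyi_entropy(zones, 2.0), 3))
print("entropy at t=0,5,10:",
      [round(logistic_entropy_growth(t, S_max=np.log(16), S0=0.5, r=0.6), 3)
       for t in (0, 5, 10)])
```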
Online Scheduling of a Residential Microgrid via MonteCarlo Tree Search and a Learned Model ; The uncertainty of distributed renewable energy brings significant challenges to the economic operation of microgrids. Conventional online optimization approaches require a forecast model. However, accurately forecasting renewable power generation is still a tough task. To achieve online scheduling of a residential microgrid RM that does not need a forecast model to predict the future PVwind and load power sequences, this paper investigates the use of a reinforcement learning RL approach to tackle this challenge. Specifically, based on the recent development of ModelBased Reinforcement Learning, MuZero, we investigate its application to the RM scheduling problem. To accommodate the characteristics of the RM scheduling application, an optimization framework that combines the modelbased RL agent with mathematical optimization techniques is designed, and long shortterm memory LSTM units are adopted to extract features from the past renewable generation and load sequences. At each time step, the optimal decision is obtained by conducting MonteCarlo tree search MCTS with a learned model and solving an optimal power flow subproblem. In this way, this approach can sequentially make operational decisions online without relying on a forecast model. The numerical simulation results demonstrate the effectiveness of the proposed algorithm.
How to Close SimReal Gap Transfer with Segmentation ; One fundamental difficulty in robotic learning is the simreal gap problem. In this work, we propose to use segmentation as the interface between perception and control, as a domaininvariant state representation. We identify two sources of simreal gap: one is the dynamics simreal gap, the other is the visual simreal gap. To close the dynamics simreal gap, we propose to use closedloop control. For complex tasks with segmentation mask input, we further propose to learn a closedloop modelfree control policy with a deep neural network using imitation learning. To close the visual simreal gap, we propose to learn a perception model in the real environment using simulated targets plus real background images, without using any real world supervision. We demonstrate this methodology in an eyeinhand grasping task. We train a closedloop control policy model that takes the segmentation as input using simulation. We show that this control policy is able to transfer from simulation to the real environment. The closedloop control policy is not only robust with respect to discrepancies between the dynamic model of the simulated and real robot, but is also able to generalize to unseen scenarios where the target is moving, and it even learns to recover from failures. We train the perception segmentation model using training data generated by composing real background images with simulated images of the target. Combining the control policy learned from simulation with the perception model, we achieve an impressive 88% success rate in grasping a tiny sphere with a real robot.
Markov Chains for Horizons MARCH. I. Identifying Biases in Fitting Theoretical Models to Event Horizon Telescope Observations ; We introduce a new Markov Chain Monte Carlo MCMC algorithm with parallel tempering for fitting theoretical models of horizonscale images of black holes to the interferometric data from the Event Horizon Telescope EHT. The algorithm implements forms of the noise distribution in the data that are accurate for all signaltonoise ratios. In addition to being trivially parallelizable, the algorithm is optimized for high performance, achieving 1 million MCMC chain steps in under 20 seconds on a single processor. We use synthetic data for the 2017 EHT coverage of M87 that are generated based on analytic as well as General Relativistic Magnetohydrodynamic GRMHD model images to explore several potential sources of biases in fitting models to sparse interferometric data. We demonstrate that a very small number of data points that lie near salient features of the interferometric data exert disproportionate influence on the inferred model parameters. We also show that the preferred orientations of the EHT baselines introduce significant biases in the inference of the orientation of the model images. Finally, we discuss strategies that help identify the presence and severity of such biases in realistic applications.
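The core mechanism named above, MCMC with parallel tempering, can be illustrated with a generic toy sketch: several chains target tempered versions of a bimodal posterior, and occasional swap moves between neighbouring temperatures let the cold chain cross between modes. The toy posterior, ladder, and step sizes are assumptions for illustration only and have nothing to do with the optimized MARCH implementation or EHT likelihoods.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta):
    """Toy bimodal log-posterior standing in for an image-model likelihood."""
    return np.logaddexp(-0.5 * np.sum((theta - 2.0) ** 2),
                        -0.5 * np.sum((theta + 2.0) ** 2))

n_temps, n_dim, n_steps = 6, 2, 5000
betas = 0.5 ** np.arange(n_temps)            # temperature ladder, beta = 1/T
chains = rng.normal(size=(n_temps, n_dim))
cold_samples = []

for step in range(n_steps):
    # Within-temperature Metropolis updates.
    for k in range(n_temps):
        prop = chains[k] + 0.5 * rng.normal(size=n_dim)
        if np.log(rng.random()) < betas[k] * (log_post(prop) - log_post(chains[k])):
            chains[k] = prop
    # Swap proposal between a random pair of neighbouring temperatures.
    k = rng.integers(n_temps - 1)
    dlog = (betas[k] - betas[k + 1]) * (log_post(chains[k + 1]) - log_post(chains[k]))
    if np.log(rng.random()) < dlog:
        chains[[k, k + 1]] = chains[[k + 1, k]]
    cold_samples.append(chains[0].copy())

cold_samples = np.array(cold_samples[1000:])
print("fraction of cold-chain samples near each mode:",
      np.mean(cold_samples[:, 0] > 0).round(2),
      np.mean(cold_samples[:, 0] < 0).round(2))
```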
Modeling the propagation of riots, collective behaviors, and epidemics ; This paper is concerned with a family of ReactionDiffusion systems that we introduced in 15, and that generalizes the SIR type models from epidemiology. Such systems are now also used to describe collective behaviors.In this paper, we propose a modeling approach for these apparently diverse phenomena through the example of the dynamics of social unrest. The model involves two quantities the level of social unrest, or more generally activity, u, and a field of social tension v, which play asymmetric roles. We think of u as the actually observed or explicit quantity while v is an ambiant, sometimes implicit, field of susceptibility that modulates the dynamics of u. In this article, we explore this class of model and prove several theoretical results based on the framework developed in 15, of which the present work is a companion paper. We particularly emphasize here two subclasses of systems tension inhibiting and tension enhancing. These are characterized by respectively a negative or a positivefeedback of the unrest on social tension. We establish several properties for these classes and also study some extensions. In particular, we describe the behavior of the system following an initial surge of activity. We show that the model can give rise to many diverse qualitative dynamics. We also provide a variety of numerical simulations to illustrate our results and to reveal further properties and open questions.
Sparse Network Optimization for Synchronization ; We propose new mathematical optimization models for generating sparse dynamical graphs, or networks, that can achieve synchronization. The synchronization phenomenon is studied using the Kuramoto model, defined in terms of the adjacency matrix of the graph and the coupling strength of the network, modelling the socalled coupled oscillators. Besides sparsity, we aim to obtain graphs which have good connectivity properties, resulting in small coupling strength for synchronization. We formulate three mathematical optimization models for this purpose. Our first model is a mixed integer optimization problem, subject to ODE constraints, reminiscent of an optimal control problem. As expected, this problem is computationally very challenging, if not impossible, to solve, not only because it involves binary variables but also because some of its variables are functions. The second model is a continuous relaxation of the first one, and the third is a discretization of the second, which is computationally tractable by employing standard optimization software. We design dynamical graphs that synchronize, by solving the relaxed problem and applying a practical algorithm for various graph sizes, with randomly generated intrinsic natural frequencies and initial phase variables. We test robustness of these graphs by carrying out numerical simulations with random data and constructing the expected value of the network's order parameter and its variance under this random data, as a guide for assessment.
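A minimal sketch of the simulation side of this setup: integrating the Kuramoto model on a given adjacency matrix with random natural frequencies and initial phases, and reporting the order parameter r used to judge synchronization. The graphs, coupling strength, and integration settings below are illustrative, not the optimized designs of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def kuramoto_order(A, K, omega, theta0, dt=0.01, steps=5000):
    """Integrate the Kuramoto model on the graph with adjacency matrix A
    and coupling strength K; return the final order parameter r in [0, 1]."""
    theta = theta0.copy()
    n = len(theta)
    for _ in range(steps):
        phase_diff = np.sin(theta[None, :] - theta[:, None])   # sin(theta_j - theta_i)
        dtheta = omega + (K / n) * np.sum(A * phase_diff, axis=1)
        theta = theta + dt * dtheta
    return np.abs(np.mean(np.exp(1j * theta)))

n = 20
omega = rng.normal(0.0, 0.5, n)          # intrinsic natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, n)    # initial phase variables

A_full = np.ones((n, n)) - np.eye(n)     # dense graph
A_ring = np.zeros((n, n))                # sparse ring graph
for i in range(n):
    A_ring[i, (i + 1) % n] = A_ring[(i + 1) % n, i] = 1

print("order parameter, dense graph :", round(kuramoto_order(A_full, 2.0, omega, theta0), 3))
print("order parameter, sparse ring :", round(kuramoto_order(A_ring, 2.0, omega, theta0), 3))
```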
The role of multiple repetitions on the size of a rumor ; We propose a mathematical model to measure how multiple repetitions may influence the ultimate proportion of the population never hearing a rumor during a given outbreak. The model is a multidimensional continuoustime Markov chain that can be seen as a generalization of the MakiThompson model for the propagation of a rumor within a homogeneously mixing population. In the wellknown basic model, the population is made up of spreaders, ignorants and stiflers, and any spreader attempts to transmit the rumor to the other individuals via directed contacts. In case the contacted individual is an ignorant, it becomes a spreader, while in the other two cases the initiating spreader turns into a stifler. The process in a finite population will eventually reach an equilibrium situation, where individuals are either stiflers or ignorants. We generalize the model by assuming that each ignorant becomes a spreader only after hearing the rumor a predetermined number of times. We identify and analyze a suitable limiting dynamical system of the model, and we prove limit theorems that characterize the ultimate proportion of individuals in the different classes of the population.
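A Monte Carlo sketch of the k-hearing generalization described above: spreaders contact individuals uniformly at random, an ignorant only turns spreader after k hearings, and a spreader contacting a non-ignorant turns stifler. The transition rules are a plausible reading of the abstract rather than the paper's exact Markov chain, and the population size is kept small for speed.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_rumor(n=2000, k=2):
    """Jump-chain simulation of a Maki-Thompson-type rumor where an ignorant
    becomes a spreader only after hearing the rumor k times. Returns the
    final proportion of individuals who never heard the rumor."""
    hears = np.zeros(n, dtype=int)   # how many times each individual heard it
    state = np.zeros(n, dtype=int)   # 0 ignorant, 1 spreader, 2 stifler
    state[0], hears[0] = 1, k        # seed spreader (counted as having heard it)
    spreaders = {0}
    while spreaders:
        i = rng.choice(list(spreaders))          # a uniformly chosen spreader...
        j = rng.integers(n)                      # ...contacts a random individual
        if j == i:
            continue
        if state[j] == 0:                        # contacted an ignorant
            hears[j] += 1
            if hears[j] >= k:                    # becomes a spreader after k hearings
                state[j] = 1
                spreaders.add(j)
        else:                                    # contacted a spreader or stifler
            state[i] = 2                         # initiating spreader turns stifler
            spreaders.discard(i)
    return np.mean(hears == 0)

for k in (1, 2, 3):
    print(f"k = {k}: proportion never hearing the rumor ~ {simulate_rumor(k=k):.3f}")
```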
IntraProcessing Methods for Debiasing Neural Networks ; As deep learning models become tasked with more and more decisions that impact human lives, such as criminal recidivism, loan repayment, and face recognition for law enforcement, bias is becoming a growing concern. Debiasing algorithms are typically split into three paradigms preprocessing, inprocessing, and postprocessing. However, in computer vision or natural language applications, it is common to start with a large generic model and then finetune to a specific usecase. Pre or inprocessing methods would require retraining the entire model from scratch, while postprocessing methods only have blackbox access to the model, so they do not leverage the weights of the trained model. Creating debiasing algorithms specifically for this finetuning usecase has largely been neglected. In this work, we initiate the study of a new paradigm in debiasing research, intraprocessing, which sits between inprocessing and postprocessing methods. Intraprocessing methods are designed specifically to debias large models which have been trained on a generic dataset and finetuned on a more specific task. We show how to repurpose existing inprocessing methods for this usecase, and we also propose three baseline algorithms random perturbation, layerwise optimization, and adversarial finetuning. All of our techniques can be used for all popular group fairness measures such as equalized odds or statistical parity difference. We evaluate these methods across three popular datasets from the AIF360 toolkit, as well as on the CelebA faces dataset. Our code is available at httpsgithub.comabacusaiintraprocessingdebiasing.
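One of the baselines named above, random perturbation, lends itself to a compact sketch: sample weight perturbations of a trained model and keep the copy with the best fairness/accuracy trade-off on validation data. The toy data, the statistical-parity-style gap, the multiplicative noise, and the scoring rule are illustrative assumptions, not the paper's exact algorithm or its AIF360 metrics.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def parity_gap(model, x, group):
    """Absolute difference in positive-prediction rates between two groups."""
    with torch.no_grad():
        pred = (model(x).squeeze() > 0).float()
    return abs(pred[group == 0].mean() - pred[group == 1].mean()).item()

def accuracy(model, x, y):
    with torch.no_grad():
        return ((model(x).squeeze() > 0).float() == y).float().mean().item()

# Toy data with a sensitive attribute correlated with one feature.
x = torch.randn(2000, 5)
group = (x[:, 0] > 0).long()
y = ((x[:, 1] + 0.5 * x[:, 0]) > 0).float()

# A small trained network stands in for the large fine-tuned model.
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    nn.functional.binary_cross_entropy_with_logits(model(x).squeeze(), y).backward()
    opt.step()

# Random-perturbation intra-processing: multiplicative noise on the weights,
# keep the perturbed copy with the best accuracy-minus-unfairness score.
best_state, best_score = model.state_dict(), -1e9
for _ in range(200):
    candidate = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
    candidate.load_state_dict(model.state_dict())
    with torch.no_grad():
        for p in candidate.parameters():
            p.mul_(1 + 0.1 * torch.randn_like(p))
    score = accuracy(candidate, x, y) - parity_gap(candidate, x, group)
    if score > best_score:
        best_score, best_state = score, candidate.state_dict()

model.load_state_dict(best_state)
print("post-debiasing parity gap:", round(parity_gap(model, x, group), 3))
```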
Salienteye Maximizing Engagement While Maintaining Artistic Style on Instagram Using Deep Neural Networks ; Instagram has become a great venue for amateur and professional photographers alike to showcase their work. It has, in other words, democratized photography. Generally, photographers take thousands of photos in a session, from which they pick a few to showcase their work on Instagram. Photographers trying to build a reputation on Instagram have to strike a balance between maximizing their followers' engagement with their photos, while also maintaining their artistic style. We used transfer learning to adapt Xception, which is a model for object recognition trained on the ImageNet dataset, to the task of engagement prediction and utilized Gram matrices generated from VGG19, another object recognition model trained on ImageNet, for the task of style similarity measurement on photos posted on Instagram. Our models can be trained on individual Instagram accounts to create personalized engagement prediction and style similarity models. Once trained on their accounts, users can have new photos sorted based on predicted engagement and style similarity to their previous work, thus enabling them to upload photos that not only have the potential to maximize engagement from their followers but also maintain their style of photography. We trained and validated our models on several Instagram accounts, showing it to be adept at both tasks, also outperforming several baseline models and human annotators.
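The style-similarity ingredient mentioned above rests on Gram matrices of convolutional feature maps. A minimal sketch follows; in the described pipeline the feature maps would come from selected VGG19 layers applied to two Instagram photos, whereas random tensors are used here purely to keep the example self-contained, and the mean-squared Gram distance is one common choice rather than necessarily the exact measure used.

```python
import torch

def gram_matrix(features):
    """Gram matrix of a conv feature map of shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

def style_distance(feats_a, feats_b):
    """Mean squared difference between Gram matrices across a set of layers."""
    return sum(torch.mean((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(feats_a, feats_b)) / len(feats_a)

# Stand-ins for VGG19 feature maps of two photos at two layers.
photo1_feats = [torch.randn(64, 56, 56), torch.randn(128, 28, 28)]
photo2_feats = [torch.randn(64, 56, 56), torch.randn(128, 28, 28)]
print("style distance:", float(style_distance(photo1_feats, photo2_feats)))
```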
Feature Extraction for Novelty Detection in Network Traffic ; Data representation plays a critical role in the performance of novelty detection or "anomaly detection" methods in machine learning. The data representation of network traffic often determines the effectiveness of these models as much as the model itself. The wide range of novel events that network operators need to detect e.g., attacks, malware, new applications, changes in traffic demands introduces the possibility for a broad range of possible models and data representations. In each scenario, practitioners must spend significant effort extracting and engineering features that are most predictive for that situation or application. While anomaly detection is wellstudied in computer networking, much existing work develops specific models that presume a particular representation often IPFIXNetFlow. Yet, other representations may result in higher model accuracy, and the rise of programmable networks now makes it more practical to explore a broader range of representations. To facilitate such exploration, we develop a systematic framework, opensource toolkit, and public Python library that makes it both possible and easy to extract and generate features from network traffic and perform an endtoend evaluation of these representations across the most prevalent modern novelty detection models. We first develop and publicly release an opensource tool, an accompanying Python library NetML, and an endtoend pipeline for novelty detection in network traffic. Second, we apply this tool to five different novelty detection problems in networking, across a range of scenarios from attack detection to novel device detection. Our findings provide general insights and guidelines concerning which features appear to be more appropriate for particular situations.
Knowledge Distillation Beyond Model Compression ; Knowledge distillation KD is commonly deemed as an effective model compression technique in which a compact model student is trained under the supervision of a larger pretrained model or an ensemble of models teacher. Various techniques have been proposed since the original formulation, which mimic different aspects of the teacher such as the representation space, decision boundary, or intradata relationship. Some methods replace the oneway knowledge distillation from a static teacher with collaborative learning between a cohort of students. Despite the recent advances, a clear understanding of where knowledge resides in a deep neural network and an optimal method for capturing knowledge from teacher and transferring it to student remains an open question. In this study, we provide an extensive study on nine different KD methods which covers a broad spectrum of approaches to capture and transfer knowledge. We demonstrate the versatility of the KD framework on different datasets and network architectures under varying capacity gaps between the teacher and student. The study provides intuition for the effects of mimicking different aspects of the teacher and derives insights from the performance of the different distillation approaches to guide the design of more effective KD methods. Furthermore, our study shows the effectiveness of the KD framework in learning efficiently under varying severity levels of label noise and class imbalance, consistently providing generalization gains over standard training. We emphasize that the efficacy of KD goes much beyond a model compression technique and it should be considered as a generalpurpose training paradigm which offers more robustness to common challenges in the realworld datasets compared to the standard training procedure.
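For readers unfamiliar with the basic formulation that the nine surveyed methods build on, the classic response-based distillation objective combines a temperature-softened KL term against the teacher with the usual cross-entropy on labels. The sketch below is that generic formulation (with arbitrary temperature and mixing weight), not any particular method evaluated in the study.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Response-based KD: soften both logit sets with temperature T, penalise
    the KL divergence to the teacher, and mix in cross-entropy on labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy batch: 8 examples, 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print("KD loss:", float(loss))
```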
Deep Contextual Embeddings for Address Classification in Ecommerce ; Ecommerce customers in developing nations like India tend to follow no fixed format while entering shipping addresses. Parsing such addresses is challenging because of a lack of inherent structure or hierarchy. It is imperative to understand the language of addresses, so that shipments can be routed without delays. In this paper, we propose a novel approach towards understanding customer addresses by deriving motivation from recent advances in Natural Language Processing NLP. We also formulate different preprocessing steps for addresses using a combination of edit distance and phonetic algorithms. Then we approach the task of creating vector representations for addresses using Word2Vec with TFIDF, BiLSTM and BERT based approaches. We compare these approaches with respect to the subregion classification task for North and South Indian cities. Through experiments, we demonstrate the effectiveness of a generalized RoBERTa model, pretrained over a large address corpus for the language modelling task. Our proposed RoBERTa model achieves a classification accuracy of around 90% with minimal text preprocessing for the subregion classification task, outperforming all other approaches. Once pretrained, the RoBERTa model can be finetuned for various downstream tasks in supply chain like pincode suggestion and geocoding. The model generalizes well for such tasks even with limited labelled data. To the best of our knowledge, this is the first of its kind research proposing a novel approach of understanding customer addresses in ecommerce domain by pretraining language models and finetuning them for different purposes.
A Computationally Tractable Framework for Nonlinear Dynamic Multiscale Modeling of Membrane Fabric ; A generalpurpose computational homogenization framework is proposed for the nonlinear dynamic analysis of membranes exhibiting complex microscale andor mesoscale heterogeneity characterized by inplane periodicity that cannot be effectively treated by a conventional method, such as woven fabrics. The framework is a generalization of the finite element squared or FE2 method in which a localized portion of the periodic subscale structure is modeled using finite elements. The numerical solution of displacement driven problems involving this model can be adapted to the context of membranes by a variant of the KlinkelGovindjee method [1], originally proposed for using finite strain, threedimensional material models in beam and shell elements. This approach relies on numerical enforcement of the plane stress constraint and is enabled by the principle of frame invariance. Computational tractability is achieved by introducing a regressionbased surrogate model informed by a physicsinspired training regimen in which FE2 is utilized to simulate a variety of numerical experiments including uniaxial, biaxial and shear straining of a material coupon. Several alternative surrogate models are evaluated including an artificial neural network. The framework is demonstrated and validated for a realistic Mars landing application involving supersonic inflation of a parachute canopy made of woven fabric.
Differentiable Programming for Hyperspectral Unmixing using a Physicsbased Dispersion Model ; Hyperspectral unmixing is an important remote sensing task with applications including material identification and analysis. Characteristic spectral features make many pure materials identifiable from their visibletoinfrared spectra, but quantifying their presence within a mixture is a challenging task due to nonlinearities and factors of variation. In this paper, spectral variation is considered from a physicsbased approach and incorporated into an endtoend spectral unmixing algorithm via differentiable programming. The dispersion model is introduced to simulate realistic spectral variation, and an efficient method to fit the parameters is presented. Then, this dispersion model is utilized as a generative model within an analysisbysynthesis spectral unmixing algorithm. Further, a technique for inverse rendering using a convolutional neural network to predict parameters of the generative model is introduced to enhance performance and speed when training data is available. Results achieve stateoftheart on both infrared and visibletonearinfrared VNIR datasets, and show promise for the synergy between physicsbased models and deep learning in hyperspectral unmixing in the future.
RGBIR Crossmodality Person ReID based on TeacherStudent GAN Model ; RGBInfrared RGBIR person reidentification ReID is a technology where the system can automatically identify the same person appearing at different parts of a video when light is unavailable. The critical challenge of this task is the crossmodality gap of features under different modalities. To solve this challenge, we proposed a TeacherStudent GAN model TSGAN to adapt different domains and guide the ReID backbone to learn better ReID information. 1 In order to get corresponding RGBIR image pairs, the RGBIR Generative Adversarial Network GAN was used to generate IR images. 2 To kickstart the training of identities, a ReID Teacher module was trained under IR modality person images, which is then used to guide its Student counterpart in training. 3 Likewise, to better adapt different domain features and enhance model ReID performance, three TeacherStudent loss functions were used. Unlike other GAN based models, the proposed model only needs the backbone module at the test stage, making it more efficient and resourcesaving. To showcase our model's capability, we did extensive experiments on the newlyreleased SYSUMM01 RGBIR ReID benchmark and achieved superior performance to the stateoftheart with 49.8% Rank1 and 47.4% mAP.
A parsimonious model for spatial transmission and heterogeneity in the COVID19 propagation ; Raw data on the cumulative number of deaths at a country level generally indicate a spatially variable distribution of the incidence of COVID19 disease. An important issue is to determine whether this spatial pattern is a consequence of environmental heterogeneities, such as the climatic conditions, during the course of the outbreak. Another fundamental issue is to understand the spatial spreading of COVID19. To address these questions, we consider four candidate epidemiological models with varying complexity in terms of initial conditions, contact rates and nonlocal transmissions, and we fit them to French mortality data with a mixed probabilisticODE approach. Using standard statistical criteria, we select the model with nonlocal transmission corresponding to a diffusion on the graph of counties that depends on the geographic proximity, with timedependent contact rate and spatially constant parameters. This original spatially parsimonious model suggests that in a geographically middle size centralized country such as France, once the epidemic is established, the effect of global processes such as restriction policies, sanitary measures and social distancing overwhelms the effect of local factors. Additionally, this modeling approach reveals the latent epidemiological dynamics including the local level of immunity, and allows us to evaluate the role of nonlocal interactions on the future spread of the disease. In view of its theoretical and numerical simplicity and its ability to accurately track the COVID19 epidemic curves, the framework we develop here, in particular the nonlocal model and the associated estimation procedure, is of general interest in studying spatial dynamics of epidemics.
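The nonlocal-transmission idea selected above, diffusion on a graph of counties weighted by geographic proximity with a time-dependent contact rate, can be illustrated with a minimal SIR-type sketch. The compartmental structure, parameter values, and toy geometry below are illustrative assumptions; the paper's candidate models and its mixed probabilistic-ODE estimation differ in detail.

```python
import numpy as np

def simulate_graph_sir(W, beta_t, gamma, I0, days=120, dt=0.25):
    """SIR dynamics on a graph of counties: the force of infection in county i
    mixes local and neighbouring prevalence through the row-normalised
    proximity matrix W; beta_t is a time-dependent contact rate."""
    n = W.shape[0]
    S, I, R = np.ones(n) - I0, I0.copy(), np.zeros(n)
    traj = []
    for step in range(int(days / dt)):
        coupling = W @ I                         # nonlocal transmission term
        new_inf = beta_t(step * dt) * S * coupling
        S = S - dt * new_inf
        I = I + dt * (new_inf - gamma * I)
        R = R + dt * gamma * I
        traj.append(I.copy())
    return np.array(traj)

# Toy "country": 30 counties on a line, proximity decaying with distance.
n = 30
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
W = np.exp(-dist / 2.0)
W /= W.sum(axis=1, keepdims=True)

I0 = np.zeros(n); I0[0] = 1e-3                   # epidemic seeded in one county
beta_t = lambda t: 0.45 if t < 30 else 0.12      # drop mimics restriction policies
traj = simulate_graph_sir(W, beta_t, gamma=0.1, I0=I0)
print("peak prevalence in the first 5 counties:", traj.max(axis=0)[:5].round(4))
```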
Impact of Medical Data Imprecision on Learning Results ; Test data measured by medical instruments often carry imprecise ranges that include the true values. The latter are not obtainable in virtually all cases. Most learning algorithms, however, carry out arithmetical calculations that are subject to uncertain influence in both the learning process to obtain models and the application of the learned models in, e.g., prediction. In this paper, we initiate a study on the impact of imprecision on prediction results in a healthcare application where a pretrained model is used to predict future state of hyperthyroidism for patients. We formulate a model for data imprecisions. Using parameters to control the degree of imprecision, imprecise samples for comparison experiments can be generated using this model. Further, a group of measures are defined to evaluate the different impacts quantitatively. More specifically, the statistics to measure the inconsistent prediction for individual patients are defined. We perform experimental evaluations to compare prediction results based on the data from the original dataset and the corresponding ones generated from the proposed imprecision model using the long shortterm memory LSTM network. The results against a real world hyperthyroidism dataset provide insights into how small imprecisions can cause large ranges of predicted results, which could cause mislabeling and inappropriate actions, treatments or no treatments, for individual patients.
SummEval Reevaluating Summarization Evaluation ; The scarcity of comprehensive uptodate studies on evaluation metrics for text summarization and the lack of consensus regarding evaluation protocols continue to inhibit progress. We address the existing shortcomings of summarization evaluation methods along five dimensions 1 we reevaluate 14 automatic evaluation metrics in a comprehensive and consistent fashion using neural summarization model outputs along with expert and crowdsourced human annotations, 2 we consistently benchmark 23 recent summarization models using the aforementioned automatic evaluation metrics, 3 we assemble the largest collection of summaries generated by models trained on the CNNDailyMail news dataset and share it in a unified format, 4 we implement and share a toolkit that provides an extensible and unified API for evaluating summarization models across a broad range of automatic metrics, 5 we assemble and share the largest and most diverse, in terms of model types, collection of human judgments of modelgenerated summaries on the CNNDaily Mail dataset annotated by both expert judges and crowdsource workers. We hope that this work will help promote a more complete evaluation protocol for text summarization as well as advance research in developing evaluation metrics that better correlate with human judgments.
Mix Dimension in Poincaré Geometry for 3D Skeletonbased Action Recognition ; Graph Convolutional Networks GCNs have already demonstrated their powerful ability to model the irregular data, e.g., skeletal data in human action recognition, providing an exciting new way to fuse rich structural information for nodes residing in different parts of a graph. In human action recognition, current works introduce a dynamic graph generation mechanism to better capture the underlying semantic skeleton connections and thus improves the performance. In this paper, we provide an orthogonal way to explore the underlying connections. Instead of introducing an expensive dynamic graph generation paradigm, we build a more efficient GCN on a Riemann manifold, which we think is a more suitable space to model the graph data, to make the extracted representations fit the embedding matrix. Specifically, we present a novel spatialtemporal GCN STGCN architecture which is defined via the Poincaré geometry such that it is able to better model the latent anatomy of the structure data. To further explore the optimal projection dimension in the Riemann space, we mix different dimensions on the manifold and provide an efficient way to explore the dimension for each STGCN layer. With the final resulted architecture, we evaluate our method on the two currently largest scale 3D datasets, i.e., NTU RGBD and NTU RGBD 120. The comparison results show that the model could achieve a superior performance under any given evaluation metrics with only 40% of the model size when compared with the previous best GCN method, which proves the effectiveness of our model.
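Two standard ingredients of Poincaré-geometry layers are the geodesic distance on the unit ball and the exponential map at the origin that projects Euclidean features onto the ball. The sketch below shows only these generic building blocks, with random features standing in for skeleton-graph node embeddings; the paper's actual ST-GCN layers and dimension-mixing strategy are not reproduced here.

```python
import torch

def poincare_distance(x, y, eps=1e-5):
    """Geodesic distance in the unit Poincare ball:
    d(x, y) = arccosh(1 + 2 ||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))."""
    sq_diff = torch.sum((x - y) ** 2, dim=-1)
    sq_x = torch.sum(x ** 2, dim=-1).clamp(max=1 - eps)
    sq_y = torch.sum(y ** 2, dim=-1).clamp(max=1 - eps)
    return torch.acosh(1 + 2 * sq_diff / ((1 - sq_x) * (1 - sq_y)))

def expmap0(v):
    """Exponential map at the origin: sends a tangent (Euclidean) vector,
    e.g. the output of a GCN layer, onto the Poincare ball."""
    norm = v.norm(dim=-1, keepdim=True).clamp(min=1e-9)
    return torch.tanh(norm) * v / norm

feats = torch.randn(4, 8)              # 4 nodes, 8-dim Euclidean features
z = expmap0(feats)
print("all points inside the ball:", bool((z.norm(dim=-1) < 1).all()))
print("distance(z0, z1):", float(poincare_distance(z[0], z[1])))
```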
Impact of the reflection model on the estimate of the properties of accreting black holes ; Relativistic reflection features in the Xray spectra of black hole binaries and AGNs originate from illumination of the inner part of the accretion disk by a hot corona. In the presence of high quality data and with the correct astrophysical model, Xray reflection spectroscopy can be quite a powerful tool to probe the strong gravity region, study the morphology of the accreting matter, measure black hole spins, and even test Einstein's theory of general relativity in the strong field regime. There are a few relativistic reflection models available today and developed by different groups. All these models present some differences and have a number of simplifications introducing systematic uncertainties. The question is whether different models provide different measurements of the properties of black holes and how to arrive at a common model for the whole Xray astronomy community. In this paper, we start exploring this issue by analyzing a Suzaku observation of the stellarmass black hole in GRS 1915+105 and simultaneous XMMNewton and NuSTAR observations of the supermassive black hole in MCG-6-30-15. The relativistic reflection component of these sources is fitted with RELCONV×REFLIONX, RELCONV×XILLVER, and RELXILL. We discuss the differences and the impact on the study of accreting black holes.
Pancakes as opposed to Swiss Cheese ; We examine a novel class of toy models of cosmological inhomogeneities by smoothly matching along a suitable hypersurface an arbitrary number of sections of quasiflat inhomogeneous and anisotropic SzekeresII models to sections of any spatially flat cosmology that can be described by the RobertsonWalker metric including de Sitter, anti de Sitter and Minkowski spacetimes. The resulting pancake models are quasiflat analogues to the well known spherical Swisscheese models found in the literature. Since SzekeresII models can be, in general, compatible with a wide range of sources dissipative fluids, mixtures of noncomoving fluids, mixtures of fluids with scalar or magnetic fields or gravitational waves, the pancake configurations we present allow for a description of a wide collection of localized sources embedded in a RobertsonWalker geometry. We provide various simple examples of arbitrary numbers of SzekeresII regions whose sources are comoving dust and energy flux interpreted as a field of peculiar velocities matched with Einstein de Sitter, LambdaCDM and de Sitter backgrounds. We also prove that the SzekeresII regions can be rigorously regarded as exact perturbations on a background defined by the matching discussed above. We believe that these models can be useful to test ideas on averaging and backreaction and on the effect of inhomogeneities on cosmic evolution and observations.
Learning the Pareto Front with Hypernetworks ; Multiobjective optimization MOO problems are prevalent in machine learning. These problems have a set of optimal solutions, called the Pareto front, where each point on the front represents a different tradeoff between possibly conflicting objectives. Recent MOO methods can target a specific desired ray in loss space however, most approaches still face two grave limitations i A separate model has to be trained for each point on the front; and ii The exact tradeoff must be known before the optimization process. Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training. We call this new setup ParetoFront Learning PFL. We describe an approach to PFL implemented using HyperNetworks, which we term Pareto HyperNetworks PHNs. PHN learns the entire Pareto front simultaneously using a single hypernetwork, which receives as input a desired preference vector and returns a Paretooptimal model whose loss vector is in the desired ray. The unified model is runtime efficient compared to training multiple models and generalizes to new operating points not used during training. We evaluate our method on a wide set of problems, from multitask regression and classification to fairness. PHNs learn the entire Pareto front at roughly the same time as learning a single point on the front and at the same time reach a better solution set. Furthermore, we show that PHNs can scale to generate large models like ResNet18. PFL opens the door to new applications where models are selected based on preferences that are only available at run time.
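A toy sketch of the hypernetwork idea described above: a small network maps a sampled preference vector to the weights of a tiny target model, and training minimizes a preference-weighted (linear scalarization) combination of two conflicting losses. The target architecture, scalarization objective, and data are illustrative assumptions and much simpler than the PHN variants studied in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ParetoHyperNet(nn.Module):
    """Hypernetwork mapping a 2-d preference vector to the weights of a
    small linear target model (a toy stand-in for PHN)."""
    def __init__(self, in_dim, pref_dim=2, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(pref_dim, hidden), nn.ReLU())
        self.w_head = nn.Linear(hidden, in_dim)      # produces target weights
        self.b_head = nn.Linear(hidden, 1)           # produces target bias

    def forward(self, pref, x):
        h = self.body(pref)
        w, b = self.w_head(h), self.b_head(h)
        return x @ w.unsqueeze(-1) + b               # prediction of generated model

# Two conflicting regression objectives on the same inputs.
x = torch.randn(256, 5)
y1 = x[:, 0:1] + 0.1 * torch.randn(256, 1)
y2 = -x[:, 0:1] + x[:, 1:2] + 0.1 * torch.randn(256, 1)

phn = ParetoHyperNet(in_dim=5)
opt = torch.optim.Adam(phn.parameters(), lr=1e-2)
for _ in range(2000):
    pref = torch.distributions.Dirichlet(torch.ones(2)).sample()   # random ray
    pred = phn(pref, x)
    losses = torch.stack([nn.functional.mse_loss(pred, y1),
                          nn.functional.mse_loss(pred, y2)])
    loss = (pref * losses).sum()          # linear scalarization along the ray
    opt.zero_grad(); loss.backward(); opt.step()

# At run time, any preference yields a corresponding operating point.
for p in ([0.9, 0.1], [0.5, 0.5], [0.1, 0.9]):
    pred = phn(torch.tensor(p), x)
    print(p, [round(float(nn.functional.mse_loss(pred, y)), 3) for y in (y1, y2)])
```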
GitEvolve Predicting the Evolution of GitHub Repositories ; Software development is becoming increasingly open and collaborative with the advent of platforms such as GitHub. Given its crucial role, there is a need to better understand and model the dynamics of GitHub as a social platform. Previous work has mostly considered the dynamics of traditional social networking sites like Twitter and Facebook. We propose GitEvolve, a system to predict the evolution of GitHub repositories and the different ways by which users interact with them. To this end, we develop an endtoend multitask sequential deep neural network that, given some seed events, simultaneously predicts which usergroup is next going to interact with a given repository, what the type of the interaction is, and when it happens. To facilitate learning, we use graph based representation learning to encode relationships between repositories. We map users to groups by modelling common interests to better predict popularity and to generalize to unseen users during inference. We introduce an artificial event type to better model varying levels of activity of repositories in the dataset. The proposed multitask architecture is generic and can be extended to model information diffusion in other social networks. In a series of experiments, we demonstrate the effectiveness of the proposed model, using multiple metrics and baselines. Qualitative analysis of the model's ability to predict popularity and forecast trends proves its applicability.
Initialization effects of nucleon profile on the yields in heavyion collisions at medium energies ; We study the problem of π production in heavy ion collisions in the context of the Isospindependent BoltzmannUehlingUhlenbeck IBUU transport model. We generated nucleon densities using two different models, the SkyrmeHartreeFock SHF model and the configuration interaction shell model SM. Indeed, internucleon correlations are explicitly taken into account in SM, while they are averaged in the SHF model. As an application of our theoretical frameworks, we calculated the π− and π+ yields in collisions of nuclei with A = 30-40 nucleons. We used different harmonic oscillator lengths bHO to generate the harmonic oscillator basis for SM in order to study both theoretical and experimental cases. It is found that the SM framework with bHO = 2.5 fm and SHF can be distinguished by the yield of π mesons; in this case the density distribution calculated by the shell model produces more π mesons in the collision. In comparison, SM with bHO = 2.0 fm is distinguished from SHF by the double π−/π+ ratios at different large impact parameters, from which one can find that the double π−/π+ ratios of SM change more smoothly and are smaller than those of SHF.
Modelling Type Ic Supernovae with TARDIS Hidden Helium in SN1994I ; Supernovae SNe with photospheric spectra devoid of Hydrogen and Helium features are generally classified as Type Ic SNe SNe Ic. However, there is ongoing debate as to whether Helium can be hidden in the ejecta of SNe Ic that is, Helium is present in the ejecta, but produces no obvious features in the spectra. We present the first application of the fast, 1D radiative transfer code TARDIS to a SN Ic, and we investigate the question of how much Helium can be hidden in the outer layers of the SN Ic ejecta. We generate TARDIS models for the nearby, wellobserved, and extensively modeled SN Ic 1994I, and we perform a code comparison to a different, wellestablished Monte Carlo based radiation transfer code. The code comparison shows that TARDIS produces consistent synthetic spectra for identical ejecta models of SN1994I. In addition, we perform a systematic experiment of adding outer He shells of varying masses to our SN1994I models. We find that an outer He shell of only 0.05 M⊙ produces strong optical and NIR He spectral features for SN1994I which are not present in observations, thus indicating that the SN1994I ejecta is almost fully He deficient compared to the He masses of typical Herich SN progenitors. Finally we show that the He I λ20851 line pseudo equivalent width of our modeled spectra for SN1994I could be used to infer the outer He shell mass, which suggests that NIR spectral followup of SNe Ic will be critical for addressing the hidden helium question for a statistical sample of SNe Ic.
Shedding light on the angular momentum evolution of binary neutron star merger remnants a semianalytic model ; The main features of the gravitational dynamics of binary neutron star systems are now well established. While the inspiral can be precisely described in the postNewtonian approximation, fully relativistic magnetohydrodynamical simulations are required to model the evolution of the merger and postmerger phase. However, the interpretation of the numerical results can often be nontrivial, so that toy models become a very powerful tool. Not only do they simplify the interpretation of the postmerger dynamics, but also allow to gain insights into the physics behind it. In this work, we construct a simple toy model that is capable of reproducing the whole angular momentum evolution of the postmerger remnant, from the merger to the collapse. We validate the model against several fully generalrelativistic numerical simulations employing a genetic algorithm, and against additional constraints derived from the spectral properties of the gravitational radiation. As a result, from the remarkably close overlap between the model predictions and the reference simulations within the first milliseconds after the merger, we are able to systematically shed light on the currently open debate regarding the source of the lowfrequency peaks of the gravitational wave power spectral density. Additionally, we also present two original relations connecting the angular momentum of the postmerger remnant at merger and collapse to initial properties of the system.
Improving seasonal forecast using probabilistic deep learning ; The path toward realizing the potential of seasonal forecasting and its socioeconomic benefits depends heavily on improving general circulation model based dynamical forecasting systems. To improve dynamical seasonal forecast, it is crucial to set up forecast benchmarks, and clarify forecast limitations posed by model initialization errors, formulation deficiencies, and internal climate variability. With huge cost in generating large forecast ensembles, and limited observations for forecast verification, the seasonal forecast benchmarking and diagnosing task proves challenging. In this study, we develop a probabilistic deep neural network model, drawing on a wealth of existing climate simulations to enhance seasonal forecast capability and forecast diagnosis. By leveraging complex physical relationships encoded in climate simulations, our probabilistic forecast model demonstrates favorable deterministic and probabilistic skill compared to stateoftheart dynamical forecast systems in quasiglobal seasonal forecast of precipitation and nearsurface temperature. We apply this probabilistic forecast methodology to quantify the impacts of initialization errors and model formulation deficiencies in a dynamical seasonal forecasting system. We introduce the saliency analysis approach to efficiently identify the key predictors that influence seasonal variability. Furthermore, by explicitly modeling uncertainty using variational Bayes, we give a more definitive answer to how the El Niño-Southern Oscillation, the dominant mode of seasonal variability, modulates global seasonal predictability.
AutoPrompt Eliciting Knowledge from Language Models with Automatically Generated Prompts ; The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fillintheblanks problems e.g., cloze tests is a natural approach for gauging such knowledge, however, its usage is limited by the manual effort and guesswork required to write suitable prompts. To address this, we develop AutoPrompt, an automated method to create prompts for a diverse set of tasks, based on a gradientguided search. Using AutoPrompt, we show that masked language models MLMs have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning, sometimes achieving performance on par with recent stateoftheart supervised models. We also show that our prompts elicit more accurate factual knowledge from MLMs than the manually created prompts on the LAMA benchmark, and that MLMs can be used as relation extractors more effectively than supervised relation extraction models. These results demonstrate that automatically generated prompts are a viable parameterfree alternative to existing probing methods, and as pretrained LMs become more sophisticated and capable, potentially a replacement for finetuning.
Compact stellar model in Tolman spacetime in presence of pressure anisotropy ; In this paper, we develop a new relativistic compact stellar model for a spherically symmetric anisotropic matter distribution. The model has been obtained through generating a new class of solutions by invoking the Tolman ansatz for one of the metric potentials grr and a physically reasonable selective profile of radial pressure. We have matched our obtained interior solution to the Schwarzschild exterior spacetime over the bounding surface of the compact star. These matching conditions together with the condition of vanishing radial pressure across the boundary of the star have been utilized to determine the model parameters. We have shown that the central pressure of the star depends on the parameter p0. We have estimated the range of p0 by using the recent data of compact stars 4U 160852 and Vela X1. The effect of p0 on different physical parameters e.g., pressure anisotropy, the subluminal velocity of sound, relativistic adiabatic index etc. has also been discussed. The developed model of the compact star is elaborately discussed both analytically and graphically to justify that it satisfies all the criteria demanded of a realistic star. From our analysis, we have shown that the effect of anisotropy becomes small for higher values of p0. The massradius MR relationship which indicates the maximum mass admissible for observed pulsars for a given surface density has also been investigated in our model. Moreover, the variation of radius and mass with central density has been shown, which allows us to estimate the central density for a given radius or mass of a compact star.
GPR-based Model Reconstruction System for Underground Utilities Using GPRNet ; Ground Penetrating Radar (GPR) is one of the most important non-destructive evaluation (NDE) instruments to detect and locate underground objects (i.e., rebars, utility pipes). Many previous studies focus only on GPR image-based feature detection, and none can process sparse GPR measurements to successfully reconstruct a very fine and detailed 3D model of underground objects for better visualization. To address this problem, this paper presents a novel robotic system to collect GPR data, localize the underground utilities, and reconstruct a dense point cloud model of the underground objects. This system is composed of three modules: 1) a visual-inertial-based GPR data collection module, which tags the GPR measurements with positioning information provided by an omnidirectional robot; 2) a deep neural network (DNN) migration module to interpret the raw GPR B-scan image into a cross-section of the object model; and 3) a DNN-based 3D reconstruction module, i.e., GPRNet, to generate the underground utility model as a fine 3D point cloud. Both the quantitative and qualitative experimental results verify that our method can generate a dense and complete point cloud model of pipe-shaped utilities from sparse input, i.e., raw GPR data with incompleteness and various noise. Experimental results on synthetic data and field test data further support the effectiveness of our approach.
High-Level Description of Robot Architecture ; Architectural Description (AD) is the backbone that facilitates the implementation and validation of robotic systems. In general, current high-level ADs reflect great variation and lead to various difficulties, including mixing ADs with implementation issues. They lack the qualities of being systematic and coherent, as well as lacking technical-related forms (e.g., icons of faces, computer screens). Additionally, a variety of languages exist for eliciting requirements, such as object-oriented analysis methods susceptible to inconsistency (e.g., those using multiple diagrams in UML and SysML). In this paper, we orient our research toward a more generic conceptualization of ADs in robotics. We apply a new modeling methodology, namely the Thinging Machine (TM), to describe the architecture of robotic systems. The focus of such an application is on high-level specification, which is one important aspect of realizing the design and implementation of such systems. TM modeling can be utilized in documentation and communication and as the first step in the system's design phase. Accordingly, sample robot architectures are re-expressed in terms of TM, thus developing 1) a static model that captures the robot's atemporal aspects, 2) a dynamic model that identifies states, and 3) a behavioral model that specifies the chronology of events in the system. This result shows a viable approach to robot modeling that determines a robot system's behavior through its static description.
Sensitivity analyses for effect modifiers not observed in the target population when generalizing treatment effects from a randomized controlled trial: Assumptions, models, effect scales, data scenarios, and implementation details ; Background: Randomized controlled trials are often used to inform policy and practice for broad populations. The average treatment effect (ATE) for a target population, however, may be different from the ATE observed in a trial if there are effect modifiers whose distribution in the target population differs from that in the trial. Methods exist to use trial data to estimate the target population ATE, provided the distributions of treatment effect modifiers are observed in both the trial and target population (an assumption that may not hold in practice). Methods: The proposed sensitivity analyses address the situation where a treatment effect modifier is observed in the trial but not the target population. These methods are based on an outcome model or on the combination of such a model and a weighting adjustment for observed differences between the trial sample and target population. They accommodate several types of outcome models: linear models (including single time outcome and pre- and post-treatment outcomes) for additive effects, and models with a log or logit link for multiplicative effects. We clarify the methods' assumptions and provide detailed implementation instructions. Illustration: We illustrate the methods using an example generalizing the effects of an HIV treatment regimen from a randomized trial to a relevant target population. Conclusion: These methods allow researchers and decision-makers to have more appropriate confidence when drawing conclusions about target population effects.
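As context for the weighting adjustment mentioned above, here is a hedged sketch of the standard inverse-odds-of-sampling estimator for transporting a trial ATE to a target population using covariates observed in both samples; the paper's sensitivity analyses extend this setting to a modifier observed only in the trial. Function and variable names are illustrative.

```python
# Hedged sketch of the base weighting adjustment: reweight trial units by the
# inverse odds of trial membership, estimated from covariates observed in both
# the trial and the target sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

def transported_ate(X_trial, treat, y, X_target):
    # Stack trial and target covariates and model P(in trial | X).
    X = np.vstack([X_trial, X_target])
    s = np.r_[np.ones(len(X_trial)), np.zeros(len(X_target))]
    p = LogisticRegression(max_iter=1000).fit(X, s).predict_proba(X_trial)[:, 1]
    w = (1 - p) / p                      # inverse odds of sampling weights
    treated, control = treat == 1, treat == 0
    mu1 = np.average(y[treated], weights=w[treated])
    mu0 = np.average(y[control], weights=w[control])
    return mu1 - mu0

# Toy usage with simulated data (target shifted in the covariates).
rng = np.random.default_rng(0)
X_trial = rng.normal(size=(500, 3)); X_target = rng.normal(0.5, 1, size=(2000, 3))
treat = rng.integers(0, 2, 500)
y = X_trial[:, 0] + treat * (1 + X_trial[:, 1]) + rng.normal(size=500)
print(transported_ate(X_trial, treat, y, X_target))
```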
SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses ; Deep learning has demonstrated its power in image rectification by leveraging the representation capacity of deep neural networks via supervised training based on a large-scale synthetic dataset. However, the model may overfit the synthetic images and not generalize well to real-world fisheye images due to the limited universality of a specific distortion model and the lack of explicit modeling of the distortion and rectification process. In this paper, we propose a novel self-supervised image rectification (SIR) method based on an important insight: the rectified results of distorted images of the same scene taken by different lenses should be the same. Specifically, we devise a new network architecture with a shared encoder and several prediction heads, each of which predicts the distortion parameter of a specific distortion model. We further leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters and exploit the intra- and inter-model consistency between them during training, thereby leading to a self-supervised learning scheme without the need for ground-truth distortion parameters or normal images. Experiments on a synthetic dataset and real-world fisheye images demonstrate that our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods. Self-supervised learning also improves the universality of distortion models while keeping their self-consistency.
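A minimal sketch of the self-supervision signal described above: rectified versions of two distorted views of the same scene should agree, so their disagreement can be penalized without ground-truth parameters. This is an illustration, not the released SIR training code.

```python
# Minimal sketch of the consistency loss: two distorted views of the same scene,
# after rectification with predicted distortion parameters, should agree.
import torch
import torch.nn.functional as F

def rectification_consistency_loss(rectified_view_a, rectified_view_b):
    # rectified_view_*: [B, C, H, W] images produced by a differentiable
    # warping module from the distortion parameters predicted for each lens.
    return F.l1_loss(rectified_view_a, rectified_view_b)

# Toy usage: in training, both tensors would come from the shared-encoder
# network and the warping module rather than from random numbers.
a, b = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(rectification_consistency_loss(a, b))
```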
Semi-Mechanistic Bayesian Modeling of COVID-19 with Renewal Processes ; We propose a general Bayesian approach to modeling epidemics such as COVID-19. The approach grew out of specific analyses conducted during the pandemic, in particular an analysis concerning the effects of non-pharmaceutical interventions (NPIs) in reducing COVID-19 transmission in 11 European countries. The model parameterizes the time-varying reproduction number Rt through a regression framework in which covariates can, for example, be governmental interventions or changes in mobility patterns. This allows a joint fit across regions and partial pooling to share strength. This innovation was critical to our timely estimates of the impact of lockdown and other NPIs in the European epidemics, whose validity was borne out by the subsequent course of the epidemic. Our framework provides a fully generative model for latent infections and observations deriving from them, including deaths, cases, hospitalizations, ICU admissions and seroprevalence surveys. One issue surrounding our model's use during the COVID-19 pandemic is the confounded nature of NPIs and mobility. We use our framework to explore this issue. We have open-sourced an R package, epidemia, implementing our approach in Stan. Versions of the model are used by New York State, Tennessee and Scotland to estimate the current situation and make policy decisions.
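The renewal process underlying this model family can be sketched in a few lines: expected new infections are the reproduction number times past infections weighted by the generation-interval distribution. The snippet is a toy simulation under assumed values, not the epidemia package itself.

```python
# Hedged sketch of the discrete renewal equation driving latent infections.
# The full model adds observation models (deaths, cases, ...) and partial pooling.
import numpy as np

def simulate_infections(R, g, i0=10.0):
    """R: array of reproduction numbers R_t; g: generation-interval pmf."""
    I = np.zeros(len(R)); I[0] = i0
    for t in range(1, len(R)):
        past = I[max(0, t - len(g)):t][::-1]          # most recent first
        I[t] = R[t] * np.sum(past * g[:len(past)])    # renewal equation
    return I

# Toy usage: R_t drops from 3 to 0.8 after an intervention at day 30.
R = np.where(np.arange(90) < 30, 3.0, 0.8)
g = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1])          # hypothetical interval pmf
print(simulate_infections(R, g)[:40].round(1))
```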
Scalarized Einstein-Born-Infeld-scalar Black Holes ; The phenomenon of spontaneous scalarization of Reissner-Nordstrom (RN) black holes has recently been found in an Einstein-Maxwell-scalar (EMS) model due to a non-minimal coupling between the scalar and Maxwell fields. Nonlinear electrodynamics, e.g., Born-Infeld (BI) electrodynamics, generalizes Maxwell's theory in the strong field regime. Non-minimally coupling the BI field to the scalar field, we study spontaneous scalarization of an Einstein-Born-Infeld-scalar (EBIS) model in this paper. We show that there are two types of scalarized black hole solutions, i.e., scalarized RN-like and Schwarzschild-like solutions. Although the behavior of scalarized RN-like solutions in the EBIS model is quite similar to that of scalarized solutions in the EMS model, we find that there exist significant differences between scalarized Schwarzschild-like solutions in the EBIS model and scalarized solutions in the EMS model. In particular, the domain of existence of scalarized Schwarzschild-like solutions possesses a certain region that is composed of two branches. The branch of larger horizon area is a family of disconnected scalarized solutions, which do not bifurcate from scalar-free black holes. However, the branch of smaller horizon area may or may not bifurcate from scalar-free black holes, depending on the parameters. Additionally, these two branches of scalarized solutions can both be entropically disfavored over comparable scalar-free black holes in some parameter regions.
Wavepacket Modelling of Broadband ShockAssociated Noise in Supersonic Jets ; We present a twopoint model to investigate the underlying source mechanisms for broadband shockassociated noise BBSAN in shockcontaining supersonic jets. In the model presented, the generation of BBSAN is assumed to arise from the nonlinear interaction between downstreampropagating coherent structures with the quasiperiodic shock cells in the jet plume. The turbulent perturbations are represented as axiallyextended wavepackets and the shock cells are modelled as a set of stationary waveguide modes. Unlike previous BBSAN models, the physical parameters describing the hydrodynamic components are not scaled using the acoustic field. Instead, the characteristics of both the turbulent and shock components are educed from largeeddy simulation and particle image velocimetry datasets. Apart from using extracted data, a reducedorder description of the wavepacket structure is obtained using parabolised stability equations PSE. The validity of the model is tested by comparing farfield sound pressure level predictions to azimuthallydecomposed experimental acoustic data from a cold Mach 1.5 underexpanded jet. At polar angles and frequencies where BBSAN dominates, good agreement in spectral shape and sound amplitude is observed for the first three azimuthal modes. Encouraging comparisons of the radiated noise spectra, in both frequency and amplitude, reinforce the suitability of using reducedorder linear wavepacket sources for predicting BBSAN peaks. On the other hand, the mismatch in sound amplitude at interpeak frequencies reveals the role of wavepacket jitter in the underlying sound generating mechanism.
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning ; Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low data regime. Why can we use relatively vanilla gradient descent algorithms (e.g., without strong regularization) to tune a model with hundreds of millions of parameters on datasets with only hundreds or thousands of labeled examples? In this paper, we argue that analyzing fine-tuning through the lens of intrinsic dimension provides us with empirical and theoretical intuitions to explain this remarkable phenomenon. We empirically show that common pretrained models have a very low intrinsic dimension; in other words, there exists a low-dimensional reparameterization that is as effective for fine-tuning as the full parameter space. For example, by optimizing only 200 trainable parameters randomly projected back into the full space, we can tune a RoBERTa model to achieve 90% of the full-parameter performance levels on MRPC. Furthermore, we empirically show that pretraining implicitly minimizes intrinsic dimension and, perhaps surprisingly, larger models tend to have lower intrinsic dimension after a fixed number of pretraining updates, at least in part explaining their extreme effectiveness. Lastly, we connect intrinsic dimensionality with low-dimensional task representations and compression-based generalization bounds to provide intrinsic-dimension-based generalization bounds that are independent of the full parameter count.
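A hedged sketch of the measurement idea, assuming PyTorch 2.x: freeze the pretrained weights and fine-tune only a d-dimensional vector that is randomly projected into the full parameter space. The paper uses structure-aware projections such as Fastfood; the dense projection and class names here are illustrative assumptions.

```python
# Sketch: theta = theta_0 + P z, where only the d-dimensional z is trained.
import torch
import torch.nn as nn
from torch.func import functional_call  # PyTorch 2.x

class IntrinsicDimWrapper(nn.Module):
    def __init__(self, model, d):
        super().__init__()
        self.model = model
        # Frozen starting point theta_0 and one random projection per tensor
        # (kept as plain dicts for brevity in this sketch).
        self.theta0 = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.proj = {n: torch.randn(p.numel(), d) / d ** 0.5
                     for n, p in model.named_parameters()}
        self.z = nn.Parameter(torch.zeros(d))   # the only trainable parameters

    def forward(self, x):
        params = {n: (self.theta0[n].flatten() + self.proj[n] @ self.z)
                     .view_as(self.theta0[n]) for n in self.theta0}
        return functional_call(self.model, params, (x,))

# Toy usage: a small net "fine-tuned" through a 50-dimensional subspace.
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
wrapped = IntrinsicDimWrapper(net, d=50)
out = wrapped(torch.randn(8, 20))
out.sum().backward()                 # gradients flow only into wrapped.z
print(wrapped.z.grad.shape)
```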
Residual Mean Field Model of Valence Quarks in the Nucleon ; We develop a nonperturbative model for valence parton distribution functions (PDFs) based on the mean-field interactions of valence quarks in the nucleonic interior. The main motivation for the model is to obtain a mean-field description of the valence quarks as a baseline to study the short-range quark-quark interactions that generate the high-x tail of PDFs. The model is based on the separation of the valence three-quark cluster and the residual system in the nucleon. The nucleon structure function is then calculated within an effective light-front diagrammatic approach, introducing nonperturbative light-front valence quark and residual wave functions. Within the model, a new relation is obtained between the position x_p of the peak of the x q_V(x) distribution of the valence quark and the effective mass of the residual system, m_R, in the form x_p ≈ (1/4)(1 - m_R/m_N) at the starting Q^2. This relation explains the difference in the peak positions for d and u quarks through the expected difference of the residual masses for valence d- and u-quark distributions. The parameters of the model are fixed by fitting the calculated valence quark distributions to phenomenological PDFs. This allowed us to estimate the overall mean-field contribution to the baryonic and momentum sum rules for valence d and u quarks. Finally, the evaluated parameters of the nonperturbative wave functions of the valence 3q cluster and the residual system can be used in the calculation of other quantities such as nucleon form factors, generalized parton distributions, and transverse momentum distributions.
Modeling compact binary signals and instrumental glitches in gravitational wave data ; Transient non-Gaussian noise in gravitational wave detectors, commonly referred to as glitches, poses challenges for inference of the astrophysical properties of detected signals when the two are coincident in time. Current analyses aim towards modeling and subtracting the glitches from the data using a flexible, morphology-independent model in terms of sine-Gaussian wavelets before the source properties of the signal are inferred using templates for the compact binary signal. We present a new analysis of gravitational wave data that contain both a signal and glitches by simultaneously modeling the compact binary signal in terms of templates and the instrumental glitches using sine-Gaussian wavelets. The model for the glitches is generic and can thus be applied to a wide range of glitch morphologies without any special tuning. The simultaneous modeling of the astrophysical signal with templates allows us to efficiently separate the signal from the glitches, as we demonstrate using simulated signals injected around real O2 glitches in the two LIGO detectors. We show that our new proposed analysis can separate overlapping glitches and signals, estimate the compact binary parameters, and provide ready-to-use glitch-subtracted data for downstream inference analyses.
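For concreteness, the elementary glitch building block referred to above, a sine-Gaussian wavelet, can be written as follows; the parameterization (central time, frequency, quality factor, amplitude, phase) is standard, while the specific values in the example are arbitrary.

```python
# Hedged sketch of the basic wavelet used to model glitches: a sine-Gaussian
# parameterized by central time t0, frequency f0, quality factor Q, amplitude A
# and phase phi. The analysis fits a variable number of such wavelets alongside
# a compact-binary template.
import numpy as np

def sine_gaussian(t, t0, f0, Q, A, phi):
    tau = Q / (2.0 * np.pi * f0)                     # Gaussian envelope width
    return A * np.exp(-((t - t0) ** 2) / tau ** 2) * np.cos(2 * np.pi * f0 * (t - t0) + phi)

# Toy usage: one wavelet sampled at 4096 Hz over one second.
t = np.arange(0, 1.0, 1.0 / 4096)
w = sine_gaussian(t, t0=0.5, f0=100.0, Q=8.0, A=1e-21, phi=0.0)
print(w.shape, np.abs(w).max())
```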
Model reduction techniques for the computation of extended Markov parameterizations for generalized Langevin equations ; The generalized Langevin equation is a model for the motion of coarsegrained particles where dissipative forces are represented by a memory term. The numerical realization of such a model requires the implementation of a stochastic delaydifferential equation and the estimation of a corresponding memory kernel. Here we develop a new approach for computing a datadriven Markov model for the motion of the particles, given equidistant samples of their velocity autocorrelation function. Our method bypasses the determination of the underlying memory kernel by representing it via up to about twenty auxiliary variables. The algorithm is based on a sophisticated variant of the Prony method for exponential interpolation and employs the Positive Real Lemma from model reduction theory to extract the associated Markov model. We demonstrate the potential of this approach for the test case of anomalous diffusion, where data are given analytically, and then apply our method to velocity autocorrelation data of molecular dynamics simulations of a colloid in a LennardJones fluid. In both cases, the VACF and the memory kernel can be reproduced very accurately. Moreover, we show that the algorithm can also handle input data with large statistical noise. We anticipate that it will be a very useful tool in future studies that involve dynamic coarsegraining of complex soft matter systems.
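As background for the exponential-interpolation step, here is a hedged sketch of the classical Prony method for fitting equidistant samples by a sum of exponentials; the paper's algorithm is a more sophisticated, stabilized variant combined with the Positive Real Lemma, so this is only the basic idea.

```python
# Hedged sketch of classical Prony interpolation: fit samples f_n by
# sum_j a_j z_j^n, which is the building block behind the memory-kernel
# representation through auxiliary variables.
import numpy as np

def prony(f, p):
    """f: equidistant samples; p: number of exponential modes."""
    N = len(f)
    # 1) Linear prediction: f[n] = -sum_k c[k] f[n-1-k] for n >= p.
    A = np.column_stack([f[p - k - 1:N - k - 1] for k in range(p)])
    c, *_ = np.linalg.lstsq(A, -f[p:], rcond=None)
    # 2) Roots of the prediction polynomial give the exponential bases z_j.
    z = np.roots(np.r_[1.0, c])
    # 3) Amplitudes a_j by least squares on the Vandermonde system.
    V = np.vander(z, N, increasing=True).T
    a, *_ = np.linalg.lstsq(V, f.astype(complex), rcond=None)
    return a, z

# Toy usage: two decaying exponentials recovered from 50 samples.
n = np.arange(50)
f = 2.0 * 0.9 ** n + 0.5 * 0.7 ** n
a, z = prony(f, p=2)
print(z.real.round(3), a.real.round(3))
```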
A Bayesian neural network predicts the dissolution of compact planetary systems ; Despite over three hundred years of effort, no solutions exist for predicting when a general planetary configuration will become unstable. We introduce a deep learning architecture to push forward this problem for compact systems. While current machine learning algorithms in this area rely on scientistderived instability metrics, our new technique learns its own metrics from scratch, enabled by a novel internal structure inspired from dynamics theory. Our Bayesian neural network model can accurately predict not only if, but also when a compact planetary system with three or more planets will go unstable. Our model, trained directly from short Nbody time series of raw orbital elements, is more than two orders of magnitude more accurate at predicting instability times than analytical estimators, while also reducing the bias of existing machine learning algorithms by nearly a factor of three. Despite being trained on compact resonant and nearresonant threeplanet configurations, the model demonstrates robust generalization to both nonresonant and higher multiplicity configurations, in the latter case outperforming models fit to that specific set of integrations. The model computes instability estimates up to five orders of magnitude faster than a numerical integrator, and unlike previous efforts provides confidence intervals on its predictions. Our inference model is publicly available in the SPOCK package, with training code opensourced.
Variational Multiscale Superresolution A datadriven approach for reconstruction and predictive modeling of unresolved physics ; The variational multiscale VMS formulation formally segregates the evolution of the coarsescales from the finescales. VMS modeling requires the approximation of the impact of the fine scales in terms of the coarse scales. In linear problems, our formulation reduces the problem of learning the subscales to learning the projected element Green's function basis coefficients. For the purpose of this approximation, a special neuralnetwork structure the variational superresolution NN VSRNN is proposed. The VSRNN constructs a superresolved model of the unresolved scales as a sum of the products of individual functions of coarse scales and physicsinformed parameters. Combined with a set of locally nondimensional features obtained by normalizing the input coarsescale and output subscale basis coefficients, the VSRNN provides a general framework for the discovery of closures for both the continuous and the discontinuous Galerkin discretizations. By training this model on a sequence of L2projected data and using the subscale to compute the continuous Galerkin subgrid terms, and the superresolved state to compute the discontinuous Galerkin fluxes, we improve the optimality and the accuracy of these methods for the convectiondiffusion problem, linear advection and turbulent channel flow. Finally, we demonstrate that in the investigated examples the present model allows generalization to outofsample initial conditions and Reynolds numbers. Perspectives are provided on datadriven closure modeling, limitations of the present approach, and opportunities for improvement.
An Analysis of Protected Health Information Leakage in Deep-Learning Based De-Identification Algorithms ; The increasing complexity of algorithms for analyzing medical data, including de-identification tasks, raises the possibility that complex algorithms are learning not just the general representation of the problem, but specifics of given individuals within the data. Modern legal frameworks specifically prohibit the intentional or accidental distribution of patient data, but have not addressed this potential avenue for leakage of such protected health information. Modern deep learning algorithms have the highest potential for such leakage due to the complexity of the models. Recent research in the field has highlighted such issues in non-medical data, but all analysis is likely to be data and algorithm specific. We therefore chose to analyze a state-of-the-art free-text de-identification algorithm based on LSTM (Long Short-Term Memory) and its potential for encoding any individual in the training set. Using the i2b2 Challenge Data, we trained and then analyzed the model to assess whether the output of the LSTM, before the compression layer of the classifier, could be used to estimate the membership of the training data. Furthermore, we attacked the model using different attacks, including the membership inference attack. Results indicate that the attacks could not determine whether members of the training data were distinguishable from non-members based on the model output. This indicates that the model does not provide any strong evidence for the identification of the individuals in the training data set, and there is not yet empirical evidence that it is unsafe to distribute the model for general use.
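A minimal sketch of the kind of membership test involved: compare per-example losses (or confidences) of training members and held-out non-members and measure how separable the two groups are; an AUC near 0.5, as reported for this model, means the attack fails. Names and the simulated data are illustrative.

```python
# Hedged sketch of a simple membership-inference audit via loss separability.
import numpy as np
from sklearn.metrics import roc_auc_score

def membership_auc(loss_members, loss_nonmembers):
    # Lower loss is treated as evidence of membership.
    scores = -np.concatenate([loss_members, loss_nonmembers])
    labels = np.r_[np.ones(len(loss_members)), np.zeros(len(loss_nonmembers))]
    return roc_auc_score(labels, scores)

# Toy usage: identical loss distributions give AUC near 0.5, i.e. no leakage
# detectable by this test.
rng = np.random.default_rng(1)
print(membership_auc(rng.normal(1.0, 0.5, 500), rng.normal(1.0, 0.5, 500)))
```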
Effects of thermal emission on Chandrasekhar's semi-infinite diffuse reflection problem ; Context: The analytical results of Chandrasekhar's semi-infinite diffuse reflection problem are crucial in the context of stellar or planetary atmospheres. However, the atmospheric emission effect was not taken into account in this model, and the solutions are applicable only to a diffusely scattering atmosphere in the absence of emission. Aim: We extend the model of the semi-infinite diffuse reflection problem by including the effect of thermal emission B(T), and present how this affects Chandrasekhar's analytical end results. Hence, we aim to generalize Chandrasekhar's model to provide a complete picture of this problem. Method: We use the Invariance Principle Method to find the radiative transfer equation appropriate for diffuse reflection in the presence of B(T). We then derive the modified scattering function S(mu, phi; mu0, phi0) for different kinds of phase functions. Results: We find that the scattering function S(mu, phi; mu0, phi0), as well as the diffusely reflected specific intensity I(0, mu; mu0), for different phase functions are modified due to the emission B(T) from the layer at tau = 0. In both cases, B(T) is added to the results of the scattering-only case derived by Chandrasekhar, with some multiplicative factors. Thus the diffusely reflected spectra will be enriched and will carry the temperature information of the tau = 0 layer. As the effects are additive in nature, our model reduces to the subcase of Chandrasekhar's scattering model when B(T) = 0. We conclude that our generalized model provides more accurate results due to the inclusion of the thermal emission effect in Chandrasekhar's semi-infinite atmosphere problem.
NemaNet: A convolutional neural network model for identification of nematodes in the soybean crop in Brazil ; Phytoparasitic nematodes (or phytonematodes) are causing severe damage to crops and generating large-scale economic losses worldwide. In soybean crops, annual losses are estimated at 10.6% of world production. Besides, identifying these species through microscopic analysis by an expert with taxonomy knowledge is often laborious, time-consuming, and susceptible to failure. In this perspective, robust and automatic approaches are necessary for identifying phytonematodes, capable of providing correct diagnoses for the classification of species and subsidizing the adoption of all control and prevention measures. This work presents a new public data set called NemaDataset, containing 3,063 microscopic images from five nematode species with the most significant damage relevance for the soybean crop. Additionally, we propose a new Convolutional Neural Network (CNN) model, defined as NemaNet, and a comparative assessment with thirteen popular CNN models, all of them representing the state of the art in classification and recognition. Considering the general average calculated for each model, NemaNet reached 96.99% accuracy on from-scratch training, while the best evaluation fold reached 98.03%. In training with transfer learning, the average accuracy reached 98.88% and the best evaluation fold reached 99.34%, corresponding to overall accuracy improvements of over 6.83% and 4.1% for from-scratch and transfer learning training, respectively, when compared to other popular models.
Revisiting Model's Uncertainty and Confidences for Adversarial Example Detection ; Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations that are crafted to generate Adversarial Examples (AEs). The AEs are imperceptible to humans and cause DNNs to misclassify them. Many defense and detection techniques have been proposed. The model's confidences and Dropout, as a popular way to estimate the model's uncertainty, have been used for AE detection, but they have shown limited success against black- and gray-box attacks. Moreover, state-of-the-art detection techniques have been designed for specific attacks or broken by others, need knowledge about the attacks, are not consistent, increase model parameter overhead, are time-consuming, or have latency in inference time. To trade off these factors, we revisit the model's uncertainty and confidences and propose a novel unsupervised ensemble AE detection mechanism that 1) uses the uncertainty method called SelectiveNet, and 2) processes model layer outputs, i.e., feature maps, to generate new confidence probabilities. The detection method is called Selective and Feature based Adversarial Detection (SFAD). Experimental results show that the proposed approach achieves better performance against black- and gray-box attacks than the state-of-the-art methods and achieves comparable performance against white-box attacks. Moreover, results show that SFAD is fully robust against High Confidence Attacks (HCAs) for MNIST and partially robust for the CIFAR-10 dataset.
Bounded Invariant Checking for Stateflow Programs ; Stateflow models are complex software models, often used as part of safetycritical software solutions designed with Matlab Simulink. They incorporate design principles that are typically very hard to verify formally. In particular, the standard exhaustive formal verification techniques are unlikely to scale well for the complex designs that are developed in industry. Furthermore, the Stateflow language lacks a formal semantics, which additionally hinders the formal analysis. To address these challenges, we lay here the foundations of a scalable technique for provably correct formal analysis of Stateflow models, with respect to invariant properties, based on bounded model checking BMC over symbolic executions. The crux of our technique is i a representation of the state space of Stateflow models as a symbolic transition system STS over the symbolic configurations of the model, as the basis for BMC, and ii application of incremental BMC, to generate verification results after each unrolling of the nextstate relation of the transition system. To this end, we develop a symbolic structural operational semantics SSOS for Stateflow, starting from an existing structural operational semantics SOS, and show the preservation of invariant properties between the two. Next, we define bounded invariant checking for STS over symbolic configurations as a satisfiability problem. We develop an automated procedure for generating the initial and nextstate predicates of the STS, and propose an encoding scheme of the bounded invariant checking problem as a set of constraints, ready for automated analysis with standard, offtheshelf satisfiability solvers. Finally, we present preliminary performance results by applying our tool on an illustrative example.
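The bounded invariant checking step can be illustrated with a small SMT-based unrolling; the toy counter system below stands in for the symbolic transition system generated from a Stateflow model, and the encoding is a simplified sketch rather than the paper's tool.

```python
# Hedged sketch of bounded invariant checking with the Z3 SMT solver: unroll the
# next-state relation k times and ask whether a state violating the invariant is
# reachable within the bound.
from z3 import Int, Solver, And, Or, sat

def bounded_invariant_check(k):
    s = Solver()
    x = [Int(f"x_{i}") for i in range(k + 1)]
    s.add(x[0] == 0)                                  # initial-state predicate
    for i in range(k):
        # Next-state relation: the counter increments, or resets once it hits 5.
        s.add(Or(And(x[i] < 5, x[i + 1] == x[i] + 1),
                 And(x[i] >= 5, x[i + 1] == 0)))
    # Invariant to check: x <= 5 in every reachable state up to depth k;
    # assert its negation and look for a satisfying (counterexample) trace.
    s.add(Or(*[x[i] > 5 for i in range(k + 1)]))
    return "violated" if s.check() == sat else f"holds up to bound {k}"

print(bounded_invariant_check(10))
```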
An Empirical Study on the Usage of BERT Models for Code Completion ; Code completion is one of the main features of modern Integrated Development Environments (IDEs). Its objective is to speed up code writing by predicting the next code tokens the developer is likely to write. Research in this area has substantially bolstered the predictive performance of these techniques. However, the support to developers is still limited to the prediction of the next few tokens to type. In this work, we take a step further in this direction by presenting a large-scale empirical study aimed at exploring the capabilities of state-of-the-art deep learning (DL) models in supporting code completion at different granularity levels, including single tokens, one or multiple entire statements, up to entire code blocks (e.g., the iterated block of a for loop). To this aim, we train and test several adapted variants of the recently proposed RoBERTa model, and evaluate its predictions from several perspectives, including: (i) metrics usually adopted when assessing DL generative models (i.e., BLEU score and Levenshtein distance); (ii) the percentage of perfect predictions (i.e., the predicted code snippets that match those written by developers); and (iii) the semantic equivalence of the generated code as compared to the one written by developers. The achieved results show that BERT models represent a viable solution for code completion, with perfect predictions ranging from 7%, obtained when asking the model to guess entire blocks, up to 58%, reached in the simpler scenario of a few tokens masked from the same code statement.
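To make the single-token scenario concrete, the snippet below queries an off-the-shelf RoBERTa masked LM through the Hugging Face fill-mask pipeline and scores the prediction with a plain Levenshtein distance; the study itself trains adapted RoBERTa variants on code and also masks whole statements and blocks, so this is only an illustration.

```python
# Hedged sketch: masked-token prediction plus the Levenshtein metric used for
# evaluation. The generic roberta-base checkpoint is not trained on code.
from transformers import pipeline

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

fill = pipeline("fill-mask", model="roberta-base")
predictions = fill("for i in range(10): total <mask> i")
best = predictions[0]["token_str"].strip()
print(best, levenshtein(best, "+="))   # distance to the token a developer wrote
```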
Fixed-d Renormalization Group Analysis of Conserved Surface Roughening ; Conserved surface roughening represents a special case of interface dynamics where the total height of the interface is conserved. Recently, it was suggested [F. Caballero et al., Phys. Rev. Lett. 121, 020601 (2018)] that the original continuum model, known as the Conserved Kardar-Parisi-Zhang (CKPZ) equation, is incomplete, as an additional nonlinearity is not forbidden by any symmetry in d > 1. In this work, we perform a detailed field-theoretic renormalization group (RG) analysis of a general stochastic model describing conserved surface roughening. Systematic power counting reveals an additional marginal interaction at the upper critical dimension, which appears also in the context of molecular beam epitaxy. Depending on the origin of the surface particles' mobility, the resulting model shows two different scaling regimes: if the particles move mainly due to gravity, the leading dispersion law is omega ~ k^2, and the mean-field approximation describing a flat interface is exact in any spatial dimension. On the other hand, if the particles move mainly due to the surface curvature, the interface becomes rough with the mean-field dispersion law omega ~ k^4, and the corrections to the scaling exponents must be taken into account. We show that the latter model consists of two subclasses of models that are decoupled to all orders of perturbation theory. Moreover, our RG analysis of the general model reveals that the universal scaling is described by a rougher interface than in the CKPZ universality class. The universal exponents are derived within the one-loop approximation in both the fixed-d and epsilon-expansion schemes, and their relation is discussed. We point out all important details behind these two schemes, which are often overlooked in the literature and whose misinterpretation might lead to inconsistent results.
Reduced Precision Strategies for Deep Learning: A High Energy Physics Generative Adversarial Network Use Case ; Deep learning is finding its way into high energy physics by replacing traditional Monte Carlo simulations. However, deep learning still requires an excessive amount of computational resources. A promising approach to making deep learning more efficient is to quantize the parameters of the neural networks to reduced precision. Reduced precision computing is extensively used in modern deep learning and results in lower inference execution time, a smaller memory footprint, and less memory bandwidth. In this paper we analyse the effects of low precision inference on a complex deep generative adversarial network model. The use case which we are addressing is calorimeter detector simulation of subatomic particle interactions in accelerator-based high energy physics. We employ the novel Intel low precision optimization tool (iLoT) for quantization and compare the results to the quantized model from TensorFlow Lite. In the performance benchmark we gain a speed-up of 1.73x on Intel hardware for the quantized iLoT model compared to the initial, non-quantized, model. With different physics-inspired self-developed metrics, we validate that the quantized iLoT model shows a lower loss of physical accuracy in comparison to the TensorFlow Lite model.
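For reference, the TensorFlow Lite post-training quantization path that the iLoT-quantized model is compared against looks roughly as follows; the tiny Keras model stands in for the calorimeter GAN, and the calibration data are random placeholders.

```python
# Hedged sketch of TensorFlow Lite post-training quantization with a
# representative dataset for calibration. The toy model is a placeholder.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16),
])

def representative_data():
    # Calibration samples used to pick quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 16).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_model)
```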
Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction ; Motivation: Many high-performance drug-target binding affinity (DTA) models have been proposed, but they are mostly black-box and thus lack human interpretability. Explainable AI (XAI) can make DTA models more trustworthy, and can also enable scientists to distill biological knowledge from the models. Counterfactual explanation is one popular approach to explaining the behaviour of a deep neural network, which works by systematically answering the question "How would the model output change if the inputs were changed in this way?". Most counterfactual explanation methods only operate on single input data. It remains an open problem how to extend counterfactual-based XAI methods to DTA models, which have two inputs, one for the drug and one for the target, that also happen to be discrete in nature. Methods: We propose a multi-agent reinforcement learning framework, Multi-Agent Counterfactual Drug-target binding Affinity (MACDA), to generate counterfactual explanations for the drug-protein complex. Our proposed framework provides human-interpretable counterfactual instances while optimizing both the input drug and target for counterfactual generation at the same time. Results: We benchmark the proposed MACDA framework using the Davis dataset and find that our framework produces more parsimonious explanations with no loss in explanation validity, as measured by encoding similarity and QED. We then present a case study involving ABL1 and Nilotinib to demonstrate how MACDA can explain the behaviour of a DTA model in the underlying substructure interaction between inputs in its prediction, revealing mechanisms that align with prior domain knowledge.