Impact of Pretraining Term Frequencies on Few-Shot Reasoning ; Pretrained Language Models (LMs) have demonstrated ability to perform numerical reasoning by extrapolating from a few examples in few-shot settings. However, the extent to which this extrapolation relies on robust reasoning is unclear. In this paper, we investigate how well these models reason with terms that are less frequent in the pretraining data. In particular, we examine the correlations between the model performance on test instances and the frequency of terms from those instances in the pretraining data. We measure the strength of this correlation for a number of GPT-based language models (pretrained on the Pile dataset) on various numerical deduction tasks (e.g., arithmetic and unit conversion). Our results consistently demonstrate that models are more accurate on instances whose terms are more prevalent, in some cases above 70% (absolute) more accurate on the top 10% frequent terms in comparison to the bottom 10%. Overall, although LMs exhibit strong performance at few-shot numerical reasoning tasks, our results raise the question of how much models actually generalize beyond pretraining data, and we encourage researchers to take the pretraining data into account when interpreting evaluation results.
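As an illustration of the kind of frequency-performance analysis described, here is a minimal sketch assuming hypothetical per-instance correctness flags and pretraining term counts (all values made up):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical inputs: for each test instance, whether the model answered
# correctly and how often the instance's terms occur in the pretraining data.
correct = np.array([1, 0, 1, 1, 0, 1, 0, 1])
term_freq = np.array([9e5, 2e2, 5e5, 7e4, 1e2, 3e6, 5e1, 8e5])

# Rank correlation between term frequency and correctness.
rho, p = spearmanr(term_freq, correct)
print(f"Spearman rho={rho:.3f} (p={p:.3f})")

# Accuracy gap between the most and least frequent deciles (top/bottom 10%).
lo, hi = np.quantile(term_freq, [0.1, 0.9])
gap = correct[term_freq >= hi].mean() - correct[term_freq <= lo].mean()
print(f"top-10% vs bottom-10% accuracy gap: {gap:.2f}")
```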
Quantifying Memorization Across Neural Language Models ; Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized training data verbatim. This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others). We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data. Memorization significantly grows as we increase (1) the capacity of a model, (2) the number of times an example has been duplicated, and (3) the number of tokens of context used to prompt the model. Surprisingly, we find the situation becomes more complicated when generalizing these results across model families. On the whole, we find that memorization in LMs is more prevalent than previously believed and will likely get worse as models continue to scale, at least without active mitigations.
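A log-linear relationship of this kind can be pictured with an ordinary least-squares fit of memorization rate against a log-scaled factor; a toy sketch with made-up numbers:

```python
import numpy as np

# Made-up example: fraction of examples emitted verbatim vs. duplicate count.
duplicates = np.array([1, 2, 4, 8, 16, 32, 64, 128])
frac_memorized = np.array([0.01, 0.02, 0.04, 0.07, 0.11, 0.16, 0.22, 0.29])

# A log-linear relationship means memorization is linear in log(duplicates).
slope, intercept = np.polyfit(np.log(duplicates), frac_memorized, deg=1)
print(f"memorization ~ {slope:.3f} * log(duplicates) + {intercept:.3f}")
```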
KINet: Unsupervised Forward Models for Robotic Pushing Manipulation ; Object-centric representation is an essential abstraction for forward prediction. Most existing forward models learn this representation through extensive supervision (e.g., object class and bounding box) although such ground-truth information is not readily accessible in reality. To address this, we introduce KINet (Keypoint Interaction Network), an end-to-end unsupervised framework to reason about object interactions based on a keypoint representation. Using visual observations, our model learns to associate objects with keypoint coordinates and discovers a graph representation of the system as a set of keypoint embeddings and their relations. It then learns an action-conditioned forward model using contrastive estimation to predict future keypoint states. By learning to perform physical reasoning in the keypoint space, our model automatically generalizes to scenarios with a different number of objects, novel backgrounds, and unseen object geometries. Experiments demonstrate the effectiveness of our model in accurately performing forward prediction and learning plannable object-centric representations for downstream robotic pushing manipulation tasks.
Coarse-grained Mori-Zwanzig dynamics in a time-nonlocal stationary-action framework ; Coarse-grained (CG) models are simplified representations of soft matter systems that are commonly employed to overcome size and time limitations in computational studies. Many approaches have been developed to construct and parametrise such effective models for a variety of systems of natural as well as artificial origin. However, while extremely accurate in reproducing the stationary and equilibrium observables obtained with more detailed representations, CG models generally fail to preserve the original time scales of the reference system, and hence its dynamical properties. In order to improve our understanding of the impact of coarse-graining on the model system dynamics, we here formulate the Mori-Zwanzig generalised Langevin equations (GLEs) of motion of a CG model in terms of a time-nonlocal stationary-action principle. The latter is employed in combination with a data-driven optimisation strategy to determine the parameters of the GLE. We apply this approach to a system of water molecules in standard thermodynamical conditions, showing that it can substantially improve the dynamical features of the corresponding CG model.
Absolute Zero-Shot Learning ; Considering the increasing concerns about data copyright and privacy issues, we present a novel Absolute Zero-Shot Learning (AZSL) paradigm, i.e., training a classifier with zero real data. The key innovation is to involve a teacher model as the data safeguard to guide the AZSL model training without data leaking. The AZSL model consists of a generator and student network, which can achieve data-free knowledge transfer while maintaining the performance of the teacher network. We investigate 'black-box' and 'white-box' scenarios in the AZSL task as different levels of model security. Besides, we also provide a discussion of the teacher model in both inductive and transductive settings. Despite embarrassingly simple implementations and data-missing disadvantages, our AZSL framework can retain state-of-the-art ZSL and GZSL performance under the 'white-box' scenario. Extensive qualitative and quantitative analysis also demonstrates promising results when deploying the model under the 'black-box' scenario.
Stability of the smectic phase in arrays of parallel quantum wires ; Using bosonization, we study a microscopic model of parallel quantum wires constructed from two-dimensional Dirac fermions in the presence of periodic topological domain walls. The model accounts for the lateral spread $\ell$ of the wavefunctions in the transverse direction to the wires. The gapless modes confined to each domain wall are shown to form Luttinger liquids, which realize a well-known smectic non-Fermi liquid fixed point when inter-wire Coulomb interactions are taken into account. Perturbative studies on phenomenological models have shown that the smectic fixed point is unstable towards a variety of phases such as superconductivity, stripe, smectic and Fermi liquid phases. Here, we show that the considered microscopic model leads to a phase diagram with only smectic metal and Fermi liquid phases. The smectic metal phase is stable in the ideal quantum wire limit $\ell \to 0$. For finite $\ell$, we find a critical Coulomb coupling $\alpha_c$ separating the strong coupling smectic metal from a weak coupling Fermi liquid phase. We conjecture that the absence of superconductivity should be a generic feature of similar microscopic models. Finally, we discuss the physical realization of this model with moiré heterostructures.
Attention Enables Zero Approximation Error ; Deep learning models have been widely applied in various aspects of daily life. Many variant models based on deep learning structures have achieved even better performances. Attention-based architectures have become almost ubiquitous in deep learning structures. In particular, the transformer model has now defeated the convolutional neural network in image classification tasks and become the most widely used tool. However, the theoretical properties of attention-based models are seldom considered. In this work, we show that with suitable adaptations, the single-head self-attention transformer with a fixed number of transformer encoder blocks and free parameters is able to generate any desired polynomial of the input with no error. The number of transformer encoder blocks is the same as the degree of the target polynomial. Even more exciting, we find that these transformer encoder blocks in this model do not need to be trained. As a direct consequence, we show that the single-head self-attention transformer with increasing numbers of free parameters is universal. These surprising theoretical results clearly explain the outstanding performance of the transformer model and may shed light on future modifications in real applications. We also provide some experiments to verify our theoretical results.
Visual Speech Recognition for Multiple Languages in the Wild ; Visual speech recognition (VSR) aims to recognize the content of speech based on lip movements, without relying on the audio stream. Advances in deep learning and the availability of large audio-visual datasets have led to the development of much more accurate and robust VSR models than ever before. However, these advances are usually due to the larger training sets rather than the model design. Here we demonstrate that designing better models is equally as important as using larger training sets. We propose the addition of prediction-based auxiliary tasks to a VSR model, and highlight the importance of hyperparameter optimization and appropriate data augmentations. We show that such a model works for different languages and outperforms all previous methods trained on publicly available datasets by a large margin. It even outperforms models that were trained on non-publicly available datasets containing up to 21 times more data. We show, furthermore, that using additional training data, even in other languages or with automatically generated transcriptions, results in further improvement.
Formalizing Oracle Trust Models for blockchain-based business applications: An example from the supply chain sector ; Blockchain technology truly opened the gate to a wave of unparalleled innovations; however, despite the rapidly growing hype, its integration into business, apart from a few applications, seems to be coming at a slower rate. One reason for that delay may be the need in real-world applications for a so-called trust model. Trust models are rarely mentioned in blockchain application proposals despite their importance, which creates skepticism about their successful development. To promote trust model implementation and help practitioners in its redaction, this article provides an outline of what a trust model is, why it is essential, and an example of how it is elaborated. The discussed example comes from a case study of a dairy company that implemented blockchain for the traceability of its products. Despite being tailored to a traceability project, the redaction and elements of the trust model, with few adjustments, could easily be readapted for other applications.
Nonlinear Model Predictive Control and System Identification for a Dual-hormone Artificial Pancreas ; In this work, we present a switching nonlinear model predictive control (NMPC) algorithm for a dual-hormone artificial pancreas (AP), and we use maximum likelihood estimation (MLE) to identify model parameters. A dual-hormone AP consists of a continuous glucose monitor (CGM), a control algorithm, an insulin pump, and a glucagon pump. The AP is designed with a heuristic to switch between insulin and glucagon as well as state-dependent constraints. We extend an existing glucoregulatory model with glucagon and exercise for simulation, and we use a simpler model for control. We test the AP (NMPC and MLE) using in silico numerical simulations on 50 virtual people with type 1 diabetes. The system is identified for each virtual person based on data generated with the simulation model. The simulations show a mean of 89.3% time in range (3.9-10 mmol/L) and no hypoglycemic events.
An Empirical Study on Explanations in Out-of-Domain Settings ; Recent work in Natural Language Processing has focused on developing approaches that extract faithful explanations, either via identifying the most important tokens in the input (i.e., post-hoc explanations) or by designing inherently faithful models that first select the most important tokens and then use them to predict the correct label (i.e., select-then-predict models). Currently, these approaches are largely evaluated on in-domain settings. Yet, little is known about how post-hoc explanations and inherently faithful models perform in out-of-domain settings. In this paper, we conduct an extensive empirical study that examines: (1) the out-of-domain faithfulness of post-hoc explanations, generated by five feature attribution methods; and (2) the out-of-domain performance of two inherently faithful models over six datasets. Contrary to our expectations, results show that in many cases out-of-domain post-hoc explanation faithfulness measured by sufficiency and comprehensiveness is higher compared to in-domain. We find this misleading and suggest using a random baseline as a yardstick for evaluating post-hoc explanation faithfulness. Our findings also show that select-then-predict models demonstrate comparable predictive performance in out-of-domain settings to full-text trained models.
Flat FLRW Universe in logarithmic symmetric teleparallel gravity with observational constraints ; In this paper, we investigate the homogeneous and isotropic flat FLRW Universe in the logarithmic form of $f(Q)$ gravity, where $Q$ is the nonmetricity scalar; specifically, $f(Q) = \alpha + \beta \log(Q)$, where $\alpha$ and $\beta$ are free model parameters. In this study, we consider a parametrization of the Hubble parameter as $H(z) = \eta\left[(z+1)^{\gamma} + 1\right]$, where $\gamma$ and $\eta$ are free model parameters which are constrained by an $R^2$-test from 57 points of the Hubble datasets in the redshift range $0.07 \leq z \leq 2.36$. Further, we investigate the physical properties of the model. We analyze the energy conditions to check the compatibility of the model. We find that the SEC is violated for the logarithmic form of $f(Q)$ gravity, reflecting the fact that the Universe is in an accelerating phase. Finally, we discuss some important cosmological parameters in this context, such as the jerk and statefinder parameters, to compare our model with dark energy models.
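A sketch of how such a parametrization can be constrained against Hubble data by weighted least squares, assuming the form $H(z) = \eta[(z+1)^{\gamma} + 1]$ given above; the five data points are placeholders, not the 57-point compilation used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def hubble(z, eta, gamma):
    """H(z) = eta * ((z + 1)**gamma + 1), the parametrization assumed above."""
    return eta * ((z + 1.0) ** gamma + 1.0)

# Placeholder measurements: redshift, H(z) in km/s/Mpc, and 1-sigma errors.
z = np.array([0.07, 0.4, 0.9, 1.5, 2.36])
H_obs = np.array([69.0, 82.0, 105.0, 140.0, 195.0])
H_err = np.array([19.6, 8.0, 12.0, 14.0, 8.0])

popt, pcov = curve_fit(hubble, z, H_obs, sigma=H_err, p0=[35.0, 1.0])
print("best-fit eta, gamma =", popt)
```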
DreamingV2: Reinforcement Learning with Discrete World Models without Reconstruction ; The present paper proposes a novel reinforcement learning method with world models, DreamingV2, a collaborative extension of DreamerV2 and Dreaming. DreamerV2 is a cutting-edge model-based reinforcement learning method from pixels that uses discrete world models to represent latent states with categorical variables. Dreaming is also a form of reinforcement learning from pixels that attempts to avoid the autoencoding process in general world model training by involving a reconstruction-free contrastive learning objective. The proposed DreamingV2 is a novel approach that adopts both the discrete representation of DreamerV2 and the reconstruction-free objective of Dreaming. Compared to DreamerV2 and other recent model-based methods without reconstruction, DreamingV2 achieves the best scores on five simulated challenging 3D robot arm tasks. We believe that DreamingV2 will be a reliable solution for robot learning since its discrete representation is suitable for describing discontinuous environments, and the reconstruction-free fashion well manages complex vision observations.
Numerically Probing the Universal Operator Growth Hypothesis ; Recently, a hypothesis on the complexity growth of unitarily evolving operators was presented. This hypothesis states that in generic, non-integrable many-body systems the so-called Lanczos coefficients associated with an autocorrelation function grow asymptotically linearly, with a logarithmic correction in one-dimensional systems. In contrast, the growth is expected to be slower in integrable or free models. In the paper at hand, we numerically test this hypothesis for a variety of exemplary systems, including 1d and 2d Ising models as well as 1d Heisenberg models. While we find the hypothesis to be practically fulfilled for all considered Ising models, the onset of the hypothesized universal behavior could not be observed in the attainable numerical data for the Heisenberg model. The proposed linear bound on operator growth eventually stems from geometric arguments involving the locality of the Hamiltonian as well as the lattice configuration. We investigate such a geometric bound and find that it is not sharply achieved for any considered model.
Gamma-Shadowed Two-Ray with Diffuse Power Composite Fading Model ; In this paper, a novel gamma-shadowed two-ray with diffuse power (GS-TWDP) composite fading model is proposed. The model is intended for modeling propagation in the emerging wireless networks working at millimeter wave (mmWave) frequencies, and is obtained as a combination of the TWDP distribution for description of multipath effects and the gamma distribution for modeling variations due to shadowing. After derivation of the exact probability density function (PDF), cumulative distribution function (CDF) and moment generating function (MGF) expressions are obtained. The proposed model is verified by comparing the analytically obtained results with those measured at 28 GHz and reported in the literature. Two upper-bound average symbol error probability (ASEP) expressions are then derived for M-ary rectangular quadrature amplitude modulation (RQAM) by employing Chernoff and Chiani approximations of the Gaussian Q-function, and are used to investigate the relationship between GS-TWDP parameters and system performance. All the results are verified by Monte Carlo simulation.
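One plausible way to draw samples from such a composite model, assuming the gamma shadowing multiplies the mean power of a TWDP envelope (the paper's exact construction may differ), is the following Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# TWDP multipath: two specular rays with random phases plus diffuse scatter.
v1, v2, sigma2 = 1.0, 0.8, 0.1           # specular amplitudes, diffuse power
phi1 = rng.uniform(0, 2 * np.pi, N)
phi2 = rng.uniform(0, 2 * np.pi, N)
diffuse = (rng.normal(0, np.sqrt(sigma2 / 2), N)
           + 1j * rng.normal(0, np.sqrt(sigma2 / 2), N))
signal = v1 * np.exp(1j * phi1) + v2 * np.exp(1j * phi2) + diffuse

# Gamma shadowing of the mean power (shape m, unit mean), applied multiplicatively.
m = 2.5
shadow = rng.gamma(shape=m, scale=1.0 / m, size=N)
envelope = np.sqrt(shadow) * np.abs(signal)

print("empirical mean power:", np.mean(envelope ** 2))
```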
Diversifying Agent's Behaviors in Interactive Decision Models ; Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents. To optimise its own decisions, a subject agent needs to model how other agents act simultaneously in an uncertain environment. However, modelling insufficiency occurs when the agents are competitive and the subject agent cannot get full knowledge about other agents. Even when the agents are collaborative, they may not share their true behaviors due to privacy concerns. In this article, we investigate diversifying the behaviors of other agents in the subject agent's decision model prior to their interactions. Starting with prior knowledge about other agents' behaviors, we use a linear reduction technique to extract representative behavioral features from the known behaviors. We subsequently generate their new behaviors by expanding the features and propose two diversity measurements to select the top-K behaviors. We demonstrate the performance of the new techniques in two well-studied problem domains. This research contributes to intelligent systems dealing with unknown unknowns in an open artificial intelligence world.
Automatic selection by penalized asymmetric $L_q$-norm in a high-dimensional model with grouped variables ; The paper focuses on the automatic selection of grouped explanatory variables in a high-dimensional model, when the model errors are asymmetric. After introducing the model and notations, we define the adaptive group LASSO expectile estimator, for which we prove the oracle properties: sparsity and asymptotic normality. Afterwards, the results are generalized by considering the asymmetric $L_q$-norm loss function. The theoretical results are obtained in several cases with respect to the number of variable groups: this number can be fixed or dependent on the sample size $n$, with the possibility that it is of the same order as $n$. Note that these new estimators allow us to consider weaker assumptions on the data and on the model errors than the usual ones. A simulation study demonstrates the competitive performance of the proposed penalized expectile regression, especially when the sample size is close to the number of explanatory variables and the model errors are asymmetric. An application to air pollution data is considered.
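For intuition, a small sketch of the asymmetric $L_q$ check loss underlying such estimators ($q = 2$ recovers the expectile loss); the helper name and values are hypothetical:

```python
import numpy as np

def asymmetric_lq_loss(residuals, tau=0.7, q=2.0):
    """rho_tau(u) = |tau - 1(u < 0)| * |u|**q, the asymmetric L_q check loss.

    tau in (0, 1) sets the asymmetry (tau = 0.5, q = 2 is ordinary least
    squares up to a constant factor); q >= 1 sets the norm.
    """
    u = np.asarray(residuals, dtype=float)
    weight = np.abs(tau - (u < 0).astype(float))
    return np.sum(weight * np.abs(u) ** q)

# With tau > 0.5, positive residuals are penalized more than negative ones.
print(asymmetric_lq_loss([1.0, -1.0], tau=0.7))  # 0.7 + 0.3 = 1.0
```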
Towards On-Device AI and Blockchain for 6G-enabled Agricultural Supply-chain Management ; 6G envisions artificial intelligence (AI) powered solutions for enhancing the quality-of-service (QoS) in the network and for ensuring optimal utilization of resources. In this work, we propose an architecture based on the combination of unmanned aerial vehicles (UAVs), AI and blockchain for agricultural supply-chain management with the purpose of ensuring traceability, transparency, and the tracking of inventories and contracts. We propose a solution to facilitate on-device AI by generating a roadmap of models with various resource-accuracy trade-offs. A fully convolutional neural network (FCN) model is used for biomass estimation through images captured by the UAV. Instead of a single compressed FCN model for deployment on the UAV, we motivate the idea of iterative pruning to provide multiple task-specific models with various complexities and accuracies. To alleviate the impact of flight failure in a 6G-enabled dynamic UAV network, the proposed model selection strategy will assist UAVs to update the model based on the runtime resource requirements.
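A minimal sketch of the iterative-pruning idea using PyTorch's built-in magnitude pruning; the model, pruning fraction, and fine-tuning step below are placeholders, not the paper's setup:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 8, 3))

checkpoints = []
for round_idx in range(3):                   # three resource/accuracy variants
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # Remove 30% of the remaining smallest-magnitude weights.
            prune.l1_unstructured(module, name="weight", amount=0.3)
    # ... fine-tune on task data here to recover accuracy ...
    checkpoints.append({k: v.clone() for k, v in model.state_dict().items()})

print(f"stored {len(checkpoints)} pruned model variants")
```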
Training Protocol Matters: Towards Accurate Scene Text Recognition via Training Protocol Searching ; The development of scene text recognition (STR) in the era of deep learning has been mainly focused on novel architectures of STR models. However, the training protocol (i.e., the settings of the hyperparameters involved in the training of STR models), which plays an equally important role in successfully training a good STR model, is under-explored for scene text recognition. In this work, we attempt to improve the accuracy of existing STR models by searching for an optimal training protocol. Specifically, we develop a training protocol search algorithm, based on a newly designed search space and an efficient search algorithm using evolutionary optimization and proxy tasks. Experimental results show that our searched training protocol can improve the recognition accuracy of mainstream STR models by 2.7%-3.9%. In particular, with the searched training protocol, TRBA-Net achieves 2.1% higher accuracy than the state-of-the-art STR model (i.e., EFIFSTR), while the inference speed is 2.3x and 3.7x faster on CPU and GPU respectively. Extensive experiments are conducted to demonstrate the effectiveness of the proposed method and the generalization ability of the training protocol found by our search method. Code is available at https://github.com/VDIGPKU/STRTPSearch.
Better Quality Estimation for Low Resource Corpus Mining ; Quality Estimation (QE) models have the potential to change how we evaluate and maybe even train machine translation models. However, these models still lack the robustness to achieve general adoption. We show that state-of-the-art QE models, when tested in a Parallel Corpus Mining (PCM) setting, perform unexpectedly badly due to a lack of robustness to out-of-domain examples. We propose a combination of multitask training, data augmentation and contrastive learning to achieve better and more robust QE performance. We show that our method improves QE performance significantly in the MLQE challenge and improves the robustness of QE models when tested in the Parallel Corpus Mining setup. We increase the accuracy in PCM by more than 0.80, making it on par with state-of-the-art PCM methods that use millions of sentence pairs to train their models. In comparison, we use a thousand times less data, 7K parallel sentences in total, and propose a novel low resource PCM method.
Direct Gibbs posterior inference on risk minimizers: construction, concentration, and calibration ; Real-world problems, often couched as machine learning applications, involve quantities of interest that have real-world meaning, independent of any statistical model. To avoid potential model misspecification bias or over-complicating the problem formulation, a direct, model-free approach is desired. The traditional Bayesian framework relies on a model for the data-generating process so, apparently, the desired direct, model-free, posterior-probabilistic inference is out of reach. Fortunately, likelihood functions are not the only means of linking data and quantities of interest. Loss functions provide an alternative link, where the quantity of interest is defined, or at least could be defined, as a minimizer of the corresponding risk, or expected loss. In this case, one can obtain what is commonly referred to as a Gibbs posterior distribution by using the empirical risk function directly. This manuscript explores the Gibbs posterior construction, its asymptotic concentration properties, and the frequentist calibration of its credible regions. By being free from the constraints of model specification, Gibbs posteriors create new opportunities for probabilistic inference in modern statistical learning problems.
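In its commonly used form, the Gibbs posterior construction replaces the likelihood with the exponentiated negative empirical risk; schematically, for a loss $\ell$, prior $\Pi$, and learning-rate parameter $\omega > 0$ (a standard formulation consistent with the description above):

```latex
% Empirical risk of theta on data Z_1, ..., Z_n:
R_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(\theta; Z_i)

% Gibbs posterior: exponentiated negative (scaled) empirical risk times prior.
\Pi_n(d\theta) \propto \exp\{-\omega\, n\, R_n(\theta)\}\, \Pi(d\theta)
```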
Soft Smoothness for Audio Inpainting Using a Latent Matrix Model in Delay-embedded Space ; Here, we propose a new reconstruction method for smooth time-series signals. A key concept of this study is to consider the model not in signal space, but in delay-embedded space. In other words, we indirectly represent a time-series signal as an output of inverse delay-embedding of a matrix, and the matrix is constrained. Based on the model under inverse delay-embedding, we propose to constrain the matrix to be rank-1 with smooth factor vectors. The proposed model is closely related to the convolutional model and quadratic variation (QV) regularization. In particular, the proposed method can be characterized as a generalization of QV regularization. In addition, we show that the proposed method provides softer smoothness than QV regularization. Experiments on audio inpainting and declipping are conducted to show its advantages in comparison with several existing interpolation methods and sparse modeling.
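A small numpy sketch of delay embedding and its inverse (anti-diagonal averaging), the two operations the model is built around; the rank-1 smooth factorization itself is omitted:

```python
import numpy as np

def delay_embed(x, tau):
    """Stack length-tau sliding windows of x into a (tau, len(x)-tau+1) matrix."""
    n = len(x) - tau + 1
    return np.stack([x[i:i + n] for i in range(tau)])

def inverse_delay_embed(H):
    """Recover a time series by averaging the anti-diagonals of H."""
    tau, n = H.shape
    out = np.zeros(tau + n - 1)
    counts = np.zeros(tau + n - 1)
    for i in range(tau):
        out[i:i + n] += H[i]
        counts[i:i + n] += 1
    return out / counts

x = np.sin(np.linspace(0, 6, 50))
assert np.allclose(inverse_delay_embed(delay_embed(x, 8)), x)
```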
Equitable Ability Estimation in Neurodivergent Student Populations with Zero-Inflated Learner Models ; At present, the educational data mining community lacks many tools needed for ensuring equitable ability estimation for neurodivergent (ND) learners. On one hand, most learner models are susceptible to underestimating ND ability since confounding contexts cannot be held accountable (e.g., consider dyslexia and text-heavy assessments), and on the other, few if any existing datasets are suited for appraising model and data bias in ND contexts. In this paper we attempt to model the relationships between context (delivery and response types) and performance of ND students with zero-inflated learner models. This approach facilitates the simulation of several expected ND behavioural traits, provides equitable ability estimates across all student groups from generated datasets, increases interpretability confidence, and can significantly increase the quality of learning opportunities for ND students. Our approach consistently outperforms baselines in our experiments and can also be applied to many other learner modelling frameworks.
A Class of Two-Timescale Stochastic EM Algorithms for Nonconvex Latent Variable Models ; The Expectation-Maximization (EM) algorithm is a popular choice for learning latent variable models. Variants of the EM algorithm have been introduced, using incremental updates to scale to large datasets, and using Monte Carlo (MC) approximations to bypass the intractable conditional expectation of the latent data for most nonconvex models. In this paper, we propose a general class of methods called Two-Timescale EM Methods, based on a two-stage approach of stochastic updates, to tackle an essential nonconvex optimization task for latent variable models. We motivate the choice of a double dynamic by invoking the variance reduction virtue of each stage of the method on both sources of noise: the index sampling for the incremental update and the MC approximation. We establish finite-time and global convergence bounds for nonconvex objective functions. Numerical applications on various models such as deformable templates for image analysis or nonlinear models for pharmacokinetics are also presented to illustrate our findings.
Bridging Pre-trained Language Models and Hand-crafted Features for Unsupervised POS Tagging ; In recent years, large-scale pre-trained language models (PLMs) have made extraordinary progress in most NLP tasks. But, in the unsupervised POS tagging task, works utilizing PLMs are few and fail to achieve state-of-the-art (SOTA) performance. The recent SOTA performance is yielded by a Gaussian HMM variant proposed by He et al. (2018). However, as a generative model, the HMM makes very strong independence assumptions, making it very challenging to incorporate contextualized word representations from PLMs. In this work, we for the first time propose a neural conditional random field autoencoder (CRF-AE) model for unsupervised POS tagging. The discriminative encoder of CRF-AE can straightforwardly incorporate ELMo word representations. Moreover, inspired by feature-rich HMM, we reintroduce hand-crafted features into the decoder of CRF-AE. Finally, experiments clearly show that our model outperforms previous state-of-the-art models by a large margin on Penn Treebank and the multilingual Universal Dependencies treebank v2.0.
Meshfree One-Fluid Modelling of Liquid-Vapor Phase Transitions ; We introduce a meshfree collocation framework to model the phase change from liquid to vapor at or above the boiling point. While typical vaporization or boiling simulations focus on the vaporization from the bulk of the fluid, here we include the possibility of vaporization from the free surface, when a moving fluid comes into contact with a superheated surface. We present a continuum, one-fluid approach in which the liquid and vapor phases are modeled with the same constitutive equations, with different material properties. The novelty here is a monolithic approach without explicit modeling of the interface between the phases, neither in a sharp nor diffuse sense. Furthermore, no interface boundary conditions or source terms are needed between the liquid and vapor phases. Instead, the phase transition is modeled only using material properties varying with temperature. Towards this end, we also present an enrichment of strong form meshfree generalized finite difference methods (GFDM) to accurately capture derivatives in the presence of jumps in density, viscosity, and other physical properties. The numerical results show good agreement with experimental results, and highlight the ability of our proposed framework to model phase changes with large jumps.
Integrating Vectorized Lexical Constraints for Neural Machine Translation ; Lexically constrained neural machine translation (NMT), which controls the generation of NMT models with pre-specified constraints, is important in many practical scenarios. Due to the representation gap between discrete constraints and continuous vectors in NMT models, most existing works choose to construct synthetic data or modify the decoding algorithm to impose lexical constraints, treating the NMT model as a black box. In this work, we propose to open this black box by directly integrating the constraints into NMT models. Specifically, we vectorize source and target constraints into continuous keys and values, which can be utilized by the attention modules of NMT models. The proposed integration method is based on the assumption that the correspondence between keys and values in attention modules is naturally suitable for modeling constraint pairs. Experimental results show that our method consistently outperforms several representative baselines on four language pairs, demonstrating the superiority of integrating vectorized lexical constraints.
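The core idea of attending over constraint keys and values can be sketched as below; this is a simplification, and the paper's actual integration into NMT attention modules may differ:

```python
import torch
import torch.nn.functional as F

def attention_with_constraints(q, k, v, c_k, c_v):
    """Scaled dot-product attention whose memory is extended with constraints.

    q: (B, Tq, d) queries; k, v: (B, Tk, d) ordinary keys/values;
    c_k, c_v: (B, Tc, d) continuous vectors encoding lexical constraint pairs.
    """
    k_all = torch.cat([k, c_k], dim=1)   # attend over tokens and constraints
    v_all = torch.cat([v, c_v], dim=1)
    scores = q @ k_all.transpose(1, 2) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v_all

B, Tq, Tk, Tc, d = 2, 5, 7, 3, 16
out = attention_with_constraints(torch.randn(B, Tq, d), torch.randn(B, Tk, d),
                                 torch.randn(B, Tk, d), torch.randn(B, Tc, d),
                                 torch.randn(B, Tc, d))
print(out.shape)  # torch.Size([2, 5, 16])
```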
Pseudo Label Is Better Than Human Label ; State-of-the-art automatic speech recognition (ASR) systems are trained with tens of thousands of hours of labeled speech data. Human transcription is expensive and time consuming. Factors such as the quality and consistency of the transcription can greatly affect the performance of the ASR models trained with these data. In this paper, we show that we can train a strong teacher model to produce high quality pseudo labels by utilizing recent self-supervised and semi-supervised learning techniques. Specifically, we use JUST (Joint Unsupervised/Supervised Training) and iterative noisy student-teacher training to train a 600 million parameter bidirectional teacher model. This model achieved 4.0% word error rate (WER) on a voice search task, 11.1% relatively better than a baseline. We further show that by using this strong teacher model to generate high-quality pseudo labels for training, we can achieve 13.6% relative WER reduction (5.9% to 5.1%) for a streaming model compared to using human labels.
Differential Assessment of Black-Box AI Agents ; Much of the research on learning symbolic models of AI agents focuses on agents with stationary models. This assumption fails to hold in settings where the agent's capabilities may change as a result of learning, adaptation, or other post-deployment modifications. Efficient assessment of agents in such settings is critical for learning the true capabilities of an AI system and for ensuring its safe usage. In this work, we propose a novel approach to differentially assess black-box AI agents that have drifted from their previously known models. As a starting point, we consider the fully observable and deterministic setting. We leverage sparse observations of the drifted agent's current behavior and knowledge of its initial model to generate an active querying policy that selectively queries the agent and computes an updated model of its functionality. Empirical evaluation shows that our approach is much more efficient than re-learning the agent model from scratch. We also show that the cost of differential assessment using our method is proportional to the amount of drift in the agent's functionality.
Knowledge Distillation with the Reused Teacher Classifier ; Knowledge distillation aims to compress a powerful yet cumbersome teacher model into a lightweight student model without much sacrifice of performance. For this purpose, various approaches have been proposed over the past few years, generally with elaborately designed knowledge representations, which in turn increase the difficulty of model development and interpretation. In contrast, we empirically show that a simple knowledge distillation technique is enough to significantly narrow down the teacher-student performance gap. We directly reuse the discriminative classifier from the pre-trained teacher model for student inference and train a student encoder through feature alignment with a single $\ell_2$ loss. In this way, the student model is able to achieve exactly the same performance as the teacher model provided that their extracted features are perfectly aligned. An additional projector is developed to help the student encoder match with the teacher classifier, which renders our technique applicable to various teacher and student architectures. Extensive experiments demonstrate that our technique achieves state-of-the-art results at the modest cost of compression ratio due to the added projector.
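A compact PyTorch sketch of the described recipe, with hypothetical encoder shapes: freeze the teacher, reuse its classifier, and train the student encoder plus a projector with a single $\ell_2$ feature-alignment loss:

```python
import torch
import torch.nn as nn

d_s, d_t, n_cls = 256, 512, 10                     # hypothetical dimensions
student_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, d_s))
teacher_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, d_t))
teacher_cls = nn.Linear(d_t, n_cls)                # reused and kept frozen
projector = nn.Linear(d_s, d_t)                    # matches student to teacher

for p in list(teacher_enc.parameters()) + list(teacher_cls.parameters()):
    p.requires_grad_(False)

opt = torch.optim.SGD(
    list(student_enc.parameters()) + list(projector.parameters()), lr=0.1)
x = torch.randn(8, 3, 32, 32)                      # a dummy batch

# Single L2 feature-alignment loss; no label or logit terms needed.
loss = ((projector(student_enc(x)) - teacher_enc(x)) ** 2).mean()
loss.backward()
opt.step()

# Inference: student encoder + projector feeding the reused teacher classifier.
logits = teacher_cls(projector(student_enc(x)))
```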
A measure model for the spread of viral infections with mutations ; Genetic variations in the COVID-19 virus are one of the main causes of the COVID-19 pandemic outbreaks in 2020 and 2021. In this article, we aim to introduce a new type of model: a system coupling ordinary differential equations (ODEs) and a measure differential equation (MDE), stemming from the classical SIR model, for the variants' distribution. Specifically, we model the evolution of the susceptible (S) and removed (R) populations by ODEs and the infected (I) population by an MDE comprised of a probability vector field (PVF) and a source term. In addition, the ODEs for S and R contain terms that are related to the measure I. We establish analytically the well-posedness of the coupled ODE-MDE system by using a generalized Wasserstein distance. We give two examples to show that the proposed ODE-MDE model coincides with the classical SIR model in the case of constant or time-dependent parameters.
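For reference, the classical SIR system the construction starts from, with transmission rate $\beta$ and removal rate $\gamma$; in the proposed model, the scalar $I$ becomes a measure evolved by the MDE:

```latex
\frac{dS}{dt} = -\beta S I, \qquad
\frac{dI}{dt} = \beta S I - \gamma I, \qquad
\frac{dR}{dt} = \gamma I
```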
LinkBERT: Pretraining Language Models with Document Links ; Language model (LM) pretraining can learn various knowledge from text corpora, helping downstream tasks. However, existing methods such as BERT model a single document, and do not capture dependencies or knowledge that span across documents. In this work, we propose LinkBERT, an LM pretraining method that leverages links between documents, e.g., hyperlinks. Given a text corpus, we view it as a graph of documents and create LM inputs by placing linked documents in the same context. We then pretrain the LM with two joint self-supervised objectives: masked language modeling and our new proposal, document relation prediction. We show that LinkBERT outperforms BERT on various downstream tasks across two domains: the general domain (pretrained on Wikipedia with hyperlinks) and the biomedical domain (pretrained on PubMed with citation links). LinkBERT is especially effective for multi-hop reasoning and few-shot QA (+5% absolute improvement on HotpotQA and TriviaQA), and our biomedical LinkBERT sets new states of the art on various BioNLP tasks (+7% on BioASQ and USMLE). We release our pretrained models, LinkBERT and BioLinkBERT, as well as code and data at https://github.com/michiyasunaga/LinkBERT.
WavPrompt: Towards Few-Shot Spoken Language Understanding with Frozen Language Models ; Large-scale auto-regressive language models pretrained on massive text have demonstrated their impressive ability to perform new natural language tasks with only a few text examples, without the need for fine-tuning. Recent studies further show that such a few-shot learning ability can be extended to the text-image setting by training an encoder to encode the images into embeddings functioning like the text embeddings of the language model. Interested in exploring the possibility of transferring the few-shot learning ability to the audio-text setting, we propose a novel speech understanding framework, WavPrompt, where we fine-tune a wav2vec model to generate a sequence of audio embeddings understood by the language model. We show that WavPrompt is a few-shot learner that can perform speech understanding tasks better than a naive text baseline. We conduct detailed ablation studies on different components and hyperparameters to empirically identify the best model configuration. In addition, we conduct a non-speech understanding experiment to show WavPrompt can extract more information than just the transcriptions. Code is available at https://github.com/Hertin/WavPrompt.
Calibrating constitutive models with full-field data via physics informed neural networks ; The calibration of solid constitutive models with full-field experimental data is a long-standing challenge, especially in materials which undergo large deformation. In this paper, we propose a physics-informed deep-learning framework for the discovery of constitutive model parameterizations given full-field displacement data and global force-displacement data. Contrary to the majority of recent literature in this field, we work with the weak form of the governing equations rather than the strong form to impose physical constraints upon the neural network predictions. The approach presented in this paper is computationally efficient, suitable for irregular geometric domains, and readily ingests displacement data without the need for interpolation onto a computational grid. A selection of canonical hyperelastic material models suitable for different material classes is considered, including the Neo-Hookean, Gent, and Blatz-Ko constitutive models as exemplars for general hyperelastic behavior, polymer behavior with lock-up, and compressible foam behavior, respectively. We demonstrate that physics-informed machine learning is an enabling technology and may shift the paradigm of how full-field experimental data is utilized to calibrate constitutive models under finite deformations.
SimPO: Simultaneous Prediction and Optimization ; Many machine learning (ML) models are integrated within the context of a larger system as part of a key component for decision making processes. Concretely, predictive models are often employed in estimating the parameters for the input values that are utilized for optimization models as isolated processes. Traditionally, the predictive models are built first, then the model outputs are used to generate decision values separately. However, it is often the case that prediction values that are trained independently of the optimization process produce suboptimal solutions. In this paper, we propose a formulation for the Simultaneous Prediction and Optimization (SimPO) framework. This framework introduces the use of a joint weighted loss of a decision-driven predictive ML model and an optimization objective function, which is optimized end-to-end directly through gradient-based methods.
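The essence of such a joint weighted loss can be sketched in a few lines, assuming a differentiable surrogate for the downstream optimization objective (all names here are hypothetical):

```python
import torch

def joint_weighted_loss(y_pred, y_true, decision_cost, lam=0.5):
    """Weighted sum of a prediction loss and a decision-quality term.

    decision_cost: differentiable function mapping predictions to the
    objective value of the downstream optimization (a surrogate in practice).
    """
    pred_loss = torch.mean((y_pred - y_true) ** 2)
    opt_loss = decision_cost(y_pred)
    return lam * pred_loss + (1.0 - lam) * opt_loss

# Toy example: decisions are cheap when predicted demand covers true demand.
y_true = torch.tensor([3.0, 1.0])
y_pred = torch.tensor([2.5, 1.5], requires_grad=True)
loss = joint_weighted_loss(y_pred, y_true,
                           lambda y: torch.sum(torch.relu(y_true - y)))
loss.backward()   # gradients flow end-to-end through both loss terms
```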
Force-directed algorithms for schematic drawings and placement: A survey ; Force-directed algorithms have been developed over the last 50 years and used in many application fields, including information visualisation, biological network visualisation, sensor networks, routing algorithms, scheduling, and graph drawing. Our survey provides a comprehensive summary of developments and a full roadmap for state-of-the-art force-directed algorithms in schematic drawings and placement. We classify the models of force-directed algorithms into classical and hybrid. The classical force-directed algorithms are further classified as follows: (a) accumulated force models, (b) energy function minimisation models and (c) combinatorial optimisation models. The hybrid force-directed algorithms are classified as follows: (a) parallel and hardware accelerated models, (b) multilevel force-directed models and (c) multidimensional scaling force-directed algorithms. Five categories of application domains in which force-directed algorithms have been adopted for schematic drawings and placement are also summarised: (a) aesthetic drawings for general networks, (b) component placement and scheduling in high-level synthesis of very-large-scale integration circuit design, (c) information visualisation, (d) biological network visualisation and (e) node placement and localisation for sensor networks.
Considerations for Multilingual Wikipedia Research ; English Wikipedia has long been an important data source for much research and natural language machine learning modeling. The growth of non-English language editions of Wikipedia, greater computational resources, and calls for equity in the performance of language and multimodal models have led to the inclusion of many more language editions of Wikipedia in datasets and models. Building better multilingual and multimodal models requires more than just access to expanded datasets; it also requires a better understanding of what is in the data and how this content was generated. This paper seeks to provide some background to help researchers think about what differences might arise between different language editions of Wikipedia and how that might affect their models. It details three major ways in which content differences between language editions arise (local context, community and governance, and technology) and gives recommendations for good practices when using multilingual and multimodal data for research and modeling.
Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients ; Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data. However, potential clients might even be reluctant to label their own data, which could limit the applicability of FL in practice. In this paper, we show the possibility of unsupervised FL whose model is still a classifier for predicting class labels, if the class-prior probabilities are shifted while the class-conditional distributions are shared among the unlabeled data owned by the clients. We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model. FedUL is a very general solution to unsupervised FL: it is compatible with many supervised FL methods, and the recovery of the wanted model can be theoretically guaranteed as if the data had been labeled. Experiments on benchmark and real-world datasets demonstrate the effectiveness of FedUL. Code is available at https://github.com/lunanbit/FedUL.
Parameter-Efficient Abstractive Question Answering over Tables or Text ; A long-term ambition of information seeking QA systems is to reason over multi-modal contexts and generate natural answers to user queries. Today, memory intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality like unstructured text or structured tables. To avoid training such memory-hungry models while utilizing a uniform architecture for each modality, parameter-efficient adapters add and train small task-specific bottleneck layers between transformer layers. In this work, we study parameter-efficient abstractive QA in encoder-decoder models over structured tabular data and unstructured textual data, using only 1.5% additional parameters for each modality. We also ablate over adapter layers in both encoder and decoder modules to study the efficiency-performance trade-off and demonstrate that reducing additional trainable parameters down to 0.7%-1.0% leads to comparable results. Our models outperform current state-of-the-art models on tabular QA datasets such as Tablesum and FeTaQA, and achieve comparable performance on a textual QA dataset such as NarrativeQA using significantly fewer trainable parameters than fine-tuning.
A General Class of Trimodal Distributions: Properties and Inference ; Modality is an important topic in modelling, and using parametric models is an efficient approach when a real data set shows trimodality. In this paper we propose a new class of trimodal probability distributions, that is, probability distributions that have up to three modes. Trimodality itself is achieved by applying a proper transformation to the density function of certain continuous probability distributions. At first, we obtain preliminary results for an arbitrary density function $g(x)$ and, next, we focus on the Gaussian case, studying the trimodal Gaussian model more deeply. The Gaussian distribution is applied to produce the trimodal form of the Gaussian, known as the trimodal normal distribution. The tractability of the analytical expression of the normal distribution and the properties of the trimodal normal distribution are important reasons why we choose the normal distribution. Furthermore, existing distributions should be improved to be capable of modelling efficiently when there exists a trimodal form in a data set. After the new density function is proposed, estimating its parameters is important. Since the Mathematica 12.0 software has optimization tools and important modelling techniques, computational steps are performed using this software. Bootstrapped forms of real data sets are applied to show the modelling ability of the proposed distribution when real data sets show trimodality.
Fair and Argumentative Language Modeling for Computational Argumentation ; Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models. To this end, we introduce ABBA, a novel resource for bias measurement specifically tailored to argumentation. We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models, using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning. Finally, we analyze the potential impact of language model debiasing on the performance in argument quality prediction, a downstream task of computational argumentation. Our results show that we are able to successfully and sustainably remove bias in general and argumentative language models while preserving (and sometimes improving) model performance in downstream tasks. We make all experimental code and data available at https://github.com/umanlp/FairArgumentativeLM.
Are Multimodal Transformers Robust to Missing Modality? ; Multimodal data collected from the real world are often imperfect due to missing modalities. Therefore multimodal models that are robust against modal-incomplete data are highly preferred. Recently, Transformer models have shown great success in processing multimodal data. However, existing work has been limited to either architecture designs or pre-training strategies; whether Transformer models are naturally robust against missing-modal data has rarely been investigated. In this paper, we present the first-of-its-kind work to comprehensively investigate the behavior of Transformers in the presence of modal-incomplete data. Unsurprisingly, we find Transformer models are sensitive to missing modalities, while different modal fusion strategies will significantly affect the robustness. What surprised us is that the optimal fusion strategy is dataset dependent even for the same Transformer model; there does not exist a universal strategy that works in general cases. Based on these findings, we propose a principled method to improve the robustness of Transformer models by automatically searching for an optimal fusion strategy regarding input data. Experimental validations on three benchmarks support the superior performance of the proposed method.
Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs ; Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics. Neural operators, generalizations of neural networks capable of learning maps between infinite-dimensional spaces, are strong tools for solving parametric PDEs. However, they lack the ability to model SPDEs, which usually have poor regularity due to the driving noise. As the theory of regularity structures has achieved great success in analyzing SPDEs and provides the concept of model feature vectors that well-approximate SPDEs' solutions, we propose the Neural Operator with Regularity Structure (NORS), which incorporates the feature vectors for modeling dynamics driven by SPDEs. We conduct experiments on various SPDEs, including the dynamic $\Phi^4_1$ model and the 2d stochastic Navier-Stokes equation, and the results demonstrate that NORS is resolution-invariant, efficient, and achieves one order of magnitude lower error with a modest amount of data.
Form Factors in Higgs Couplings from Physics Beyond the Standard Model ; We consider the momentum-dependent effects in Higgs couplings generated by physics beyond the standard model. We take a model-dependent approach, in which we can fully compute the non-local effects from physics not directly reachable at LHC energies. We consider several scenarios, including composite Higgs models, additional scalars, and the continuum contributions of a quasi-conformal sector, as examples. For each specific model, we are able to obtain the form factor, with which it is then possible to fully simulate the effects in kinematic distributions. The momentum-dependent effects appear as a consequence of off-shellness in the process. We show how the sensitivity of different channels to the various models depends on how the flow of off-shellness appears in the Higgs couplings.
Second-order accuracy metrics for scoring models and their practical use ; The paper proposes new second-order accuracy metrics for scoring or rating models, which show the target preference of the model (whether it is better at diagnosing good objects or bad ones) for a fixed, generally accepted predictive power determined by the first-order metric known as the Gini index. There are two metrics, and both have an integral representation and a numerical one. The numerical representation of the metrics is of two types: the first is based on the binary events used to evaluate the model, the second on the default probability given by the model. Comparing the results of calculating the metrics allows one to validate the calibration settings of the scoring or rating model and reveals its distortions. The article provides examples of calculating the second-order accuracy metrics for ratings of several rating agencies, as well as for the well-known approach to calibration based on van der Burgt's ROC curves.
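For context, the first-order metric referenced above is straightforward to compute: the Gini index is a linear transform of the ROC AUC, e.g.:

```python
from sklearn.metrics import roc_auc_score

y = [0, 0, 1, 1, 0, 1]                  # 1 = default/bad, 0 = good
scores = [0.2, 0.4, 0.8, 0.6, 0.1, 0.9]  # model scores (hypothetical)

gini = 2 * roc_auc_score(y, scores) - 1
print(f"Gini = {gini:.3f}")
```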
A Variational Autoencoder for Heterogeneous Temporal and Longitudinal Data ; The variational autoencoder (VAE) is a popular deep latent variable model used to analyse high-dimensional datasets by learning a low-dimensional latent representation of the data. It simultaneously learns a generative model and an inference network to perform approximate posterior inference. Recently proposed extensions to VAEs that can handle temporal and longitudinal data have applications in healthcare, behavioural modelling, and predictive maintenance. However, these extensions do not account for heterogeneous data (i.e., data comprising continuous and discrete attributes), which is common in many real-life applications. In this work, we propose the heterogeneous longitudinal VAE (HL-VAE) that extends the existing temporal and longitudinal VAEs to heterogeneous data. HL-VAE provides efficient inference for high-dimensional datasets and includes likelihood models for continuous, count, categorical, and ordinal data while accounting for missing observations. We demonstrate our model's efficacy through simulated as well as clinical datasets, and show that our proposed model achieves competitive performance in missing value imputation and predictive accuracy.
Model Reduction via Dynamic Mode Decomposition ; This work proposes a new framework of model reduction for parametric complex systems. The framework employs a popular model reduction technique, dynamic mode decomposition (DMD), which is capable of combining data-driven learning and physics ingredients based on Koopman operator theory. In the offline step of the proposed framework, DMD constructs a low-rank linear surrogate model for the high-dimensional quantities of interest (QoIs) derived from the nonlinear complex high-fidelity models (HFMs) of unknown forms. Then, in the online step, the resulting local reduced-order bases (ROBs) and parametric reduced-order models (PROMs) at the training parameter sample points are interpolated to construct a new PROM with the corresponding ROB for a new set of target/test parameter values. The interpolations need to be done on the appropriate manifolds within consistent sets of generalized coordinates. The proposed framework is illustrated by numerical examples for both linear and nonlinear problems. In particular, its advantages in computational cost and accuracy are demonstrated by comparisons with projection-based proper orthogonal decomposition (POD-PROM) and Kriging.
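The DMD building block of the offline step admits a short, standard implementation (exact, SVD-based DMD on a snapshot matrix); the toy data below are illustrative:

```python
import numpy as np

def dmd(snapshots, r):
    """Exact DMD: fit a rank-r linear operator to snapshot pairs.

    snapshots: (n_states, n_times) matrix; r: truncation rank.
    Returns DMD eigenvalues and modes such that x_{k+1} ~ A x_k.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s        # projected operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W                   # exact DMD modes
    return eigvals, modes

# Toy data: two spatial modes with distinct exponential decay rates.
t = np.linspace(0, 10, 201)
x = np.linspace(0, 1, 8)
data = (np.outer(np.sin(np.pi * x), np.exp(-0.1 * t))
        + np.outer(np.cos(np.pi * x), np.exp(-0.3 * t)))
eigvals, modes = dmd(data, r=2)
print(np.sort(np.log(eigvals.real) / (t[1] - t[0])))  # approx [-0.3, -0.1]
```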
A majorization-minimization algorithm for nonnegative binary matrix factorization ; This paper tackles the problem of decomposing binary data using matrix factorization. We consider the family of mean-parametrized Bernoulli models, a class of generative models that is well suited for modeling binary data and enables interpretability of the factors. We factorize the Bernoulli parameter and consider an additional Beta prior on one of the factors to further improve the model's expressive power. While similar models have been proposed in the literature, they only exploit the Beta prior as a proxy to ensure a valid Bernoulli parameter in a Bayesian setting; in practice it reduces to a uniform or uninformative prior. Besides, estimation in these models has focused on costly Bayesian inference. In this paper, we propose a simple yet very efficient majorization-minimization algorithm for maximum a posteriori estimation. Our approach leverages the Beta prior whose parameters can be tuned to improve performance in matrix completion tasks. Experiments conducted on three public binary datasets show that our approach offers an excellent trade-off between prediction performance, computational complexity, and interpretability.
Bunching instability and asymptotic properties in epitaxial growth with elasticity effects: continuum model ; We study the continuum epitaxial model for elastically interacting atomic steps on vicinal surfaces proposed by Xiang and E (Xiang, SIAM J. Appl. Math. 63:241-258, 2002; Xiang and E, Phys. Rev. B 69:035409, 2004). The nonlocal term and the singularity complicate the analysis of its PDE. In this paper, we first generalize this model to the Lennard-Jones (m,n) interaction between steps. Based on several important formulations of the nonlocal energy, we prove the existence, symmetry, unimodality, and regularity of the energy minimizer in the periodic setting. In particular, the symmetry and unimodality of the minimizer imply that it has a bunching profile. Furthermore, we derive the minimum energy scaling law for the original continuum model. All results are consistent with the corresponding results proved for discrete models by Luo et al. (Multiscale Model. Simul. 14:737-771, 2016).
Physical Modeling using Recurrent Neural Networks with Fast Convolutional Layers ; Discretetime modeling of acoustic, mechanical and electrical systems is a prominent topic in the musical signal processing literature. Such models are mostly derived by discretizing a mathematical model, given in terms of ordinary or partial differential equations, using established techniques. Recent work has applied the techniques of machinelearning to construct such models automatically from data for the case of systems which have lumped states described by scalar values, such as electrical circuits. In this work, we examine how similar techniques are able to construct models of systems which have spatially distributed rather than lumped states. We describe several novel recurrent neural network structures, and show how they can be thought of as an extension of modal techniques. As a proof of concept, we generate synthetic data for three physical systems and show that the proposed network structures can be trained with this data to reproduce the behavior of these systems.
ContextAware Language Modeling for GoalOriented Dialogue Systems ; Goaloriented dialogue systems face a tradeoff between fluent language generation and taskspecific control. While supervised learning with large language models is capable of producing realistic text, how to steer such responses towards completing a specific task without sacrificing language quality remains an open question. In this work, we formulate goaloriented dialogue as a partially observed Markov decision process, interpreting the language model as a representation of both the dynamics and the policy. This view allows us to extend techniques from learningbased control, such as task relabeling, to derive a simple and effective method to finetune language models in a goalaware way, leading to significantly improved task performance. We additionally introduce a number of training strategies that serve to better focus the model on the task at hand. We evaluate our method, ContextAware Language Models CALM, on a practical flightbooking task using AirDialogue. Empirically, CALM outperforms the stateoftheart method by 7% in terms of task success, matching humanlevel task performance.
Robust estimation and model diagnostic of insurance loss data a weighted likelihood approach ; This paper presents a scorebased weighted likelihood estimator SWLE for robust estimation of generalized linear models GLMs for insurance loss data. The SWLE exhibits limited sensitivity to outliers, theoretically justifying its robustness against model contaminations. Also, with the specially designed weight function that effectively diminishes the contributions of extreme losses to the GLM parameter estimations, most statistical quantities can still be derived analytically, minimizing the computational burden for parameter calibration. Apart from robust estimations, the SWLE can also act as a quantitative diagnostic tool to detect outliers and systematic model misspecifications. Motivated by the coverage modifications which often make insurance losses randomly censored and truncated, the SWLE is extended to accommodate censored and truncated data. We exemplify the SWLE on three simulation studies and two real insurance datasets. Empirical results suggest that the SWLE produces more reliable parameter estimates than the MLE if outliers contaminate the dataset. The SWLE diagnostic tool also successfully detects any systematic model misspecifications with high power, suggesting some potential model improvements.
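The following toy sketch illustrates the weighted-likelihood idea on a lognormal severity model: tail observations are smoothly downweighted in the score equations, so a small fraction of contaminating losses barely moves the estimate. The weight function and model are illustrative assumptions, not the paper's exact SWLE construction.

```python
import numpy as np
from scipy.optimize import root

def weighted_score(params, y, c=3.0):
    """Weighted likelihood score for a lognormal severity model.
    Observations far in the tail (|z| large) are exponentially
    downweighted; c controls the degree of robustness.
    Illustrative weights, not the paper's SWLE weight function."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(y) - mu) / sigma
    w = np.exp(-0.5 * (z / c) ** 2)   # smooth downweighting of extremes
    s_mu = w * z / sigma              # weighted score w.r.t. mu
    s_ls = w * (z**2 - 1)             # weighted score w.r.t. log sigma
    return np.array([s_mu.sum(), s_ls.sum()])

rng = np.random.default_rng(0)
clean = rng.lognormal(mean=1.0, sigma=0.5, size=950)
outliers = rng.lognormal(mean=4.0, sigma=0.3, size=50)  # contamination
y = np.concatenate([clean, outliers])

mle_mu = np.log(y).mean()                       # ordinary MLE of mu
x0 = np.array([mle_mu, np.log(np.log(y).std())])
wle = root(weighted_score, x0=x0, args=(y,)).x  # weighted estimate
print("MLE mu:", round(mle_mu, 3), " weighted mu:", round(wle[0], 3))
```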
Federated Learning with Lossy Distributed Source Coding Analysis and Optimization ; Recently, federated learning FL, which replaces data sharing with model sharing, has emerged as an efficient and privacyfriendly machine learning ML paradigm. One of the main challenges in FL is the huge communication cost for model aggregation. Many compression/quantization schemes have been proposed to reduce the communication cost for model aggregation. However, the following question remains unanswered: What is the fundamental tradeoff between the communication cost and the FL convergence performance? In this paper, we manage to answer this question. Specifically, we first put forth a general framework for model aggregation performance analysis based on the ratedistortion theory. Under the proposed analysis framework, we derive an inner bound of the ratedistortion region of model aggregation. We then conduct an FL convergence analysis to connect the aggregation distortion and the FL convergence performance. We formulate an aggregation distortion minimization problem to improve the FL convergence performance. Two algorithms are developed to solve the above problem. Numerical results on aggregation distortion, convergence performance, and communication cost demonstrate that the baseline model aggregation schemes still have great potential for further improvement.
Wide sampling and efficient updating Monte Carlo algorithms for dimer models ; The quantum dimer model is a lowenergy and efficient model to study quantum spin systems and strongly correlated physics. As a preliminary step, and without loss of generality, we study classical dimers on the square lattice by means of the Monte Carlo method. For efficient state updating in the dimer model, we introduce a highly efficient loop updating algorithm directed by an energy criterion, called the energy directed loop algorithm, and improve the pocket algorithm, comparing both with the traditional directed loop algorithm. By these comparisons, our energy directed loop algorithm increases the convergence speed of the Monte Carlo sampling and shortens the autocorrelation time in the classical hardcore dimer model. Both the improved pocket algorithm and the energy directed loop algorithm can be used in various dimer models and succeed in traversing the topological sectors rapidly.
Eternal vs singular observers in interacting dark energydark matter models ; Interacting dark energydark matter models have been widely analyzed in the literature in an attempt to find traces of new physics beyond the usual cosmological LambdaCDM models. Such a coupling between both dark components is usually introduced in a phenomenological way through a flux in the continuity equation. However, models with a Lagrangian formulation are also possible. A class of the latter assumes a conformaldisformal coupling that leads to a fifth force on the dark matter component, which consequently does not follow the same geodesics as the other baryonic, radiation, and dark energy matter sources. Here we analyze how the usual cosmological singularities of the standard matter frame are seen from the dark matter one, concluding that by choosing an appropriate coupling, dark matter observers will see no singularities but a nonbeginning, nonending universe. By considering two simple phenomenological models we show that such a type of coupling can fit observational data as well as the usual LambdaCDM model.
Studying First Passage Problems using Neural Networks A Case Study in the SlitWell Microfluidic Device ; This study presents deep neural network solutions to a timeintegrated Smoluchowski equation modeling the mean first passage time of nanoparticles traversing the slitwell microfluidic device. This physical scenario is representative of a broader class of parameterized first passage problems in which key output metrics are dictated by a complicated interplay of problem parameters and system geometry. Specifically, whereas these types of problems are commonly studied using particle simulations of stochastic differential equation models, here the corresponding partial differential equation model is solved using a method based on deep neural networks. The results illustrate that the neural network method is synergistic with the timeintegrated Smoluchowski model: together, these are used to construct continuous mappings from key physical inputs (applied voltage and particle diameter) to key output metrics (mean first passage time and effective mobility). In particular, this capability is a unique advantage of the timeintegrated Smoluchowski model over the corresponding stochastic differential equation models. Furthermore, the neural network method is demonstrated to easily and reliably handle geometrymodifying parameters, which is generally difficult to accomplish using other methods.
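As a concrete anchor for the underlying mathematics, the sketch below solves a one-dimensional analogue of the time-integrated Smoluchowski problem, D T''(x) = -1 with absorbing boundaries, by finite differences and checks it against the closed form T(x) = x(L - x)/(2D); the slit-well geometry and the neural network solver of the paper are of course far richer.

```python
import numpy as np

# Mean first-passage time T(x) of a freely diffusing particle on [0, L]
# with absorbing ends solves the stationary (time-integrated) equation
#   D * T''(x) = -1,   T(0) = T(L) = 0,
# a 1D analogue of the Smoluchowski MFPT problem in the paper.
D, L, n = 2.0, 1.0, 201
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Tridiagonal finite-difference Laplacian on the interior nodes.
main = -2.0 * np.ones(n - 2)
off = np.ones(n - 3)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) * D / h**2
rhs = -np.ones(n - 2)

T = np.zeros(n)
T[1:-1] = np.linalg.solve(A, rhs)

exact = x * (L - x) / (2.0 * D)
print("max abs error vs analytic x(L-x)/2D:", np.abs(T - exact).max())
```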
Semiparametric Transformation Model with Measurement Error in Covariates An Instrumental Variable Approach ; The linear transformation model provides a general framework for analyzing censored survival data with covariates. The proportional hazards and proportional odds models are special cases of the linear transformation model. In biomedical studies, covariates with measurement error may occur in survival data. In this work, we propose a method to obtain estimators of the regression coefficients in the linear transformation model when the covariates are subject to measurement error. In the proposed method, we assume that instrumental variables are available. We develop counting process based estimating equations for finding the estimators of regression coefficients. We prove the large sample properties of the estimators using the martingale representation of the regression estimators. The finite sample performance of the estimators is evaluated through an extensive Monte Carlo simulation study. Finally, we illustrate the proposed method using the AIDS clinical trial ACTG 175 data.
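A linear toy example conveys why instruments help under covariate measurement error: naive regression on the error-prone covariate is attenuated, while the instrumental-variable ratio recovers the true coefficient. This two-stage-least-squares-style sketch is only an analogue; the paper's method uses counting-process estimating equations for censored survival data.

```python
import numpy as np

# Toy illustration of the instrumental-variable idea: the covariate X is
# observed with error as W = X + U; regressing the outcome on W is
# attenuated, while an instrument Z (correlated with X, independent of
# the errors) restores consistency.  All quantities are made up.
rng = np.random.default_rng(0)
n, beta = 100_000, 1.5
Z = rng.standard_normal(n)             # instrument
X = 0.8 * Z + rng.standard_normal(n)   # true covariate
W = X + rng.standard_normal(n)         # error-prone measurement
Y = beta * X + rng.standard_normal(n)  # outcome

naive = np.cov(W, Y)[0, 1] / np.var(W)        # attenuated estimate
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, W)[0, 1]  # IV estimator
print(f"naive: {naive:.3f}  IV: {iv:.3f}  truth: {beta}")
```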
Curriculum Learning for Dense Retrieval Distillation ; Recent work has shown that more effective dense retrieval models can be obtained by distilling ranking knowledge from an existing base reranking model. In this paper, we propose a generic curriculum learning based optimization framework called CLDRD that controls the difficulty level of training data produced by the reranking teacher model. CLDRD iteratively optimizes the dense retrieval student model by increasing the difficulty of the knowledge distillation data made available to it. In more detail, we initially provide the student model with coarsegrained preference pairs between documents in the teacher's ranking and progressively move towards finergrained pairwise document ordering requirements. In our experiments, we apply a simple implementation of the CLDRD framework to enhance two stateoftheart dense retrieval models. Experiments on three public passage retrieval datasets demonstrate the effectiveness of our proposed framework.
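A generic sketch of such a curriculum schedule might look as follows: training pairs are drawn with a rank gap in the teacher's ordering that shrinks stage by stage, moving from coarse to fine distinctions. The function names and the exact schedule are illustrative, not the CLDRD specification.

```python
import numpy as np

def curriculum_pairs(teacher_scores, stage, n_stages):
    """Generate (pos, neg) document pairs whose difficulty increases with
    the stage: early stages contrast documents far apart in the teacher's
    ranking, later stages contrast near-adjacent ones."""
    order = np.argsort(-teacher_scores)  # teacher ranking, best first
    max_gap = len(order) - 1
    # Shrink the rank gap between the paired documents as training proceeds.
    gap = max(1, int(max_gap * (1 - stage / max(1, n_stages - 1))))
    return [(int(order[i]), int(order[i + gap]))
            for i in range(len(order) - gap)]

rng = np.random.default_rng(0)
scores = rng.standard_normal(8)          # teacher scores for 8 documents
for stage in range(3):
    pairs = curriculum_pairs(scores, stage, n_stages=3)
    print(f"stage {stage}: pairs -> {pairs[:3]}")
```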
Stability analysis of stochastic secondorder macroscopic continuum models and numerical simulations ; Secondorder macroscopic continuum models have been continually improved for decades to reproduce empirical observations. Recently, a series of experimental studies have suggested that stochastic factors contribute significantly to destabilizing traffic flow. Nevertheless, the traffic flow stability of stochastic secondorder macroscopic continuum models has not received the attention it deserves in past studies. More importantly, we have found that the destabilizing aspect of stochasticity is still not correctly validated in the existing theoretical stability analysis. In this paper, we analytically study the impact of stochasticity on traffic flow stability for a general stochastic secondorder macroscopic model by using the direct Lyapunov method. Numerical simulations have been carried out for different typical stochastic secondorder macroscopic models. Our analytical stability analysis has been validated, and our methodology has been proved to be more efficient. Our study theoretically reveals that the presence of stochasticity has a destabilizing effect in stochastic macroscopic models.
Derivation and analysis of a phase field crystal model for a mixture of active and passive particles ; We discuss an active phase field crystal PFC model that describes a mixture of active and passive particles. First, a microscopic derivation from dynamical density functional theory DDFT is presented that includes a systematic treatment of the relevant orientational degrees of freedom. Of particular interest is the construction of the nonlinear and coupling terms. This allows for interesting insights into the microscopic justification of phenomenological constructions used in PFC models for active particles and mixtures, the approximations required for obtaining them, and possible generalizations. Second, the derived model is investigated using linear stability analysis and nonlinear methods. It is found that the model allows for a rich nonlinear behavior with states ranging from steady periodic and localized states to various timeperiodic states. The latter include standing, traveling, and modulated waves corresponding to spatially periodic and localized traveling, wiggling, and alternating peak patterns and their combinations.
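For orientation, the linear stability computation is easy to reproduce for the passive PFC baseline, where perturbations about the homogeneous state grow at rate lambda(k) = -k^2 (r + (1 - k^2)^2); the active and mixture terms of the paper add coupled fields on top of this. A minimal sketch:

```python
import numpy as np

# Growth rate of small perturbations ~ exp(i k x + lambda t) about the
# homogeneous state of the *passive* PFC equation
#   d psi / dt = laplacian[(r + (1 + laplacian)^2) psi + psi^3],
# which linearizes to lambda(k) = -k^2 (r + (1 - k^2)^2).  A baseline
# sketch only; the paper's active/mixture model couples further fields.
r = -0.3
k = np.linspace(0.0, 1.6, 9)
lam = -k**2 * (r + (1.0 - k**2) ** 2)
for ki, li in zip(k, lam):
    print(f"k = {ki:4.2f}   lambda(k) = {li:+.4f}")
print("unstable wavenumbers (lambda > 0):", k[lam > 0])
```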
ExSum From Local Explanations to Model Understanding ; Interpretability methods are developed to understand the working mechanisms of blackbox models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary ExSum, a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.
Dynamic and ContextDependent Stock Price Prediction Using Attention Modules and News Sentiment ; The growth of machinereadable data in finance, such as alternative data, requires new modeling techniques that can handle nonstationary and nonparametric data. Due to the underlying causal dependence and the size and complexity of the data, we propose a new modeling approach for financial time series data, the alphatRIM recurrent independent mechanism. This architecture makes use of keyvalue attention to integrate topdown and bottomup information in a contextdependent and dynamic way. To model the data in such a dynamic manner, the alphatRIM utilizes an exponentially smoothed recurrent neural network, which can model nonstationary time series data, combined with a modular and independent recurrent structure. We apply our approach to the closing prices of three selected stocks of the S&P 500 universe as well as their news sentiment scores. The results suggest that the alphatRIM is capable of reflecting the causal structure between stock prices and news sentiment, as well as the seasonality and trends. Consequently, this modeling approach markedly improves the generalization performance, that is, the prediction of unseen data, and outperforms stateoftheart networks such as long shortterm memory models.
Cosmology in f(R,Lm) gravity ; In this letter, we investigate the cosmic expansion scenario of the universe in the framework of f(R,Lm) gravity theory. We consider a nonlinear f(R,Lm) model, specifically f(R,Lm) = R/2 + Lm^n + beta, where n and beta are free model parameters. We derive the motion equations for the flat FLRW universe and obtain the exact solution of the corresponding field equations. We then estimate the best fit ranges of the model parameters by using the updated H(z) dataset consisting of 57 points and the Pantheon dataset consisting of 1048 points. Further, we investigate the physical behavior of the density and the deceleration parameter. The evolution of the deceleration parameter depicts a transition from deceleration to acceleration phases of the universe. Moreover, we analyze the stability of the solution of our cosmological model under the observational constraint by considering a linear perturbation. Lastly, we investigate the behavior of the Om diagnostic parameter, and we observe that our model shows quintessence type behavior. We conclude that our f(R,Lm) cosmological model agrees with the recent observational studies and can efficiently describe the late time cosmic acceleration.
NonDebye impedance and relaxation models for dissipative electrochemical capacitors ; Electrochemical capacitors are a class of energy devices in which complex mechanisms of accumulation and dissipation of electric energy take place when connected to a charging or discharging power system. Reliably modeling their frequencydomain and timedomain behaviors is crucial for their proper design and integration in engineering applications, knowing that electrochemical capacitors in general exhibit anomalous tendencies that cannot be adequately captured with traditional integerorderbased models. In this study we first review some of the widely used fractionalorder models for the description of impedance and relaxation functions of dissipative resistivecapacitive systems, namely the Cole-Cole, Davidson-Cole, and Havriliak-Negami models. We then propose and derive new q-deformed models based on modified evolution equations for the charge or voltage when the device is discharged into a parallel resistive load. We verify our results on anomalous spectral impedance response and timedomain relaxation data for voltage and charge obtained from a commercial supercapacitor.
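The classical models named above are one-liners to evaluate numerically. The sketch below implements the Havriliak-Negami response, of which the Debye, Cole-Cole, and Davidson-Cole forms are special cases; the parameter values are illustrative.

```python
import numpy as np

def havriliak_negami(omega, R, tau, alpha, beta):
    """Normalized Havriliak-Negami response R / (1 + (j w tau)^alpha)^beta.
    alpha = beta = 1 recovers the ideal Debye case; beta = 1 gives
    Cole-Cole and alpha = 1 gives Davidson-Cole."""
    return R / (1.0 + (1j * omega * tau) ** alpha) ** beta

omega = np.logspace(-2, 4, 7)   # omega[2] = 1.0 = 1/tau below
tau, R = 1.0, 1.0
cases = {"Debye": (1.0, 1.0), "Cole-Cole": (0.8, 1.0),
         "Davidson-Cole": (1.0, 0.8), "Havriliak-Negami": (0.8, 0.8)}
for name, (a, b) in cases.items():
    Z = havriliak_negami(omega, R, tau, a, b)
    print(f"{name:18s} |Z| at omega = 1/tau: {abs(Z[2]):.3f}")
```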
Testing gravity with the cosmic microwave background constraints on modified gravity with two tensorial degrees of freedom ; We provide a cosmological test of modified gravity with two tensorial degrees of freedom and no extra propagating scalar mode. The theory of gravity we consider admits a cosmological model that is indistinguishable from the LambdaCDM model at the level of the background evolution. The model has a single modifiedgravity parameter beta, the effect of which can be seen in linear perturbations, though no extra scalar mode is propagating. Using the Boltzmann code modified to incorporate the present model, we derive the constraint -0.047 < beta < 0.028 at 68% confidence from Planck CMB data. Since our modified gravity model can hardly be constrained by the Solar System tests and gravitationalwave propagation, our result offers the first observational test of the model.
Dataaided Underwater Acoustic Ray Propagation Modeling ; Acoustic propagation models are widely used in numerous oceanic and other underwater applications. Most conventional models are approximate solutions of the acoustic wave equation, and require accurate environmental knowledge to be available beforehand. Environmental parameters may not always be easily or accurately measurable. While datadriven techniques might allow us to model acoustic propagation without the need for extensive prior environmental knowledge, such techniques tend to be datahungry and often infeasible in oceanic applications where data collection is difficult and expensive. We propose a dataaided ray physics based high frequency acoustic propagation modeling approach that enables us to train models with only a small amount of data. The proposed framework is not only dataefficient, but also offers flexibility to incorporate varying degrees of environmental knowledge, and generalizes well to permit extrapolation beyond the area where data was collected. We demonstrate the feasibility and applicability of our method through four numerical case studies, and one controlled experiment. We also benchmark our method's performance against classical datadriven techniques.
Reliable Offline Modelbased Optimization for Industrial Process Control ; In the research area of offline modelbased optimization, novel and promising methods are frequently developed. However, implementing such methods in realworld industrial systems such as production lines for process control is oftentimes a frustrating process. In this work, we address two important problems to extend the current success of offline modelbased optimization to industrial process control problems: 1 how to learn a reliable dynamics model from offline data for industrial processes; 2 how to learn a reliable but not overconservative control policy from offline data by utilizing existing modelbased optimization algorithms. Specifically, we propose a dynamics model based on an ensemble of conditional generative adversarial networks to achieve accurate reward calculation in industrial scenarios. Furthermore, we propose an epistemicuncertaintypenalized reward evaluation function which can effectively avoid giving overestimated rewards to outofdistribution inputs during the learning/searching of the optimal control policy. We provide extensive experiments with the proposed method on two representative cases (a discrete control case and a continuous control case), showing that our method compares favorably to several baselines in offline policy learning for industrial process control.
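The uncertainty-penalized evaluation can be conveyed in a few lines: score a candidate input by the ensemble's mean predicted reward minus a multiple of the ensemble spread, so out-of-distribution candidates are not rewarded for model disagreement. The toy ensemble below is a stand-in, not the paper's conditional GAN dynamics model.

```python
import numpy as np

def penalized_reward(models, x, lam=2.0):
    """Mean ensemble prediction minus a penalty on the ensemble spread,
    so inputs with high epistemic uncertainty cannot win by overestimated
    rewards.  Generic sketch with toy stand-in models."""
    preds = np.array([m(x) for m in models])
    return preds.mean() - lam * preds.std()

# Toy ensemble: members agree near the data (x ~ 1) and diverge far away.
rng = np.random.default_rng(0)
models = [lambda x, a=a: -(x - 1.0) ** 2 + a * x ** 3
          for a in rng.normal(0.0, 0.2, size=5)]

for x in (0.5, 1.0, 4.0):
    print(f"x = {x}: penalized reward = {penalized_reward(models, x):.2f}")
```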
BayesMix Bayesian Mixture Models in C++ ; We describe BayesMix, a C++ library for MCMC posterior simulation for general Bayesian mixture models. The goal of BayesMix is to provide a selfcontained ecosystem to perform inference for mixture models to computer scientists, statisticians, and practitioners. The key idea of this library is extensibility, as we wish the users to easily adapt our software to their specific Bayesian mixture models. In addition to the several models and MCMC algorithms for posterior inference included in the library, new users with little familiarity with mixture models and the related MCMC algorithms can extend our library with minimal coding effort. Our library is computationally very efficient when compared to competitor software. Examples show that the typical code runtimes are from two to 25 times faster than competitors for data dimensions from one to ten. Our library is publicly available on GitHub at https://github.com/bayesmix-dev/bayesmix.
MulT An EndtoEnd Multitask Learning Transformer ; We propose an endtoend Multitask Learning Transformer framework, named MulT, to simultaneously learn multiple highlevel vision tasks, including depth estimation, semantic segmentation, reshading, surface normal estimation, 2D keypoint detection, and edge detection. Based on the Swin transformer model, our framework encodes the input image into a shared representation and makes predictions for each vision task using taskspecific transformerbased decoder heads. At the heart of our approach is a shared attention mechanism modeling the dependencies across the tasks. We evaluate our model on several multitask benchmarks, showing that our MulT framework outperforms both the stateoftheart multitask convolutional neural network models and all the respective single task transformer models. Our experiments further highlight the benefits of sharing attention across all the tasks, and demonstrate that our MulT model is robust and generalizes well to new domains. Our project website is at https://ivrl.github.io/MulT.
Transformers as Neural Augmentors Class Conditional Sentence Generation via Variational Bayes ; Data augmentation methods for Natural Language Processing tasks have been explored in recent years; however, they are limited, and it is hard to capture diversity at the sentence level. Besides, it is not always possible to perform data augmentation on supervised tasks. To address those problems, we propose a neural data augmentation method, which is a combination of a Conditional Variational Autoencoder and an encoderdecoder Transformer model. While encoding and decoding the input sentence, our model captures the syntactic and semantic representation of the input language with its class condition. Following the developments of the past years in pretrained language models, we train and evaluate our models on several benchmarks to strengthen the downstream tasks. We compare our method with 3 different augmentation techniques. The presented results show that our model increases the performance of current models compared to other data augmentation techniques with a small amount of computation power.
DeepStruct Pretraining of Language Models for Structure Prediction ; We introduce a method for improving the structural understanding abilities of language models. Unlike previous approaches that finetune the models with taskspecific augmentation, we pretrain language models on a collection of taskagnostic corpora to generate structures from text. Our structure pretraining enables zeroshot transfer of the learned knowledge that models have about the structure tasks. We study the performance of this approach on 28 datasets, spanning 10 structure prediction tasks including open information extraction, joint entity and relation extraction, named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, factual probe, intent detection, and dialogue state tracking. We further enhance the pretraining with the taskspecific training sets. We show that a 10B parameter language model transfers nontrivially to most tasks and obtains stateoftheart performance on 21 of 28 datasets that we evaluate.
Explaining Causal Models with Argumentation the Case of Bivariate Reinforcement ; Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks AFs from causal models for the purpose of forging explanations for the models' outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising the relations in the causal model argumentatively. We demonstrate our methodology by reinterpreting the property of bivariate reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.
Improving Shape Awareness and Interpretability in Deep Networks Using Geometric Moments ; Deep networks for image classification often rely more on texture information than object shape. While efforts have been made to make deep models shapeaware, it is often difficult to make such models simple, interpretable, or rooted in known mathematical definitions of shape. This paper presents a deeplearning model inspired by geometric moments, a classically wellunderstood approach to measuring shaperelated properties. The proposed method consists of a trainable network for generating coordinate bases and affine parameters that make the features geometrically invariant, yet in a taskspecific manner. The proposed model improves the interpretability of the final features. We demonstrate the effectiveness of our method on standard image classification datasets. The proposed model achieves higher classification performance compared to the baseline and standard ResNet models while substantially improving interpretability.
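Geometric moments themselves are straightforward to compute, which is part of their appeal; the sketch below evaluates raw and central moments of an image with numpy and checks translation invariance of the central moments. It illustrates the classical quantities, not the paper's trainable architecture.

```python
import numpy as np

def geometric_moments(img, p_max=2):
    """Raw moments m_pq = sum_x sum_y x^p y^q I(x, y) of a grayscale
    image, plus central moments, which are translation invariant."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    m = {(p, q): (xs**p * ys**q * img).sum()
         for p in range(p_max + 1) for q in range(p_max + 1)}
    cx, cy = m[1, 0] / m[0, 0], m[0, 1] / m[0, 0]   # intensity centroid
    mu = {(p, q): ((xs - cx)**p * (ys - cy)**q * img).sum()
          for p in range(p_max + 1) for q in range(p_max + 1)}
    return m, mu

# A bright square blob; shifting it leaves the central moments unchanged.
img = np.zeros((32, 32))
img[4:12, 6:14] = 1.0
shifted = np.roll(img, (10, 10), axis=(0, 1))

_, mu_a = geometric_moments(img)
_, mu_b = geometric_moments(shifted)
print("mu_20 original vs shifted:", mu_a[2, 0], mu_b[2, 0])
```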
Internal ModelBased Online Optimization ; In this paper we propose a modelbased approach to the design of online optimization algorithms, with the goal of improving the tracking of the solution trajectory w.r.t. stateoftheart methods. We focus first on quadratic problems with a timevarying linear term, and use digital control tools, namely a robust internal model principle, to propose a novel online algorithm that can achieve zero tracking error by modeling the cost with a dynamical system. We prove the convergence of the algorithm for both strongly convex and convex problems. We further discuss the sensitivity of the proposed method to model uncertainties and quantify its performance. We discuss how the proposed algorithm can be applied to general nonquadratic problems using an approximate model of the cost, and analyze the convergence by leveraging the small gain theorem. We present numerical results that showcase the superior performance of the proposed algorithms over previous methods for both quadratic and nonquadratic problems.
Cosmic dynamics and qualitative study of Rastall model with spatial curvature ; We investigate the cosmic dynamics of Rastall gravity in nonflat FriedmannRobertsonWalker FRW spacetime with a barotropic fluid. In this context, we are concerned with the class of models satisfying the affine equation of state. We derive the autonomous system for the Rastall model with a barotropic fluid. We apply the derived system to investigate the critical points for expanding and bouncing cosmologies, their stable nature, and cosmological properties. The expanding cosmology yields a de Sitter universe at late times with a decelerating past having matter and radiation dominated phases. Investigation of the autonomous system for bouncing cosmology yields oscillating solutions in the positive spatial curvature region. We also investigate the consequences of setting up a model of a nonsingular universe by using energy conditions. The distinct features of the Rastall model compared to the standard model have been discussed in detail.
Learning Locality and Isotropy in Dialogue Modeling ; Existing dialogue modeling methods have achieved promising performance on various dialogue tasks with the aid of Transformers and largescale pretrained language models. However, some recent studies revealed that the context representations produced by these methods suffer from the problem of anisotropy. In this paper, we find that the generated representations are also not conversational, losing the conversation structure information during the context modeling stage. To this end, we identify two properties in dialogue modeling, i.e., locality and isotropy, and present a simple method for dialogue representation calibration, namely SimDRC, to build isotropic and conversational feature spaces. Experimental results show that our approach significantly outperforms the current stateoftheart models on three dialogue tasks across the automatic and human evaluation metrics. More indepth analyses further confirm the effectiveness of our proposed approach.
MetaSSD MetaLearned SelfSupervised Detection ; Deep learningbased symbol detectors gain increasing attention due to their simpler algorithm design compared to traditional modelbased algorithms such as Viterbi and BCJR. The supervised learning framework is often employed to predict the input symbols, where training symbols are used to train the model. There are two major limitations in the supervised approaches: a a model needs to be retrained from scratch when new training symbols arrive, in order to adapt to a new channel status, and b the length of the training symbols needs to be longer than a certain threshold to make the model generalize well on unseen symbols. To overcome these challenges, we propose a metalearningbased selfsupervised symbol detector named MetaSSD. Our contribution is twofold: a metalearning helps the model adapt to a new channel environment based on experience with various metatraining environments, and b selfsupervised learning helps the model to use relatively less supervision than the previously suggested learningbased detectors. In experiments, MetaSSD outperforms OFDMMMSE with noisy channel information and shows comparable results with BCJR. Further ablation studies show the necessity of each component in our framework.
Fluid model of a black holestring transition ; A fluid model of selfgravitating strings is proposed. It is expected that black holes turn into strings around the end of black hole evaporation. The transition will occur near the Hagedorn temperature. After the transition, strings would form a bound state by the selfgravitation. Horowitz and Polchinski formulated a model of selfgravitating strings by using winding strings wrapping on the Euclidean time circle arXiv:hep-th/9707170. In this paper, we first show that winding strings in the HorowitzPolchinski model approximately behave as a perfect fluid. Then, we solve the Einstein equation for the fluid of winding strings. Our solution reproduces behaviors of the selfgravitating string solution in the HorowitzPolchinski model near the Hagedorn temperature, while it approaches the Schwarzschild black hole at low temperatures. Thus, our fluid model of selfgravitating strings gives a description of the transition between black holes and strings.
Know Your Boundaries The Necessity of Explicit Behavioral Cloning in Offline RL ; We introduce an offline reinforcement learning RL algorithm that explicitly clones a behavior policy to constrain value learning. In offline RL, it is often important to prevent a policy from selecting unobserved actions, since the consequence of these actions cannot be presumed without additional information about the environment. One straightforward way to implement such a constraint is to explicitly model a given data distribution via behavior cloning and directly force a policy not to select uncertain actions. However, many offline RL methods instantiate the constraint indirectly, for example via pessimistic value estimation, due to a concern about errors when modeling a potentially complex behavior policy. In this work, we argue that it is not only viable but beneficial to explicitly model the behavior policy for offline RL because the constraint can be realized in a stable way with the trained model. We first suggest a theoretical framework that allows us to incorporate behaviorcloned models into valuebased offline RL methods, enjoying the strength of both explicit behavior cloning and value learning. Then, we propose a practical method utilizing a scorebased generative model for behavior cloning. With the proposed method, we show stateoftheart performance on several datasets within the D4RL and Robomimic benchmarks and achieve competitive performance across all datasets tested.
Visual Clues Bridging Vision and Language Foundations for Image Paragraph Captioning ; People say, "A picture is worth a thousand words." Then how can we get the rich information out of the image? We argue that by using visual clues to bridge large pretrained vision foundation models and language models, we can do so without any extra crossmodal training. Thanks to the strong zeroshot capability of foundation models, we start by constructing a rich semantic representation of the image (e.g., image tags, object attributes and locations, captions) as a structured textual prompt, called visual clues, using a vision foundation model. Based on the visual clues, we use a large language model to produce a series of comprehensive descriptions for the visual content, which is then verified by the vision model again to select the candidate that aligns best with the image. We evaluate the quality of generated descriptions by quantitative and qualitative measurement. The results demonstrate the effectiveness of such a structured semantic representation.
DiSparse Disentangled Sparsification for Multitask Model Compression ; Despite the popularity of Model Compression and Multitask Learning, how to effectively compress a multitask model has been less thoroughly analyzed due to the challenging entanglement of tasks in the parameter space. In this paper, we propose DiSparse, a simple, effective, and firstofitskind multitask pruning and sparse training scheme. We consider each task independently by disentangling the importance measurement and take the unanimous decisions among all tasks when performing parameter pruning and selection. Our experimental results demonstrate superior performance on various configurations and settings compared to popular sparse training and pruning methods. Besides the effectiveness in compression, DiSparse also provides a powerful tool to the multitask learning community. Surprisingly, we even observed better performance than some dedicated multitask learning methods in several cases despite the high model sparsity enforced by DiSparse. We analyzed the pruning masks generated with DiSparse and observed strikingly similar sparse network architectures identified by each task even before the training starts. We also observe the existence of a watershed layer where the task relatedness sharply drops, implying no benefits in continued parameter sharing. Our code and models will be available at https://github.com/SHI-Labs/DiSparse-Multitask-Model-Compression.
A simple quantum model linked to a theory of decisions ; This article may be seen as a summary and a final discussion of the work that the author has done in recent years on the foundation of quantum theory. It is shown that quantum mechanics as a model follows under certain specific conditions from a quite different, much simpler model. This model is connected to the mind of an observer, or to the joint minds of a group of communicating observers. The model is based upon conceptual variables, and an important aspect is that an observer or a group of observers must decide which variable to measure. The model is then linked more generally to a theory of decisions. The results are discussed from several angles. In particular, macroscopic consequences are treated briefly.
Emergence of quantum spin frustration in the spin-1/2 Ising-Heisenberg model on a decorated honeycomb lattice ; We study the spin-1/2 Ising-XXZ model on a decorated honeycomb lattice composed of five spins per unit cell, one Ising spin and four Heisenberg spins. This model involving the Heisenberg exchange interaction is one of the few models that are exactly solvable through the generalized star-triangle transformation. The significance of this model is its close relationship to the fully decorated quantum Heisenberg honeycomb lattice, since 4/5 of the particles are Heisenberg spins. We investigate the phase diagram at zero temperature and identify a relevant quantum spin frustrated phase resulting from the contribution of the quantum Heisenberg exchange interaction. We obtain an exact residual entropy for the quantum spin frustrated phase, which coincides with the residual entropy of the antiferromagnetic spin-1/2 Ising model on a triangular lattice. We also thoroughly explore its thermodynamic properties in the frustrated region, such as the entropy, specific heat, spontaneous magnetization, and critical temperature under several conditions.
A Versatile PseudoRigid Body Modeling Method ; A novel semianalytical method is proposed to develop the pseudorigidbody PRB model of robots made of highly flexible members HFM, such as flexures and continuum robots, with no limit on the degrees of freedom of the PRB model. The proposed method has a simple formulation yet high precision. Furthermore, it can describe HFMs with variable curvature and stiffness along their length. The method offers a semianalytical solution for the highly coupled nonlinear constrained optimization problem of PRB modeling and can be extended to variablelength robots composed of HFMs, such as catheter and concentric tube robots. We also show that this method can obtain a PRB model of uniformly stiff HFMs with only three parameters. The versatility of the method is investigated in various applications of HFMs in continuum robots. Simulations demonstrate substantial improvement in the precision of the PRB model in general and a reduction in the complexity of the formulation.
HICEM A HighCoverage Emotion Model for Artificial Emotional Intelligence ; As social robots and other intelligent machines enter the home, artificial emotional intelligence AEI is taking center stage to address users' desire for deeper, more meaningful humanmachine interaction. To accomplish such efficacious interaction, the nextgeneration AEI needs comprehensive human emotion models for training. Unlike theories of emotion, which have been the historical focus in psychology, emotion models are descriptive tools. In practice, the strongest models need robust coverage, which means defining the smallest core set of emotions from which all others can be derived. To achieve the desired coverage, we turn to word embeddings from natural language processing. Using unsupervised clustering techniques, our experiments show that with as few as 15 discrete emotion categories, we can provide maximum coverage across six major languages: Arabic, Chinese, English, French, Spanish, and Russian. In support of our findings, we also examine annotations from two largescale emotion recognition datasets to assess the validity of existing emotion models compared to human perception at scale. Because robust, comprehensive emotion models are foundational for developing realworld affective computing applications, this work has broad implications in social robotics, humanmachine interaction, mental healthcare, and computational psychology.
Business Process Model for Interoperability Improvement in the Agricultural Domain Using Digital Twins ; A farm generates a lot of data from various systems, which is then stored in a distributed manner, usually in nonstandardized formats, which bears the risk of data inconsistencies. This work addresses this issue by using business process management BPM to demonstrate that the use of digital twins DTs can improve interoperability between services in the agriculture domain. Steps from the BPM lifecycle were applied to a farming use case in Germany. First, the asis business process model was discovered and modeled without DTs, analyzed, and then redesigned into the tobe model according to the DT integration. The tobe model showed a reduction in the number of tasks needed to be performed by the farmer as well as an improvement of process data quality, interoperability, and efficiency. Finally, a comparison of the average processing times of both models with the help of process simulation revealed improvements in the tobe process.
Nonparametric Multishape Modeling with Uncertainty Quantification ; The modeling and uncertainty quantification of closed curves is an important problem in the field of shape analysis, and can have significant ramifications for subsequent statistical tasks. Many of these tasks involve collections of closed curves, which often exhibit structural similarities at multiple levels. Modeling multiple closed curves in a way that efficiently incorporates such betweencurve dependence remains a challenging problem. In this work, we propose and investigate a multipleoutput (a.k.a. multioutput), multidimensional Gaussian process modeling framework. We illustrate the proposed methodological advances, and demonstrate the utility of meaningful uncertainty quantification, on several curve and shaperelated tasks. This modelbased approach not only addresses the problem of inference on closed curves and their shapes with kernel constructions, but also opens doors to nonparametric modeling of multilevel dependence for functional objects in general.
Diagnostic Tool for OutofSample Model Evaluation ; Assessment of model fitness is a key part of machine learning. The standard paradigm is to learn models by minimizing a chosen loss function averaged over training data, with the aim of achieving small losses on future data. In this paper, we consider the use of a finite calibration data set to characterize the future, outofsample losses of a model. We propose a simple model diagnostic tool that provides finitesample guarantees under weak assumptions. The tool is simple to compute and to interpret. Several numerical experiments are presented to show how the proposed method quantifies the impact of distribution shifts, aids the analysis of regression, and enables model selection as well as hyperparameter tuning.
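One simple way to obtain a finite-sample guarantee of this flavor, sketched below under an exchangeability assumption, is an order-statistic bound on future losses computed from the calibration set; this is a conformal-style illustration in the spirit of the paper's tool, not its exact construction.

```python
import numpy as np

def loss_quantile_bound(cal_losses, alpha=0.1):
    """Finite-sample bound on out-of-sample loss from a calibration set:
    if calibration and future losses are exchangeable, the k-th smallest
    calibration loss with k = ceil((n + 1) * (1 - alpha)) satisfies
    P(new loss <= bound) >= 1 - alpha."""
    n = len(cal_losses)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    if k > n:
        return np.inf   # calibration set too small for this alpha
    return np.sort(cal_losses)[k - 1]

rng = np.random.default_rng(0)
cal = rng.exponential(scale=1.0, size=500)   # calibration losses
bound = loss_quantile_bound(cal, alpha=0.1)
future = rng.exponential(scale=1.0, size=100_000)
print(f"bound: {bound:.3f}  empirical coverage: {(future <= bound).mean():.3f}")
```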
Investigating the Benefits of FreeForm Rationales ; Freeform rationales aim to aid model interpretability by supplying the background knowledge that can help understand model decisions. Crowdsourced rationales are provided for commonsense QA instances in popular datasets such as CoSE and ECQA, but their utility remains underinvestigated. We present human studies which show that ECQA rationales indeed provide additional background information to understand a decision, while over 88% of CoSE rationales do not. Inspired by this finding, we ask: can the additional context provided by freeform rationales benefit models, similar to human users? We investigate the utility of rationales as an additional source of supervision, by varying the quantity and quality of rationales during training. After controlling for instances where rationales leak the correct answer while not providing additional background knowledge, we find that incorporating only 5% of rationales during training can boost model performance by 47.22% for CoSE and 57.14% for ECQA during inference. Moreover, we also show that rationale quality matters: compared to crowdsourced rationales, T5generated rationales provide not only weaker supervision to models, but are also not helpful for humans in aiding model interpretability.
Providing a model for the issue of multiperiod ambulance location ; In this study, two mathematical models have been developed for assigning emergency vehicles, namely ambulances, to geographical areas. In the first model, which is based on the assignment problem, ambulance transfer (moving ambulances between locations) is not considered. As ambulance transfer can improve system efficiency by decreasing the response time as well as the operational cost, we consider this in the second model, which is based on the transportation problem. Both models assume that the demand of all geographical locations must be met. The major contributions of this study are ambulance transfer between locations, splitting the day into several time slots, and the demand distribution of the geographical zones. To the best of our knowledge, the first two have not been studied before. These extensions allow us to have a more realistic model of the realworld operation. Although in previous studies maximizing coverage has been the main goal, here minimizing operating costs is the main objective function, because we have assumed that the demand of all geographical areas must be met.
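A minimal version of the second, transportation-based model can be written as a linear program: meet every zone's demand exactly while respecting base supplies at minimum cost. The costs, supplies, and demands below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy transportation-problem formulation: ship ambulances from bases to
# zones so that every zone's demand is met at minimum cost.
cost = np.array([[4.0, 6.0, 9.0],     # cost[i, j]: base i -> zone j
                 [5.0, 3.0, 7.0]])
supply = np.array([6.0, 8.0])         # ambulances available per base
demand = np.array([4.0, 5.0, 5.0])    # ambulances required per zone

m, n = cost.shape
# Equality constraints: each zone's demand is met exactly.
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0               # x is flattened row-major
# Inequality constraints: each base ships at most its supply.
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply,
              A_eq=A_eq, b_eq=demand, bounds=(0, None))
print("optimal cost:", res.fun)
print("shipment plan:\n", res.x.reshape(m, n))
```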
The problem of calculating the Bogoliubov coefficient in NonOscillating models ; The calculation of the Bogoliubov coefficients is a key piece to obtain the reheating temperature of the Universe. In all cases this calculation is performed either in toy models where some derivative of the potential is discontinuous at some points, or by making some approximations to the model. The result of these calculations is applied to more realistic models without checking if it really applies to them, because the exact calculation of the Bogoliubov coefficients for a viable model, which is usually depicted by a smooth potential, requires a very complicated numerical calculation. Here we want to point out the difficulties that one encounters when trying to compute these coefficients numerically for a smooth potential without making approximations, which could lead to completely different results.
Topological Dirac Sigma Models and the Classical Master Equation ; We present the construction of the classical BatalinVilkovisky action for topological Dirac sigma models. The latter are twodimensional topological field theories that simultaneously generalise the completely gauged WessZuminoNovikovWitten model and the Poisson sigma model. Their underlying structure is that of Dirac manifolds associated to maximal isotropic and integrable subbundles of an exact Courant algebroid twisted by a 3form. In contrast to the Poisson sigma model, the AKSZ construction is not applicable for the general Dirac sigma model. We therefore follow a direct approach for determining a suitable BV extension of the classical action functional with ghosts and antifields satisfying the classical master equation. Special attention is paid to target space covariance, which requires the introduction of two connections with torsion on the Dirac structure.
Performance analysis of highresolution ice sheet simulations ; Numerical ice sheet models compute evolving ice geometry and velocity fields using various stressbalance approximations and boundary conditions. At high spatial resolution, with horizontal meshgrid resolutions of a few kilometers or smaller, these models usually require time steps shorter than climatecoupling time scales because they update ice thickness after each velocity solution. Highresolution performance is degraded by the stability restrictions of such explicit timestepping. This short note, which considers the shallow ice approximation and Stokes models as stressbalance end members, attempts to clarify numerical model performance by quantifying simulation cost per model year in terms of mesh resolution and the number of degrees of freedom. The performance of currentgeneration explicit timestepping models is assessed, and then compared to the prospective performance of implicit schemes. The main result, Table 2, highlights the key roles played by the algorithmic scaling of stressbalance and implicitstep solvers.
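The flavor of such cost accounting can be illustrated with back-of-envelope arithmetic: under a diffusive stability limit the admissible explicit step shrinks like h^2 while the number of degrees of freedom grows like h^-2, so cost per model year grows rapidly as the mesh is refined. The constants below are made up; only the scalings matter, and the note's own accounting is more careful.

```python
# Rough cost-per-model-year scaling for explicit time stepping, assuming
# (as a sketch, not the note's exact analysis) a 2D mesh of spacing h km,
# DOFs ~ h**-2, a diffusive stability limit dt ~ h**2, and O(DOFs) work
# per explicit step, over a 1000 km domain.
for h in (10.0, 5.0, 2.0, 1.0):
    dofs = (1000.0 / h) ** 2            # grid cells in the domain
    dt_years = 0.01 * (h / 10.0) ** 2   # stability-limited step (illustrative)
    steps_per_year = 1.0 / dt_years
    print(f"h = {h:5.1f} km   DOFs = {dofs:10.0f}   "
          f"cost/model-year ~ {dofs * steps_per_year:.2e}")
```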
A stochastic model solvable without integrability ; We introduce a model with diffusive and evaporationcondensation processes, depending on 3 parameters obeying some inequalities. The model can be solved in the sense that all correlation functions can be computed exactly without the use of integrability. We show that the mean field approximation is not exact in general. This can be shown by looking at the analytical expression of the twopoint correlation functions, which we provide. We confirm our analysis by numerics based on direct diagonalisation of the Markov matrix for small values of the number of sites, and also by Monte Carlo simulations for a higher number of sites. Although the model is symmetric in its diffusive rates, it exhibits a left/right asymmetry driven by the evaporationcondensation processes. We also argue that the model can be taken as a onedimensional model for catalysis or fracturing processes.
Is neural language acquisition similar to natural? A chronological probing study ; The probing methodology allows one to obtain a partial representation of linguistic phenomena stored in the inner layers of the neural network, using external classifiers and statistical analysis. Pretrained transformerbased language models are widely used both for natural language understanding NLU and natural language generation NLG tasks, making them the most common choice for downstream applications. However, little analysis has been carried out on whether the models were pretrained enough or contained knowledge correlated with linguistic theory. We present a chronological probing study of transformer English models such as MultiBERT and T5. We sequentially compare the information about the language learned by the models in the process of training on corpora. The results show that 1 linguistic information is acquired in the early stages of training; 2 both language models demonstrate capabilities to capture various features from various levels of language, including morphology, syntax, and even discourse, while they can also inconsistently fail on tasks that are perceived as easy. We also introduce an opensource framework for chronological probing research, compatible with other transformerbased models: https://github.com/EkaterinaVoloshina/chronologicalprobing
Orthogonal decomposition of anisotropic constitutive models for the phase field approach to fracture ; We propose a decomposition of constitutive relations into crackdriving and persistent portions, specifically designed for materials with anisotropic/orthotropic behavior in the phase field approach to fracture, to account for the tensioncompression asymmetry. This decomposition follows a variational framework, satisfying the orthogonality condition for anisotropic materials. This implies that the present model can be applied to arbitrary anisotropic elastic behavior in a threedimensional setting. On this basis, we generalize two existing models for tensioncompression asymmetry in isotropic materials, namely the volumetricdeviatoric model and the notension model, towards materials with anisotropic nature. Two benchmark problems, single notched tensile and shear tests, are used to study the performance of the present model. The results retain the anisotropic constitutive behavior and the tensioncompression asymmetry in the crack response, and are qualitatively in accordance with the expected behavior for orthotropic materials. Furthermore, to study the direction of maximum energy dissipation, we modify the surface integral based energy release computation, Gtheta, to account only for the crackdriving energy. The computed energies with our proposed modifications predict the fracture propagation direction correctly, compared with the standard Gtheta method.
Assessing interrater reliability with heterogeneous variance components models Flexible approach accounting for contextual variables ; Interrater reliability IRR, which is a prerequisite of highquality ratings and assessments, may be affected by contextual variables such as the rater's or ratee's gender, major, or experience. Identification of such heterogeneity sources in IRR is important for implementation of policies with the potential to decrease measurement error and to increase IRR by focusing on the most relevant subgroups. In this study, we propose a flexible approach for assessing IRR in cases of heterogeneity due to covariates by directly modeling differences in variance components. We use Bayes factors to select the best performing model, and we suggest using Bayesian modelaveraging as an alternative approach for obtaining IRR and variance component estimates, allowing us to account for model uncertainty. We use inclusion Bayes factors considering the whole model space to provide evidence for or against differences in variance components due to covariates. The proposed method is compared with other Bayesian and frequentist approaches in a simulation study, and we demonstrate its superiority in some situations. Finally, we provide real data examples from grant proposal peerreview, demonstrating the usefulness of this method and its flexibility in the generalization of more complex designs.
Segmenting Moving Objects via an ObjectCentric Layered Representation ; The objective of this paper is a model that is able to discover, track and segment multiple moving objects in a video. We make four contributions First, we introduce an objectcentric segmentation model with a depthordered layer representation. This is implemented using a variant of the transformer architecture that ingests optical flow, where each query vector specifies an object and its layer for the entire video. The model can effectively discover multiple moving objects and handle mutual occlusions; Second, we introduce a scalable pipeline for generating multiobject synthetic training data via layer compositions, that is used to train the proposed model, significantly reducing the requirements for labourintensive annotations, and supporting Sim2Real generalisation; Third, we conduct thorough ablation studies, showing that the model is able to learn object permanence and temporal shape consistency, and is able to predict amodal segmentation masks; Fourth, we evaluate our model, trained only on synthetic data, on standard video segmentation benchmarks, DAVIS, MoCA, SegTrack, FBMS59, and achieve stateoftheart performance among existing methods that do not rely on any manual annotations. With testtime adaptation, we observe further performance boosts.
Finding a Hidden Edge ; We consider the problem of finding an edge in a hidden undirected graph G = (V, E) with n vertices, in a model where we are only allowed queries that ask whether or not a subset of vertices contains an edge. We study the nonadaptive model and show that, while in the deterministic model the optimal algorithm requires binom(n,2) queries (i.e., querying for any possible edge separately), in the randomized model Theta(n) queries (up to logarithmic factors) are sufficient and needed in order to find an edge. In addition, we study the query complexity for specific families of graphs, including Stars, Cliques, and Matchings, for both the randomized and deterministic models. Lastly, for general graphs, we show a tradeoff between the query complexity and the number of rounds, r, made by an adaptive algorithm. We present two algorithms with O(r n^{2/r}) and O-tilde(r n^{1/r}) sample complexity for the deterministic and randomized models, respectively.
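For contrast with the nonadaptive setting studied in the paper, the classical adaptive strategy finds an edge with O(log n) queries per endpoint by binary search, as in the sketch below; the oracle and the hidden graph are toys.

```python
def find_edge(n, has_edge):
    """Find one edge of a hidden graph on vertices 0..n-1 using an oracle
    has_edge(S) that reports whether the vertex subset S spans an edge.
    The standard *adaptive* binary-search routine, shown for contrast
    with the nonadaptive bounds discussed in the paper."""
    assert has_edge(set(range(n))), "graph has no edge"

    def min_prefix(base, candidates):
        # Smallest prefix of `candidates` that, together with `base`,
        # spans an edge; its last vertex belongs to that edge.
        lo, hi = 1, len(candidates)
        while lo < hi:
            mid = (lo + hi) // 2
            if has_edge(base | set(candidates[:mid])):
                hi = mid
            else:
                lo = mid + 1
        return lo

    verts = list(range(n))
    k = min_prefix(set(), verts)   # verts[k-1] is an endpoint
    u = verts[k - 1]
    rest = verts[:k - 1]           # spans no edge, so u's partner is here
    k2 = min_prefix({u}, rest)
    return u, rest[k2 - 1]

# Hidden graph with a single edge {7, 42} on 100 vertices.
edge = {7, 42}
oracle = lambda S: edge <= S
print(find_edge(100, oracle))      # -> (42, 7)
```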
Demographic noise in complex ecological communities ; We introduce an individualbased model of a complex ecological community with random interactions. The model contains a large number of species, each with a finite population of individuals, subject to discrete reproduction and death events. The interaction coefficients determining the rates of these events are chosen from an ensemble of random matrices and are kept fixed in time. The setup is such that the model reduces to the known generalised LotkaVolterra equations with random interaction coefficients in the limit of an infinite population for each species. Demographic noise in the individualbased model means that species which would survive in the LotkaVolterra model can become extinct. These noisedriven extinctions are the focus of the paper. We find that, for increasing complexity of interactions, ecological communities generally become less prone to extinctions induced by demographic noise. An exception is systems composed entirely of predatorprey pairs. These systems are known to be stable in deterministic LotkaVolterra models with random interactions, but, as we show, they are nevertheless particularly vulnerable to fluctuations.
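A minimal individual-based simulation of this kind can be written with the Gillespie algorithm: discrete birth and death events whose rates reduce to generalised Lotka-Volterra dynamics as populations grow. The sketch below is a simplified stand-in for the paper's model (the competition matrix, rates, and population sizes are all illustrative), but it exhibits the same noise-driven extinctions.

```python
import numpy as np

def gillespie_lv(b, d, A, N0, K, t_max, seed=0):
    """Exact stochastic simulation of a multispecies birth-death process
    with per-capita birth rate b_i and death rate d_i + (A N)_i / K,
    whose large-K limit is a generalised Lotka-Volterra model."""
    rng = np.random.default_rng(seed)
    N, t = N0.astype(float), 0.0
    while t < t_max and N.sum() > 0:
        birth = b * N
        death = np.maximum((d + A @ N / K) * N, 0.0)
        rates = np.concatenate([birth, death])
        total = rates.sum()
        t += rng.exponential(1.0 / total)            # time to next event
        k = rng.choice(rates.size, p=rates / total)  # which event fires
        i = k % len(N)
        N[i] += 1 if k < len(N) else -1
    return N

S = 5
rng = np.random.default_rng(1)
A = np.abs(rng.normal(0.2, 0.05, (S, S)))   # random competition matrix
N_final = gillespie_lv(b=np.full(S, 2.0), d=np.full(S, 1.0), A=A,
                       N0=np.full(S, 50), K=50.0, t_max=20.0)
print("survivors:", int((N_final > 0).sum()), "of", S)
```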