PaLM-E: An Embodied Multimodal Language Model ; Large language models excel at a wide range of complex tasks. However, enabling general inference in the real world, e.g., for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts. Inputs to our embodied language model are multimodal sentences that interleave visual, continuous state estimation, and textual input encodings. We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks including sequential robotic manipulation planning, visual question answering, and captioning. Our evaluations show that PaLM-E, a single large embodied multimodal model, can address a variety of embodied reasoning tasks, from a variety of observation modalities, on multiple embodiments, and further, exhibits positive transfer: the model benefits from diverse joint training across internet-scale language, vision, and visual-language domains. Our largest model, PaLM-E-562B with 562B parameters, in addition to being trained on robotics tasks, is a visual-language generalist with state-of-the-art performance on OK-VQA, and retains generalist language capabilities with increasing scale.
ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation ; Post-training quantization (PTQ) has emerged as a promising technique for mitigating memory consumption and computational costs in large language models (LLMs). However, a systematic examination of various quantization schemes, model families, and quantization bit precision has been absent from the literature. In this paper, we conduct a comprehensive analysis of these factors by investigating the effects of PTQ on weight-only, activation-only, and weight-and-activation quantization using diverse methods such as round-to-nearest (RTN), GPTQ, ZeroQuant, and their variants. We apply these methods to two distinct model families with parameters ranging from 125M to 176B. Our contributions include (1) a sensitivity analysis revealing that activation quantization is generally more susceptible than weight quantization, with smaller models often outperforming larger models in terms of activation quantization; (2) an evaluation and comparison of existing PTQ methods to optimize model size reduction while minimizing the impact on accuracy, revealing that none of the current methods can achieve the original model quality for quantization with either INT4 weight or INT4 weight and INT8 activation; (3) based on these insights, we propose an optimized method called Low Rank Compensation (LoRC), which employs low-rank matrices to enhance model quality recovery with a minimal increase in model size.
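Below is a minimal NumPy sketch of the low-rank error-compensation idea described in the ZeroQuant-V2 abstract above: quantize a weight matrix, then approximate the residual quantization error with a truncated SVD so only a small low-rank correction is stored. The simple symmetric round-to-nearest quantizer, the rank, and all names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def quantize_rtn(w: np.ndarray, n_bits: int = 4):
    """Symmetric per-tensor round-to-nearest quantization (illustrative)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q, scale

def low_rank_compensation(w: np.ndarray, w_hat: np.ndarray, rank: int = 8):
    """Approximate the quantization error W - W_hat with a rank-r factorization."""
    err = w - w_hat
    u, s, vt = np.linalg.svd(err, full_matrices=False)
    # Keep only the top-`rank` singular components; U_r and V_r are stored in
    # higher precision but add only a small number of extra parameters.
    u_r = u[:, :rank] * s[:rank]
    v_r = vt[:rank, :]
    return u_r, v_r

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)

q, scale = quantize_rtn(w, n_bits=4)
w_hat = q * scale                       # dequantized INT4 weight
u_r, v_r = low_rank_compensation(w, w_hat, rank=8)
w_comp = w_hat + u_r @ v_r              # compensated weight used at inference

print("error without compensation:", np.linalg.norm(w - w_hat))
print("error with low-rank term:  ", np.linalg.norm(w - w_comp))
```

The point of the design is that the correction term adds roughly `2 * 512 * rank` parameters, a small fraction of the original matrix, while recovering part of the accuracy lost to aggressive quantization.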
AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models ; It is a time-consuming and tedious task to manually colorize anime line drawing images, which is an essential stage in the cartoon animation creation pipeline. Reference-based line drawing colorization is a challenging task that relies on precise cross-domain long-range dependency modelling between the line drawing and the reference image. Existing learning methods still utilize generative adversarial networks (GANs) as one key module of their model architecture. In this paper, we propose a novel method called AnimeDiffusion that uses diffusion models to perform anime face line drawing colorization automatically. To the best of our knowledge, this is the first diffusion model tailored for anime content creation. In order to address the huge training cost of diffusion models, we design a hybrid training strategy: we first pre-train a diffusion model with classifier-free guidance and then fine-tune it with image reconstruction guidance. We find that with a few iterations of fine-tuning, the model shows excellent colorization performance, as illustrated in Fig. 1. For training AnimeDiffusion, we construct an anime face line drawing colorization benchmark dataset, which contains 31696 training samples and 579 testing samples. We hope this dataset can fill the gap left by the lack of an available high-resolution anime face dataset for evaluating colorization methods. Through multiple quantitative metrics evaluated on our dataset and a user study, we demonstrate that AnimeDiffusion outperforms state-of-the-art GAN-based models for anime face line drawing colorization. We also collaborate with professional artists to test and apply AnimeDiffusion in their creative work. We release our code at https://github.com/xqmeng/AnimeDiffusion.
DIPPM: a Deep Learning Inference Performance Predictive Model using Graph Neural Networks ; Deep Learning (DL) has developed to become a cornerstone in many everyday applications that we now rely on. However, making sure that a DL model uses the underlying hardware efficiently takes a lot of effort. Knowledge about inference characteristics can help to find the right match so that enough resources are given to the model, but not too much. We have developed a DL Inference Performance Predictive Model (DIPPM) that predicts the inference latency, energy, and memory usage of a given input DL model on the NVIDIA A100 GPU. We also devised an algorithm to suggest the appropriate A100 Multi-Instance GPU profile from the output of DIPPM. We developed a methodology to convert DL models expressed in multiple frameworks to a generalized graph structure that is used in DIPPM, which means DIPPM can parse input DL models from various frameworks. DIPPM not only helps to find suitable hardware configurations but also helps to perform rapid design-space exploration of a model's inference performance. We constructed a graph multi-regression dataset consisting of 10,508 different DL models to train and evaluate DIPPM, and reached a Mean Absolute Percentage Error (MAPE) as low as 1.9%.
Controllable Inversion of Black-Box Face Recognition Models via Diffusion ; Face recognition models embed a face image into a low-dimensional identity vector containing abstract encodings of identity-specific facial features that allow individuals to be distinguished from one another. We tackle the challenging task of inverting the latent space of pre-trained face recognition models without full model access (i.e., the black-box setting). A variety of methods have been proposed in the literature for this task, but they have serious shortcomings such as a lack of realistic outputs and strong requirements on the data set and on accessibility of the face recognition model. By analyzing the black-box inversion problem, we show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution even without an identity-specific loss. Our method, named identity denoising diffusion probabilistic model (ID3PM), leverages the stochastic nature of the denoising diffusion process to produce high-quality, identity-preserving face images with various backgrounds, lighting, poses, and expressions. We demonstrate state-of-the-art performance in terms of identity preservation and diversity both qualitatively and quantitatively, and our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
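The following PyTorch sketch illustrates the general mechanism the ID3PM abstract builds on: a standard DDPM reverse-sampling loop in which every denoising step is conditioned on the identity embedding obtained from the (black-box) face recognition model. The toy denoiser, the short noise schedule, and all tensor shapes are illustrative assumptions, not the authors' architecture.

```python
import torch

T = 50                                            # shortened schedule for the sketch
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class ToyDenoiser(torch.nn.Module):
    """Predicts the noise eps from (x_t, t, identity embedding)."""
    def __init__(self, img_dim=3 * 64 * 64, id_dim=512):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(img_dim + id_dim + 1, 1024),
            torch.nn.SiLU(),
            torch.nn.Linear(1024, img_dim),
        )

    def forward(self, x_t, t, id_emb):
        t_feat = t.float().view(-1, 1) / T
        return self.net(torch.cat([x_t.flatten(1), id_emb, t_feat], dim=1)).view_as(x_t)

@torch.no_grad()
def sample(denoiser, id_emb, shape=(1, 3, 64, 64)):
    x = torch.randn(shape)                        # start from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch, id_emb)        # identity-conditioned noise prediction
        a, ab = alphas[t], alpha_bars[t]
        x = (x - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        if t > 0:
            # The injected noise is what yields diverse poses, lighting, etc.
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

id_emb = torch.randn(1, 512)                      # stand-in for a black-box identity vector
face = sample(ToyDenoiser(), id_emb)
print(face.shape)
```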
Attention: Dynamic Epistemic Logic Models of Inattentive Agents ; Attention is the crucial cognitive ability that limits and selects what information we observe. Previous work by Bolander et al. (2016) proposes a model of attention based on dynamic epistemic logic (DEL) where agents are either fully attentive or not attentive at all. While introducing the realistic feature that inattentive agents believe nothing happens, the model does not represent the most essential aspect of attention: its selectivity. Here, we propose a generalization that allows for paying attention to subsets of atomic formulas. We introduce the corresponding logic for propositional attention, and show its axiomatization to be sound and complete. We then extend the framework to account for inattentive agents that, instead of assuming that nothing happens, may default to a specific truth value for what they failed to attend to, a sort of prior concerning the unattended atoms. This feature allows for a more cognitively plausible representation of the inattentional blindness phenomenon, where agents end up with false beliefs due to their failure to attend to conspicuous but unexpected events. Both versions of the model define attention-based learning through appropriate DEL event models based on a few clear edge principles. While the size of such event models grows exponentially both with the number of agents and the number of atoms, we introduce a new logical language for describing event models syntactically and show that, using this language, our event models can be represented linearly in the number of agents and atoms. Furthermore, representing our event models in this language amounts to a straightforward formalisation of the aforementioned edge principles.
TextMI: Textualize Multimodal Information for Integrating Nonverbal Cues in Pre-trained Language Models ; Pre-trained large language models have recently achieved groundbreaking performance in a wide variety of language understanding tasks. However, the same model cannot be applied to multimodal behavior understanding tasks (e.g., video sentiment/humor detection) unless nonverbal features (e.g., acoustic and visual) can be integrated with language. Jointly modeling multiple modalities significantly increases the model complexity and makes the training process data-hungry. While an enormous amount of text data is available via the web, collecting large-scale multimodal behavioral video datasets is extremely expensive, both in terms of time and money. In this paper, we investigate whether large language models alone can successfully incorporate nonverbal information when it is presented in textual form. We present a way to convert the acoustic and visual information into corresponding textual descriptions and concatenate them with the spoken text. We feed this augmented input to a pre-trained BERT model and fine-tune it on three downstream multimodal tasks: sentiment, humor, and sarcasm detection. Our approach, TextMI, significantly reduces model complexity, adds interpretability to the model's decisions, and can be applied to a diverse set of tasks while achieving superior performance (multimodal sarcasm detection) or near-SOTA performance (multimodal sentiment analysis and multimodal humor detection). We propose TextMI as a general, competitive baseline for multimodal behavioral analysis tasks, particularly in a low-resource setting.
CM-CASL: Comparison-based Performance Modeling of Software Systems via Collaborative Active and Semi-supervised Learning ; Configuration tuning for large software systems is generally challenging due to the complex configuration space and expensive performance evaluation. Most existing approaches follow a two-phase process: first learning a regression-based performance prediction model on available samples and then searching for the configurations with satisfactory performance using the learned model. Such regression-based models often suffer from the scarcity of samples due to the enormous time and resources required to run a large software system with a specific configuration. Moreover, previous studies have shown that even a highly accurate regression-based model may fail to discern the relative merit between two configurations, whereas performance comparison is actually one fundamental strategy for configuration tuning. To address these issues, this paper proposes CM-CASL, a Comparison-based performance Modeling approach for software systems via Collaborative Active and Semi-supervised Learning. CM-CASL learns a classification model that compares the performance of two given configurations, and enhances the samples through a collaborative labeling process by both human experts and classifiers using an integration of active and semi-supervised learning. Experimental results demonstrate that CM-CASL outperforms two state-of-the-art performance modeling approaches in terms of both classification accuracy and rank accuracy, and thus provides a better performance model for the subsequent work of configuration tuning.
Rethinking interpretation: Input-agnostic saliency mapping of deep visual classifiers ; Saliency methods provide post-hoc model interpretation by attributing input features to the model outputs. Current methods mainly achieve this using a single input sample, thereby failing to answer input-independent inquiries about the model. We also show that input-specific saliency mapping is intrinsically susceptible to misleading feature attribution. Current attempts to use 'general' input features for model interpretation assume access to a dataset containing those features, which biases the interpretation. Addressing this gap, we introduce a new perspective of input-agnostic saliency mapping that computationally estimates the high-level features attributed by the model to its outputs. These features are geometrically correlated, and are computed by accumulating the model's gradient information with respect to an unrestricted data distribution. To compute these features, we nudge independent data points over the model loss surface towards the local minima associated with a human-understandable concept, e.g., a class label for classifiers. With a systematic projection, scaling, and refinement process, this information is transformed into an interpretable visualization without compromising its model fidelity. The visualization serves as a stand-alone qualitative interpretation. Through an extensive evaluation, we not only demonstrate successful visualizations for a variety of concepts in large-scale models, but also showcase an interesting utility of this new form of saliency mapping by identifying backdoor signatures in compromised classifiers.
Multi-Modal Perceiver Language Model for Outcome Prediction in Emergency Department ; Language modeling has shown impressive progress in generating compelling text with good accuracy and high semantic coherence. An interesting research direction is to augment these powerful models for specific applications using contextual information. In this work, we explore multimodal language modeling for healthcare applications. We are interested in outcome prediction and patient triage in the hospital emergency department based on text information in chief complaints and vital signs recorded at triage. We adapt Perceiver, a modality-agnostic transformer-based model that has shown promising results in several applications. Since the vital-sign modality is represented in tabular format, we modify the Perceiver position encoding to ensure permutation invariance. We evaluate the multimodal language model for the task of diagnosis code prediction using the MIMIC-IV ED dataset on 120K visits. In the experimental analysis, we show that multimodality improves the prediction performance compared with models trained solely on text or vital signs. We identify disease categories for which multimodality leads to performance improvement and show that for these categories, vital signs have added predictive power. By analyzing the cross-attention layer, we show how multimodality contributes to model predictions. This work gives interesting insights on the development of multimodal language models for healthcare applications.
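The sketch below illustrates the permutation-invariance idea mentioned in the abstract above: for a tabular vital-sign modality, fixed positional encodings (which imply a column order) are replaced by learned per-feature identity embeddings, so the resulting token set is invariant to how the columns are arranged. The feature names, dimensions, and module are illustrative assumptions, not the paper's exact encoder.

```python
import torch
import torch.nn as nn

VITALS = ["heart_rate", "resp_rate", "sbp", "dbp", "temperature", "o2_sat"]

class VitalSignTokenizer(nn.Module):
    """Turns (feature id, value) pairs into order-free tokens for cross-attention."""
    def __init__(self, n_features: int, d_model: int = 128):
        super().__init__()
        # A learned embedding per vital-sign *identity* replaces the positional encoding.
        self.feature_id = nn.Embedding(n_features, d_model)
        self.value_proj = nn.Linear(1, d_model)

    def forward(self, feature_ids: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # feature_ids, values: (batch, n_obs) -> tokens: (batch, n_obs, d_model)
        return self.feature_id(feature_ids) + self.value_proj(values.unsqueeze(-1))

tokenizer = VitalSignTokenizer(n_features=len(VITALS))
ids = torch.tensor([[0, 1, 2, 3, 4, 5]])                      # which vitals were measured
vals = torch.tensor([[88.0, 18.0, 122.0, 80.0, 37.1, 97.0]])  # their values at triage
tokens = tokenizer(ids, vals)                                 # (1, 6, 128)

# Permuting the columns (with their ids) yields the same tokens, only reordered,
# so a Perceiver-style cross-attention over this set is insensitive to column order.
perm = torch.randperm(len(VITALS))
tokens_perm = tokenizer(ids[:, perm], vals[:, perm])
print(torch.allclose(tokens[:, perm], tokens_perm))           # True
```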
Zero-shot Medical Image Translation via Frequency-Guided Diffusion Models ; Recently, the diffusion model has emerged as a superior generative model that can produce high-quality images with excellent realism. There is a growing interest in applying diffusion models to image translation tasks. However, for medical image translation, the existing diffusion models are deficient in accurately retaining structural information, since the structure details of source domain images are lost during the forward diffusion process and cannot be fully recovered through learned reverse diffusion, while the integrity of anatomical structures is extremely important in medical images. Training and conditioning diffusion models using paired source and target images with matching anatomy can help. However, such paired data are very difficult and costly to obtain, and may also reduce the robustness of the developed model to out-of-distribution testing data. We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model for structure-preserving image translation. Based on its design, FGDM allows zero-shot learning, as it can be trained solely on data from the target domain and used directly for source-to-target domain translation without any exposure to source-domain data during training. We trained FGDM solely on head-and-neck CT data and evaluated it on both head-and-neck and lung cone-beam CT (CBCT)-to-CT translation tasks. FGDM outperformed the state-of-the-art methods (GAN-based, VAE-based, and diffusion-based) in all metrics, showing its significant advantages in zero-shot medical image translation.
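As a small illustration of the frequency-domain guidance idea in the abstract above, the NumPy sketch below separates an image into low-frequency (structural) and high-frequency (detail) content with a Gaussian filter in the Fourier domain; the structural part is the kind of signal that can be used to guide the reverse diffusion so anatomy is preserved. The cutoff value and filter shape are illustrative assumptions, not the paper's exact filters.

```python
import numpy as np

def frequency_filter(image: np.ndarray, cutoff: float = 0.1, low_pass: bool = True):
    """Apply a Gaussian low-/high-pass filter to a 2D image in the frequency domain."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)           # distance from the zero frequency
    gauss = np.exp(-(radius ** 2) / (2 * cutoff ** 2))
    mask = gauss if low_pass else 1.0 - gauss
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

ct_slice = np.random.rand(256, 256)               # stand-in for a CBCT/CT slice
structure = frequency_filter(ct_slice, cutoff=0.05, low_pass=True)   # structural guide
detail = frequency_filter(ct_slice, cutoff=0.05, low_pass=False)     # appearance/detail
print(structure.shape, detail.shape)
```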
Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble ; Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which helps get rid of the dependence on annotated data for multi-organ segmentation. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdominal datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
PBNR: Prompt-based News Recommender System ; Online news platforms often use personalized news recommendation methods to help users discover articles that align with their interests. These methods typically predict a matching score between a user and a candidate article to reflect the user's preference for the article. Some previous works have used language model techniques, such as the attention mechanism, to capture users' interests based on their past behaviors and to understand the content of articles. However, these existing model architectures require adjustments if additional information is taken into account. Pre-trained large language models, which can better capture word relationships and comprehend contexts, have seen significant development in recent years, and these pre-trained models have the advantages of transfer learning and reduced training time for downstream tasks. Meanwhile, prompt learning is a newly developed technique that leverages pre-trained language models by building task-specific guidance for output generation. To leverage textual information in news articles, this paper introduces pre-trained large language models and prompt learning to the community of news recommendation. The proposed model, prompt-based news recommendation (PBNR), treats personalized news recommendation as a text-to-text language task and designs personalized prompts to adapt to the pre-trained language model text-to-text transfer transformer (T5). Experimental studies using the Microsoft News dataset show that PBNR is capable of making accurate recommendations by taking into account various lengths of past behaviors of different users. PBNR can also easily adapt to new information without changing the model architecture or the training objective. Additionally, PBNR can make recommendations based on users' specific requirements, allowing human-computer interaction in the news recommendation field.
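To make the text-to-text formulation above concrete, here is a hedged Hugging Face sketch: a user's reading history and a candidate headline are written into a natural-language prompt and a T5 model generates a yes/no answer as the recommendation signal. The prompt template, label words, and use of an off-the-shelf `t5-small` checkpoint are illustrative assumptions; PBNR's actual prompts, training objective, and model are described in the paper.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

history = ["NASA announces new Artemis launch window",
           "Electric vehicle sales hit record high"]
candidate = "SpaceX schedules next Starship test flight"

# Personalized prompt: history and candidate are verbalized into one input string.
prompt = (
    "A user read the following news: " + " ; ".join(history) + ". "
    f"Would the user also like to read: {candidate}? Answer yes or no."
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=3)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In a trained system the generated label (or the likelihood of the "yes" token) would serve as the matching score; without task-specific fine-tuning the output of the base checkpoint is not meaningful.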
STraM: a framework for strategic national freight transport modeling ; To achieve carbon emission targets worldwide, decarbonization of the freight transport sector will be an important factor. To this end, national governments must make plans that facilitate this transition. National freight transport models are a useful tool to assess what the effects of various policies and investments may be. The state of the art consists of very detailed, static models. While useful for short-term policy assessment, these models are less suitable for the long-term planning necessary to facilitate the transition to low-carbon transportation in the upcoming decades. In this paper, we fill this gap by developing a framework for strategic national freight transport modeling, which we call STraM, and which can be characterized as a multi-period stochastic network design model based on a multimodal freight transport formulation. In STraM, we explicitly include several aspects that are lacking in state-of-the-art national freight transport models: the dynamic nature of long-term planning, as well as new, low-carbon fuel technologies and long-term uncertainties in the development of these technologies. We illustrate our model using a case study of Norway and discuss the resulting insights. In particular, we demonstrate the relevance of modeling multiple time periods, the importance of including long-term uncertainty in technology development, and the efficacy of carbon pricing.
Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples ; Segment Anything Model (SAM) has attracted significant attention recently due to its impressive zero-shot performance on various downstream tasks. The computer vision (CV) area might follow the natural language processing (NLP) area and embark on a path from task-specific vision models toward foundation models. However, deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model into making wrong predictions with imperceptible perturbations. Such vulnerability to adversarial attacks causes serious concerns when applying deep models to security-sensitive applications. Therefore, it is critical to know whether the vision foundation model SAM can also be fooled by adversarial attacks. To the best of our knowledge, our work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples. With the basic attack goal set to mask removal, we investigate the adversarial robustness of SAM in the full white-box setting and transfer-based black-box settings. Beyond the basic goal of mask removal, we further find that it is possible to generate any desired mask via adversarial attack.
CHAMELEON: OutSystems Live Bidirectional Transformations ; In model-driven engineering, the bidirectional transformation of models plays a crucial role in facilitating the use of editors that operate at different levels of abstraction. This is particularly important in the context of industrial-grade low-code platforms like OutSystems, which feature a comprehensive ecosystem of tools that complement the standard integrated development environment with domain-specific builders and abstract model viewers. We introduce CHAMELEON, a tool that enables the dynamic definition of a live bidirectional model transformation in a declarative manner by leveraging simple and intuitive component patterns. Through this approach, we can gradually define the view and synthesis paths to an abstract model built on top of a low-code metamodel. We devise a standard parser-generating technique for tree-like models that builds upon extended grammar definitions with constraints and name binders. We allow for a greater overlap of model patterns that can still be disambiguated for a clear lens-like behaviour of the transformation. CHAMELEON is evaluated on the fragment of the OutSystems language targeting the definition of user interfaces. To assess performance we used a large set of real OutSystems applications, with approximately 200K UI widgets, and a database of curated widget patterns. We found a worst-case processing time of 92ms for complete models in our benchmark, which is still suitable for the operation of an interactive model editor.
Anatomically Detailed Simulation of Human Torso ; Existing digital human models approximate the human skeletal system using rigid bodies connected by rotational joints. While this simplification is considered acceptable for legs and arms, it significantly lacks the fidelity to model rich torso movements in common activities such as dancing, yoga, and various sports. Research from biomechanics provides more detailed modeling for parts of the torso, but these models often operate in isolation and are not fast and robust enough to support computationally heavy applications and large-scale data generation for full-body digital humans. This paper proposes a new torso model that aims to achieve high fidelity both in perception and in functionality, while being computationally feasible for simulation and optimal control tasks. We build a detailed human torso model consisting of various anatomical components, including facets, ligaments, and intervertebral discs, by coupling efficient finite-element and rigid-body simulations. Given an existing motion capture sequence without dense markers placed on the torso, our new model is able to recover the underlying torso bone movements. Our method is so robust that it can be used to automatically retrofit the entire Mixamo motion database of highly diverse human motions without user intervention. We also show that our model is computationally efficient for solving trajectory optimization of highly dynamic full-body movements, without relying on any reference motion. The physiological validity of the model is validated against established literature.
FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment ; Different from conventional federated learning, personalized federated learning (PFL) is able to train a customized model for each individual client according to its unique requirements. The mainstream approach is to adopt a kind of weighted aggregation method to generate personalized models, in which weights are determined by the loss values or model parameters among different clients. However, such methods require clients to download others' models. This not only sharply increases communication traffic but also potentially infringes data privacy. In this paper, we propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the above problem, which leverages the parameter server (PS) to compute personalized aggregation weights based on models collected from clients. In this way, FedDWA can capture similarities between clients with much less communication overhead. More specifically, we formulate the PFL problem as an optimization problem by minimizing the distance between personalized models and guidance models, so as to customize aggregation weights for each client. Guidance models are obtained by one-step-ahead adaptation on individual clients. Finally, we conduct extensive experiments using five real datasets and the results demonstrate that FedDWA can significantly reduce the communication traffic and achieve much higher model accuracy than the state-of-the-art approaches.
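The NumPy sketch below illustrates the server-side step described above: the parameter server receives each client's model and its one-step-ahead guidance model, then derives personalized aggregation weights from distances between a client's guidance model and every client's model, giving closer models larger weight. The softmax-style weighting, flattening scheme, and all names are assumptions for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

def flatten(model_params):
    """Concatenate a list of parameter arrays into one vector."""
    return np.concatenate([p.ravel() for p in model_params])

def personalized_weights(client_models, guidance_models, temperature=1.0):
    """Return an (n_clients, n_clients) matrix of aggregation weights."""
    flat_models = np.stack([flatten(m) for m in client_models])
    flat_guides = np.stack([flatten(g) for g in guidance_models])
    n = len(client_models)
    weights = np.zeros((n, n))
    for i in range(n):
        # Distance between client i's guidance model and every client's model.
        dists = np.linalg.norm(flat_models - flat_guides[i], axis=1)
        weights[i] = np.exp(-dists / temperature)
        weights[i] /= weights[i].sum()
    return weights

def aggregate(client_models, weights, i):
    """Personalized model for client i: weighted average of all client models."""
    return [sum(w * m[k] for w, m in zip(weights[i], client_models))
            for k in range(len(client_models[0]))]

rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(5)]
guides = [[p + 0.01 * rng.normal(size=p.shape) for p in c] for c in clients]

W = personalized_weights(clients, guides)
personal_model_0 = aggregate(clients, W, i=0)
print(W[0].round(3))
```

Note that only model parameters travel to the server, so clients never download each other's models, which is the communication and privacy advantage the abstract emphasizes.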
Uncertainty Quantification of a Wind Tunnel-Informed Stochastic Wind Load Model for Wind Engineering Applications ; The simulation of stochastic wind loads is necessary for many applications in wind engineering. The proper orthogonal decomposition (POD)-based spectral representation method is a popular approach used for this purpose due to its computational efficiency. For general wind directions and building configurations, the data-driven POD-based stochastic model is an alternative that uses wind tunnel smoothed auto- and cross-spectral densities as input to calibrate the eigenvalues and eigenvectors of the target load process. Even though this method is straightforward and presents advantages compared to using empirical target auto- and cross-spectral densities, the limitations and errors associated with this model have not been investigated. To this end, an extensive experimental study on a rectangular building model considering multiple wind directions and configurations was conducted to allow the quantification of uncertainty related to the use of wind tunnel data for calibration and validation of the data-driven POD-based stochastic model. Errors associated with the use of typical wind tunnel records for model calibration, the model itself, and the truncation of modes were quantified. Results demonstrate that the data-driven model can efficiently simulate stochastic wind loads with negligible model errors, while the errors associated with calibration to typical wind tunnel data can be important.
PerFedRec: Enhancing Personalized Federated Recommendation with Self-Supervised Pre-Training ; Federated recommendation systems employ federated learning techniques to safeguard user privacy by transmitting model parameters instead of raw user data between user devices and the central server. Nevertheless, current federated recommender systems face challenges such as heterogeneity and personalization, model performance degradation, and communication bottlenecks. Previous studies have attempted to address these issues, but none have been able to solve them simultaneously. In this paper, we propose a novel framework, named PerFedRec, to enhance personalized federated recommendation with self-supervised pre-training. Specifically, we utilize the privacy-preserving mechanism of federated recommender systems to generate two augmented graph views, which are used as contrastive tasks in self-supervised graph learning to pre-train the model. Pre-training enhances the performance of federated models by improving the uniformity of representation learning. Also, by providing a better initial state for federated training, pre-training makes the overall training converge faster, thus alleviating the heavy communication burden. We then construct a collaborative graph to learn the client representation through a federated graph neural network. Based on these learned representations, we cluster users into different user groups and learn personalized models for each cluster. Each user learns a personalized model by combining the global federated model, the cluster-level federated model, and its own fine-tuned local model. Experiments on three real-world datasets show that our proposed method achieves superior performance over existing methods.
Energy cost and machine learning accuracy impact of k-anonymisation and synthetic data techniques ; To address increasing societal concerns regarding privacy and climate, the EU adopted the General Data Protection Regulation (GDPR) and committed to the Green Deal. Considerable research has studied the energy efficiency of software and the accuracy of machine learning models trained on anonymised data sets. Recent work began exploring the impact of privacy-enhancing techniques (PETs) on both the energy consumption and the accuracy of machine learning models, focusing on k-anonymity. As synthetic data is becoming an increasingly popular PET, this paper analyses the energy consumption and accuracy of two phases: (a) applying privacy-enhancing techniques to the concerned data set, and (b) training the models on the concerned privacy-enhanced data set. We use two privacy-enhancing techniques, k-anonymisation (using generalisation and suppression) and synthetic data, and three machine-learning models. Each model is trained on each privacy-enhanced data set. Our results show that models trained on k-anonymised data consume less energy than models trained on the original data, with similar performance regarding accuracy. Models trained on synthetic data have a similar energy consumption and a similar to lower accuracy compared to models trained on the original data.
Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace ; To explain the black-box properties of AI models, many approaches, such as post hoc and intrinsically interpretable models, have been proposed to provide plausible explanations that identify human-understandable features/concepts that a trained model uses to make predictions, and attention mechanisms have been widely used to aid in model interpretability by visualizing that information. However, the problem of configuring an interpretable model that effectively communicates and coordinates among computational modules has received less attention. A recently proposed shared global workspace theory demonstrated that networks of distributed modules can benefit from sharing information with a bandwidth-limited working memory, because the communication constraints encourage specialization, compositionality, and synchronization among the modules. Inspired by this, we consider how such shared working memories can be realized to build intrinsically interpretable models with better interpretability and performance. Toward this end, we propose Concept-Centric Transformers, a simple yet effective configuration of the shared global workspace for interpretability, consisting of (i) an object-centric architecture for extracting semantic concepts from input features, (ii) a cross-attention mechanism between the learned concepts and input embeddings, and (iii) standard classification and additional explanation losses to allow human analysts to directly assess an explanation for the model's classification reasoning. We test our approach against other existing concept-based methods on classification tasks for various datasets, including CIFAR-100 superclasses, CUB-200-2011 bird species, and ImageNet, and we show that our model not only achieves better classification accuracy than all selected methods across all problems but also generates more consistent concept-based explanations of the classification output.
Lexinvariant Language Models ; Token embeddings, a mapping from discrete lexical symbols to continuous vectors, are at the heart of any language model (LM). However, lexical symbol meanings can also be determined and even redefined by their structural role in a long context. In this paper, we ask: is it possible for a language model to be performant without any fixed token embeddings? Such a language model would have to rely entirely on the co-occurrence and repetition of tokens in the context rather than the a priori identity of any token. To answer this, we study lexinvariant language models that are invariant to lexical symbols and therefore do not need fixed token embeddings in practice. First, we prove that we can construct a lexinvariant LM that converges to the true language model at a uniform rate that is polynomial in the context length, with a constant factor that is sublinear in the vocabulary size. Second, to build a lexinvariant LM, we simply encode tokens using random Gaussian vectors, such that each token maps to the same representation within each sequence but different representations across sequences. Empirically, we demonstrate that it can indeed attain perplexity comparable to that of a standard language model, given a sufficiently long context. We further explore two properties of lexinvariant language models. First, given text generated from a substitution cipher of English, it implicitly implements Bayesian in-context deciphering and infers the mapping to the underlying real tokens with high accuracy. Second, it has on average 4x better accuracy on synthetic in-context reasoning tasks. Finally, we discuss regularizing standard language models towards lexinvariance and potential practical applications.
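The PyTorch sketch below shows the core construction from the abstract above: instead of a fixed embedding table, every sequence draws fresh random Gaussian vectors per vocabulary item, so a token's representation is stable within a sequence but carries no identity across sequences; the model must therefore rely on co-occurrence and repetition in context. Dimensions and names are illustrative.

```python
import torch

def lexinvariant_embed(token_ids: torch.Tensor, vocab_size: int, d_model: int):
    """token_ids: (batch, seq_len) -> embeddings: (batch, seq_len, d_model)."""
    batch = token_ids.shape[0]
    # A new random Gaussian embedding table is drawn per sequence in the batch.
    tables = torch.randn(batch, vocab_size, d_model) / d_model ** 0.5
    # Look up each token in its own sequence's random table.
    return torch.gather(
        tables, 1, token_ids.unsqueeze(-1).expand(-1, -1, d_model)
    )

ids = torch.randint(0, 1000, (2, 16))
emb = lexinvariant_embed(ids, vocab_size=1000, d_model=64)
print(emb.shape)   # (2, 16, 64); repeated ids within a row share one vector

# These embeddings would then feed a standard Transformer; the rest of the
# architecture is unchanged, which is what makes the construction so simple.
```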
Modeling Dual Period-Varying Preferences for Takeaway Recommendation ; Takeaway recommender systems, which aim to accurately provide stores that offer foods meeting users' interests, have served billions of users in our daily life. Different from traditional recommendation, takeaway recommendation faces two main challenges: (1) Dual Interaction-Aware Preference Modeling. Traditional recommendation commonly focuses on users' single preferences for items, while takeaway recommendation needs to comprehensively consider users' dual preferences for stores and foods. (2) Period-Varying Preference Modeling. Conventional recommendation generally models continuous changes in users' preferences from a session-level or day-level perspective. However, in practical takeaway systems, users' preferences vary significantly during the morning, noon, night, and late-night periods of the day. To address these challenges, we propose a Dual Period-Varying Preference modeling (DPVP) approach for takeaway recommendation. Specifically, we design a dual interaction-aware module, aiming to capture users' dual preferences based on their interactions with stores and foods. Moreover, to model various preferences in different time periods of the day, we propose a time-based decomposition module as well as a time-aware gating mechanism. Extensive offline and online experiments demonstrate that our model outperforms state-of-the-art methods on real-world datasets and is capable of modeling the dual period-varying preferences. Moreover, our model has been deployed online on the Meituan Takeaway platform, leading to an average improvement in GMV (Gross Merchandise Value) of 0.70%.
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ; In this paper, we present StyleTTS 2, a text-to-speech (TTS) model that leverages style diffusion and adversarial training with large speech language models (SLMs) to achieve human-level TTS synthesis. StyleTTS 2 differs from its predecessor by modeling styles as a latent random variable through diffusion models to generate the most suitable style for the text without requiring reference speech, achieving efficient latent diffusion while benefiting from the diverse speech synthesis offered by diffusion models. Furthermore, we employ large pre-trained SLMs, such as WavLM, as discriminators with our novel differentiable duration modeling for end-to-end training, resulting in improved speech naturalness. StyleTTS 2 surpasses human recordings on the single-speaker LJSpeech dataset and matches them on the multi-speaker VCTK dataset as judged by native English speakers. Moreover, when trained on the LibriTTS dataset, our model outperforms previous publicly available models for zero-shot speaker adaptation. This work achieves the first human-level TTS on both single- and multi-speaker datasets, showcasing the potential of style diffusion and adversarial training with large SLMs. The audio demos and source code are available at https://styletts2.github.io.
A new computational perceived risk model for automated vehicles based on potential collision avoidance difficulty (PCAD) ; Perceived risk is crucial in designing trustworthy and acceptable vehicle automation systems. However, our understanding of its dynamics is limited, and models for perceived risk dynamics are scarce in the literature. This study formulates a new computational perceived risk model based on potential collision avoidance difficulty (PCAD) for drivers of SAE Level 2 driving automation. PCAD uses the 2D safe velocity gap as the potential collision avoidance difficulty, and takes into account collision severity. The safe velocity gap is defined as the 2D gap between the current velocity and the safe velocity region, and represents the amount of braking and steering needed, considering the behavioural uncertainty of neighbouring vehicles and the imprecise control of the subject vehicle. PCAD predicts perceived risk both in continuous time and per event. We compare the PCAD model with three state-of-the-art models and analyse the models both theoretically and empirically with two unique datasets: Dataset Merging and Dataset Obstacle Avoidance. The PCAD model generally outperforms the other models in terms of model error, detection rate, and the ability to accurately capture the tendencies of human drivers' perceived risk, albeit at a longer computation time. Additionally, the study shows that perceived risk is not static and varies with the surrounding traffic conditions. This research advances our understanding of perceived risk in automated driving and paves the way for improved safety and acceptance of driving automation systems.
Catastrophic Forgetting in the Context of Model Updates ; A large obstacle to deploying deep learning models in practice is the process of updating models post-deployment (ideally, frequently). Deep neural networks can cost many thousands of dollars to train. When new data comes in the pipeline, you can train a new model from scratch (randomly initialized weights) on all existing data. Instead, you can take an existing model and fine-tune (continue to train) it on new data. The former is costly and slow. The latter is cheap and fast, but catastrophic forgetting generally causes the new model to 'forget' how to classify older data well. There are a plethora of complicated techniques for keeping models from forgetting their past learnings. Arguably the most basic is to mix a small amount of past data into the new data during fine-tuning, also known as 'data rehearsal'. In this paper, we compare various methods of limiting catastrophic forgetting and conclude that if you can maintain access to a portion of your past data (or tasks), data rehearsal is ideal in terms of overall accuracy across all time periods, and performs even better when combined with methods like Elastic Weight Consolidation (EWC). Especially when the amount of past data (past 'tasks') is large compared to the new data, the cost of updating an existing model is far cheaper and faster than training a new model from scratch.
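Here is a minimal PyTorch sketch of the data rehearsal idea described in the abstract above: when fine-tuning a deployed model on new data, a small random slice of stored past data is mixed into the training set so the model does not forget earlier behavior. The 20% rehearsal ratio, the toy linear model, and the synthetic data are illustrative choices, not the paper's experimental setup.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader, Subset

def rehearsal_dataset(new_ds, old_ds, rehearsal_fraction=0.2):
    """Combine all new data with a random subset of the old data."""
    n_old = int(rehearsal_fraction * len(new_ds))
    idx = torch.randperm(len(old_ds))[:n_old]
    return ConcatDataset([new_ds, Subset(old_ds, idx)])

# Toy old/new data for a 2-class problem.
old = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
new = TensorDataset(torch.randn(200, 10), torch.randint(0, 2, (200,)))

model = torch.nn.Linear(10, 2)                       # stands in for the deployed model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loader = DataLoader(rehearsal_dataset(new, old), batch_size=32, shuffle=True)

for x, y in loader:                                  # one fine-tuning epoch
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print("fine-tuned with rehearsal, final batch loss:", float(loss))
```

The appeal of the approach is its cost profile: only a small buffer of past examples must be retained, and fine-tuning over new-plus-rehearsal data is far cheaper than retraining from scratch on everything.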
RemoteCLIP: A Vision Language Foundation Model for Remote Sensing ; General-purpose foundation models have become increasingly important in the field of artificial intelligence. While self-supervised learning (SSL) and Masked Image Modeling (MIM) have led to promising results in building such foundation models for remote sensing, these models primarily learn low-level features, require annotated data for fine-tuning, and are not applicable to retrieval and zero-shot applications due to the lack of language understanding. In response to these limitations, we propose RemoteCLIP, the first vision-language foundation model for remote sensing that aims to learn robust visual features with rich semantics, as well as aligned text embeddings for seamless downstream application. To address the scarcity of pre-training data, we leverage data scaling, converting heterogeneous annotations with Box-to-Caption (B2C) and Mask-to-Box (M2B) conversion, and further incorporating UAV imagery, resulting in a 12x larger pre-training dataset. RemoteCLIP can be applied to a variety of downstream tasks, including zero-shot image classification, linear probing, k-NN classification, few-shot classification, image-text retrieval, and object counting. Evaluations on 16 datasets, including a newly introduced RemoteCount benchmark to test the object counting ability, show that RemoteCLIP consistently outperforms baseline foundation models across different model scales. Impressively, RemoteCLIP outperforms the previous SoTA by 9.14 mean recall on the RSITMD dataset and by 8.92 on the RSICD dataset. For zero-shot classification, our RemoteCLIP outperforms the CLIP baseline by up to 6.39 average accuracy on 12 downstream datasets. Pretrained models are available at https://github.com/ChenDelong1999/RemoteCLIP.
Towards Characterizing Domain Counterfactuals for Invertible Latent Causal Models ; Learning latent causal models from data has many important applications such as robustness, model extrapolation, and counterfactuals. Most prior theoretic work has focused on full causal discovery (i.e., recovering the true latent variables) but requires strong assumptions such as linearity, or fails to provide any analysis of the equivalence class of solutions (e.g., IRM). Instead of full causal discovery, we focus on a specific type of causal query called the domain counterfactual, which hypothesizes what a sample would have looked like if it had been generated in a different domain (or environment). Concretely, we assume domain-specific invertible latent structural causal models and a shared invertible observation function, both of which are less restrictive assumptions than prior theoretic works. Under these assumptions, we define domain counterfactually equivalent models and prove that any model can be transformed into an equivalent model via two invertible functions. This constructive property provides a tight characterization of the domain counterfactual equivalence classes. Building upon this result, we prove that every equivalence class contains a model where all intervened variables are at the end when topologically sorted by the causal DAG, i.e., all non-intervened variables have non-intervened ancestors. This surprising result suggests that an algorithm that only allows intervention in the last k latent variables may improve model estimation for counterfactuals. In experiments, we enforce the sparse intervention hypothesis via this theoretic result by constraining the latent SCMs to differ only in the last few causal mechanisms, and we demonstrate the feasibility of this algorithm in simulated and image-based experiments.
Sparse Bayesian Estimation of Parameters in Linear-Gaussian State-Space Models ; State-space models (SSMs) are a powerful statistical tool for modelling time-varying systems via a latent state. In these models, the latent state is never directly observed. Instead, a sequence of data points related to the state are obtained. The linear-Gaussian state-space model is widely used, since it allows for exact inference when all model parameters are known; however, this is rarely the case. The estimation of these parameters is a very challenging but essential task for performing inference and prediction. In the linear-Gaussian model, the state dynamics are described via a state transition matrix. This model parameter is known to be hard to estimate, since it encodes the relationships between the state elements, which are never observed. In many applications, this transition matrix is sparse, since not all state components directly affect all other state components. However, most parameter estimation methods do not exploit this feature. In this work we propose SpaRJ, a fully probabilistic Bayesian approach that obtains sparse samples from the posterior distribution of the transition matrix. Our method explores sparsity by traversing a set of models that exhibit differing sparsity patterns in the transition matrix. Moreover, we also design new effective rules to explore transition matrices within the same level of sparsity. This novel methodology has strong theoretical guarantees, and unveils the latent structure of the data generating process, thereby enhancing interpretability. The performance of SpaRJ is showcased in an example with dimension 144 in the parameter space, and in a numerical example with real data.
MPSTAN: Metapopulation-based Spatio-Temporal Attention Network for Epidemic Forecasting ; Accurate epidemic forecasting plays a vital role for governments in developing effective prevention measures for suppressing epidemics. Most of the present spatio-temporal models cannot provide a general framework for stable and accurate forecasting of epidemics with diverse evolution trends. Incorporating epidemiological domain knowledge, ranging from single-patch to multi-patch, into neural networks is expected to improve forecasting accuracy. However, relying solely on single-patch knowledge neglects inter-patch interactions, while constructing multi-patch knowledge is challenging without population mobility data. To address the aforementioned problems, we propose a novel hybrid model called the Metapopulation-based Spatio-Temporal Attention Network (MPSTAN). This model aims to improve the accuracy of epidemic forecasting by incorporating multi-patch epidemiological knowledge into a spatio-temporal model and adaptively defining inter-patch interactions. Moreover, we incorporate inter-patch epidemiological knowledge into both the model construction and the loss function to help the model learn epidemic transmission dynamics. Extensive experiments conducted on two representative datasets with different epidemiological evolution trends demonstrate that our proposed model outperforms the baselines and provides more accurate and stable short- and long-term forecasting. We confirm the effectiveness of domain knowledge in the learning model and investigate the impact of different ways of integrating domain knowledge on forecasting. We observe that using domain knowledge in both model construction and loss functions leads to more efficient forecasting, and selecting appropriate domain knowledge can improve accuracy further.
Transferable and Robust Machine Learning Model for Predicting Stability of Si Anodes for Multivalent Cation Batteries ; Data-driven methodology has become a key tool in computationally predicting material properties. Currently, these techniques are expensive due to the computational requirements for generating sufficient training data for high-precision machine learning models. In this study, we present a Support Vector Regression (SVR)-based machine learning model to predict the stability of silicon (Si) alkaline metal alloys, with a strong emphasis on the transferability of the model to new silicon alloys with different electronic configurations and structures. We elaborate on the role of the structural descriptor in imparting transferability to the model, which is trained on limited data (750 Si alloys derived from the Materials Project database). Three popular descriptors, namely X-Ray Diffraction (XRD), Sine Coulomb Matrix (SCM), and Orbital Field Matrix (OFM), are evaluated for representing the Si alloys. The material structures are represented by these descriptors in the SVR model, coupled with hyperparameter tuning techniques like Grid Search CV and Bayesian Optimization (BO), to find the best-performing model for predicting the total energy, formation energy, and packing fraction of the Si alloy systems. The models are trained on Si alloys with lithium (Li), sodium (Na), potassium (K), magnesium (Mg), calcium (Ca), and aluminum (Al) metals, where the Si-Na and Si-Al systems are used as test structures. Our results show that XRD, an experimentally derived characterization of structures, performs most reliably as a descriptor for total energy prediction of new Si alloys. The study demonstrates that through careful selection of training data, the use of hyperparameter tuning methods, and the choice of appropriate structural descriptors, the data requirements for robust and accurate ML models can be reduced.
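The scikit-learn sketch below mirrors the modeling pipeline described above: a structural descriptor vector (here a random stand-in for an XRD/SCM/OFM descriptor) is fed to a Support Vector Regression model whose hyperparameters are chosen by grid-search cross-validation. The descriptor values, parameter grid, and target are illustrative, not the study's data or exact settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(750, 128))          # descriptor vectors for ~750 Si alloys (stand-in)
y = rng.normal(size=750)                 # e.g., formation energy per atom (stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Hyperparameter grid for the RBF-kernel SVR; names follow make_pipeline's step naming.
param_grid = {
    "svr__C": [0.1, 1.0, 10.0],
    "svr__gamma": ["scale", 0.01, 0.001],
    "svr__epsilon": [0.01, 0.1],
}
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid, cv=5, scoring="neg_mean_absolute_error",
)
model.fit(X_tr, y_tr)
print("best params:", model.best_params_)
print("held-out score (neg. MAE):", model.score(X_te, y_te))
```

Swapping the random `X` for real XRD, SCM, or OFM descriptors (and optionally replacing the grid search with Bayesian optimization) reproduces the comparison the study describes.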
Symbol emergence as interpersonal cross-situational learning: the emergence of lexical knowledge with combinatoriality ; We present a computational model for a symbol emergence system that enables the emergence of lexical knowledge with combinatoriality among agents through a Metropolis-Hastings naming game and cross-situational learning. Many computational models have been proposed to investigate combinatoriality in emergent communication and symbol emergence in cognitive and developmental robotics. However, existing models do not sufficiently address category formation based on sensory-motor information and semiotic communication through the exchange of word sequences within a single integrated model. Our proposed model facilitates the emergence of lexical knowledge with combinatoriality by performing category formation using multimodal sensory-motor information and enabling semiotic communication through the exchange of word sequences among agents in a unified model. Furthermore, the model enables an agent to predict sensory-motor information for unobserved situations by combining words associated with categories in each modality. We conducted two experiments with two humanoid robots in a simulated environment to evaluate our proposed model. The results demonstrated that the agents can acquire lexical knowledge with combinatoriality through interpersonal cross-situational learning based on the Metropolis-Hastings naming game and cross-situational learning. Furthermore, our results indicate that the lexical knowledge developed using our proposed model exhibits generalization performance for novel situations through interpersonal cross-modal inference.
Learning Environment Models with Continuous Stochastic Dynamics ; Solving control tasks in complex environments automatically through learning offers great potential. While contemporary techniques from deep reinforcement learning (DRL) provide effective solutions, their decision-making is not transparent. We aim to provide insights into the decisions faced by the agent by learning an automaton model of environmental behavior under the control of an agent. However, for most control problems, automata learning is not scalable enough to learn a useful model. In this work, we raise the capabilities of automata learning such that it is possible to learn models for environments that have complex and continuous dynamics. The core of the scalability of our method lies in the computation of an abstract state-space representation, by applying dimensionality reduction and clustering on the observed environmental state space. The stochastic transitions are learned via passive automata learning from observed interactions of the agent and the environment. In an iterative model-based RL process, we sample additional trajectories to learn an accurate environment model in the form of a discrete-state Markov decision process (MDP). We apply our automata learning framework on popular RL benchmarking environments in the OpenAI Gym, including LunarLander, CartPole, Mountain Car, and Acrobot. Our results show that the learned models are so precise that they enable the computation of policies solving the respective control tasks. Yet the models are more concise and more general than neural-network-based policies, and by using MDPs we benefit from the wealth of tools available for analyzing them. When solving the task of LunarLander, the learned model even achieved similar or higher rewards than deep RL policies learned with stable-baselines3.
Modeling Parallel Programs using Large Language Models ; Parallel software codes in high performance computing (HPC) continue to grow in complexity and scale as we enter the exascale era. A diverse set of emerging hardware and programming paradigms makes developing, optimizing, and maintaining parallel software burdensome for developers. One way to alleviate some of these burdens is with automated development and analysis tools. Such tools can perform complex and/or remedial tasks for developers that increase their productivity and decrease the chance of error. So far, such tools for code development and performance analysis have been limited in the complexity of tasks they can perform. However, with recent advancements in language modeling, and the wealth of code-related data that is now available online, these tools have started to utilize predictive language models to automate more complex tasks. In this paper, we show how large language models (LLMs) can be applied to tasks specific to high performance and scientific codes. We train LLMs using code and performance data that is specific to parallel codes. We compare several recent LLMs on HPC-related tasks and introduce a new model, HPC-Coder, trained on parallel code. In our experiments, we show that this model can auto-complete HPC functions where general models cannot, decorate for loops with OpenMP pragmas, and model performance changes in two scientific application repositories.
Koopman-Based Surrogate Models for Multi-Objective Optimization of Agent-Based Systems ; Agent-based models (ABMs) provide an intuitive and powerful framework for studying social dynamics by modeling the interactions of individuals from the perspective of each individual. In addition to simulating and forecasting the dynamics of ABMs, the demand to solve optimization problems to support, for example, decision-making processes naturally arises. Most ABMs, however, are non-deterministic, high-dimensional dynamical systems, so objectives defined in terms of their behavior are computationally expensive. In particular, if the number of agents is large, evaluating the objective functions often becomes prohibitively time-consuming. We consider data-driven reduced models based on the Koopman generator to enable the efficient solution of multi-objective optimization problems involving ABMs. In a first step, we show how to obtain data-driven reduced models of non-deterministic dynamical systems such as ABMs that depend on potentially nonlinear control inputs. In the second step, we use them as surrogate models to solve multi-objective optimal control problems. We first illustrate our approach using the example of a voter model, where we compute optimal controls to steer the agents to a predetermined majority, and then using the example of an epidemic ABM, where we compute optimal containment strategies in a prototypical situation. We demonstrate that the surrogate models effectively approximate the Pareto-optimal points of the ABM dynamics by comparing the surrogate-based results with test points, where the objectives are evaluated using the ABM. Our results show that when objectives are defined by the dynamic behavior of ABMs, data-driven surrogate models support or even enable the solution of multi-objective optimization problems.
EHRSHOT An EHR Benchmark for FewShot Evaluation of Foundation Models ; While the general machine learning ML community has benefited from public datasets, tasks, and models, the progress of ML in healthcare has been hampered by a lack of such shared assets. The success of foundation models creates new challenges for healthcare ML by requiring access to shared pretrained models to validate performance benefits. We help address these challenges through three contributions. First, we publish a new dataset, EHRSHOT, containing deidentified structured data from the electronic health records EHRs of 6,712 patients from Stanford Medicine. Unlike MIMIC-III/IV and other popular EHR datasets, EHRSHOT is longitudinal and not restricted to ICU/ED patients. Second, we publish the weights of a 141M parameter clinical foundation model pretrained on the structured EHR data of 2.57M patients. We are one of the first to fully release such a model for coded EHR data; in contrast, most prior models released for clinical data (e.g., GatorTron, ClinicalBERT) only work with unstructured text and cannot process the rich, structured data within an EHR. We provide an endtoend pipeline for the community to validate and build upon its performance. Third, we define 15 fewshot clinical prediction tasks, enabling evaluation of foundation models on benefits such as sample efficiency and task adaptation. The code to reproduce our results, as well as the model and dataset via a research data use agreement, are available at our GitHub repo: https://github.com/som-shahlab/ehrshot-benchmark
Efficient Domain Adaptation of Sentence Embeddings Using Adapters ; Sentence embeddings enable us to capture the semantic similarity of short texts. Most sentence embedding models are trained for general semantic textual similarity tasks. Therefore, to use sentence embeddings in a particular domain, the model must be adapted to it in order to achieve good results. Usually, this is done by finetuning the entire sentence embedding model for the domain of interest. While this approach yields stateoftheart results, all of the model's weights are updated during finetuning, making this method resourceintensive. Therefore, instead of finetuning entire sentence embedding models for each target domain individually, we propose to train lightweight adapters. These domainspecific adapters do not require finetuning all underlying sentence embedding model parameters. Instead, we only train a small number of additional parameters while keeping the weights of the underlying sentence embedding model fixed. Training domainspecific adapters allows always using the same base model and only exchanging the domainspecific adapters to adapt sentence embeddings to a specific domain. We show that using adapters for parameterefficient domain adaptation of sentence embeddings yields competitive performance within 1% of a domainadapted, entirely finetuned sentence embedding model while only training approximately 3.6% of the parameters.
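A minimal sketch of the adapter idea described above, assuming a standard bottleneck adapter with a residual connection; the module sizes and the toy base encoder are placeholders rather than the paper's exact architecture.

```python
# Hedged sketch: freeze the base sentence-embedding encoder and train only a small
# bottleneck adapter on top of it.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual keeps the base behavior recoverable

class AdaptedEncoder(nn.Module):
    def __init__(self, base_encoder, hidden_dim):
        super().__init__()
        self.base = base_encoder
        for p in self.base.parameters():
            p.requires_grad = False         # base sentence-embedding model stays fixed
        self.adapter = Adapter(hidden_dim)  # only these weights are trained per domain

    def forward(self, x):
        return self.adapter(self.base(x))

# Toy base encoder standing in for a pretrained sentence-embedding model.
base = nn.Sequential(nn.Linear(768, 768), nn.Tanh())
model = AdaptedEncoder(base, hidden_dim=768)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.3f}")
```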
Towards Deep Network Steganography From Networks to Networks ; With the widespread application of deep neural networks DNNs, how to covertly transmit DNN models over public channels has drawn increasing attention, especially for models trained for secret learning tasks. In this paper, we propose deep network steganography for the covert communication of DNN models. Unlike existing steganography schemes, which focus on the subtle modification of cover data to accommodate the secrets, our scheme is learning task oriented, where the learning task of the secret DNN model (termed the secretlearning task) is disguised as another ordinary learning task conducted in a stego DNN model (termed the stegolearning task). To this end, we propose a gradientbased filter insertion scheme to insert interference filters into the important positions in the secret DNN model to form a stego DNN model. These positions are then embedded into the stego DNN model using a key by side information hiding. Finally, we activate the interference filters by a partial optimization strategy, such that the generated stego DNN model works on the stegolearning task. We conduct experiments on both intratask and intertask steganography (i.e., the secret and stegolearning tasks belong to the same or different categories, respectively), both of which demonstrate the effectiveness of our proposed method for covert communication of DNN models.
PhysicsBased Modeling and Validation of 2D Schottky Barrier FieldEffect Transistors ; In this work, we describe the charge transport in twodimensional 2D Schottky barrier fieldeffect transistors SBFETs based on the carrier injection at the Schottky contacts. We first develop a numerical model for thermionic and fieldemission processes of carrier injection that occur at a Schottky contact. The numerical model is then simplified to yield an analytic equation for current versus voltage IV in the SBFET. The lateral electric field at the junction, controlling the carrier injection, is obtained by accurately modeling the electrostatics and the tunneling barrier width. Unlike previous SBFET models that are valid for nearequilibrium conditions, this model is applicable for a broad bias range as it incorporates the pertinent physics of thermionic, thermionic fieldemission, and fieldemission processes from a 3D metal into a 2D semiconductor. The IV model is validated against the measurement data of 2, 3, and 4layer ambipolar MoTe2 SBFETs fabricated in our lab, as well as the published data of unipolar 2D SBFETs using MoS2. Finally, the model's physics is tested rigorously by comparing modelgenerated data against TCAD simulation data.
Instruction Mining HighQuality Instruction Data Selection for Large Language Models ; Large language models typically undergo two training stages, pretraining and finetuning. Although largescale pretraining endows the model with strong capabilities to generate natural language responses, these pretrained models can still fail to understand human instructions at times. To enhance language models' ability to interpret and respond to instructions, instruction finetuning has emerged as a critical method in this area. Recent studies found that large language models can be finetuned to perform well even with a small amount of highquality instructionfollowing data. However, the selection of highquality datasets for finetuning language models still lacks clear guidelines to follow. In this paper, we propose InstructMining, a linear rule for evaluating instructionfollowing data quality. We formulate InstructMining using specific natural language indicators. To investigate the relationship between data quality and these indicators, we further conduct extensive finetuning experiments. The experiment results are then applied to estimating parameters in InstructMining. To further investigate its performance, we use InstructMining to select highquality data from unseen datasets. Results demonstrate that InstructMining can help select relatively highquality samples from various instructionfollowing datasets. Compared to models finetuned on unfiltered datasets, models finetuned on InstructMining selected datasets perform better in 42.5% of cases.
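The selection rule lends itself to a short illustration: score each instruction-response pair with a linear combination of natural-language indicators and keep the top-scoring fraction for finetuning. The indicator names, weights, and dataset below are placeholders, not the fitted InstructMining rule.

```python
# Hedged sketch of a linear data-quality rule in the spirit of InstructMining.
import numpy as np

def quality_scores(indicators, weights, bias=0.0):
    """indicators: (n_samples, n_indicators) matrix of natural-language indicators
    (e.g., response length, perplexity under a reference model, reward-model score)."""
    return indicators @ weights + bias

def select_top_fraction(dataset, scores, fraction=0.2):
    k = max(1, int(len(dataset) * fraction))
    keep = np.argsort(scores)[::-1][:k]          # highest estimated quality first
    return [dataset[i] for i in keep]

rng = np.random.default_rng(0)
dataset = [{"instruction": f"inst {i}", "response": f"resp {i}"} for i in range(1000)]
indicators = rng.normal(size=(1000, 3))          # stand-in indicator values
weights = np.array([0.5, -0.3, 0.8])             # hypothetical fitted coefficients
selected = select_top_fraction(dataset, quality_scores(indicators, weights))
print(len(selected))                             # 200 samples kept for finetuning
```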
Building a digital twin of EDFA a greybox modeling approach ; To enable intelligent and selfdriving optical networks, highaccuracy physical layer models are required. The dynamic wavelengthdependent gain effects of nonconstantpump erbiumdoped fiber amplifiers EDFAs remain a crucial modeling problem, as they determine the optical signaltonoise ratio as well as the magnitude of fiber nonlinearities. Blackbox datadriven models have been widely studied, but they require a large amount of data for training and suffer from poor generalizability. In this paper, we derive the gain spectra of EDFAs as a simple univariable linear function, and then, based on it, we propose a greybox EDFA gain modeling scheme. Experimental results show that for both automatic gain control AGC and automatic power control APC EDFAs, our model built with 8 data samples can achieve better performance than the neural network NN based model built with 900 data samples, which means the required data size for modeling can be reduced by at least two orders of magnitude. Moreover, in the experiment the proposed model demonstrates superior generalizability to unseen scenarios since it is based on the underlying physics of EDFAs. The results indicate that building a customized digital twin of each EDFA in optical networks becomes feasible, which is essential especially for next generation multiband network operations.
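The grey-box idea of a low-parameter gain law can be sketched as follows; the scalar operating variable and synthetic spectra are stand-ins, and the real model's derived univariable and calibration details are not reproduced here.

```python
# Simplified sketch: treat the gain at each wavelength channel as a linear function of a
# single scalar variable, so a handful of measurements suffice to fit slope and intercept
# per channel.
import numpy as np

def fit_gain_model(x, gain_spectra):
    """x: (n_samples,) scalar operating variable; gain_spectra: (n_samples, n_channels) in dB."""
    A = np.column_stack([x, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, gain_spectra, rcond=None)   # (2, n_channels): slope, intercept
    return coeffs

def predict_gain(coeffs, x_new):
    return np.outer(x_new, coeffs[0]) + coeffs[1]

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=8)                                   # only 8 training samples
true_slope, true_icpt = rng.normal(size=40), 20 + rng.normal(size=40)
spectra = np.outer(x, true_slope) + true_icpt + 0.01 * rng.normal(size=(8, 40))
coeffs = fit_gain_model(x, spectra)
print(predict_gain(coeffs, np.array([0.5])).shape)              # (1, 40) predicted gain spectrum
```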
ExposureDiffusion Learning to Expose for Lowlight Image Enhancement ; Previous raw imagebased lowlight image enhancement methods predominantly relied on feedforward neural networks to learn deterministic mappings from lowlight to normallyexposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physicsbased exposure model. Different from a vanilla diffusion model that has to perform Gaussian denoising, with the injected physicsbased exposure model, our restoration process can directly start from a noisy image instead of pure noise. As such, our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out the sideeffect in the iterative refinement when the intermediate results have already been wellexposed. The proposed framework is compatible with realpaired datasets, both real and synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements using different exposure models and backbones. Besides, the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feedforward neural model when few parameters are adopted.
Looking deeper into interpretable deep learning in neuroimaging a comprehensive survey ; Deep learning DL models have been popular due to their ability to learn directly from the raw data in an endtoend paradigm, alleviating the concern of a separate errorprone feature extraction phase. Recent DLbased neuroimaging studies have also witnessed a noticeable performance advancement over traditional machine learning algorithms. But the challenges of deep learning models still exist because of the lack of transparency in these models for their successful deployment in realworld applications. In recent years, Explainable AI XAI has undergone a surge of developments mainly to get intuitions of how the models reached the decisions, which is essential for safetycritical domains such as healthcare, finance, and law enforcement agencies. While the interpretability domain is advancing noticeably, researchers are still unclear about what aspect of model learning a post hoc method reveals and how to validate its reliability. This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain. Firstly, we summarize the current status of interpretability resources in general, focusing on the progression of methods, associated challenges, and opinions. Secondly, we discuss how multiple recent neuroimaging studies leveraged model interpretability to capture anatomical and functional brain alterations most relevant to model predictions. Finally, we discuss the limitations of the current practices and offer some valuable insights and guidance on how we can steer our future research directions to make deep learning models substantially interpretable and thus advance scientific understanding of brain disorders.
PUMA Secure Inference of LLaMA7B in Five Minutes ; With ChatGPT as a representative example, many companies have begun to provide services based on large Transformer models. However, using such a service inevitably leaks users' prompts to the model provider. Previous works have studied secure inference for Transformer models using secure multiparty computation MPC, where model parameters and clients' prompts are kept secret. Despite this, these frameworks are still limited in terms of model performance, efficiency, and deployment. To address these limitations, we propose the framework PUMA to enable fast and secure Transformer model inference. Our framework designs high quality approximations for expensive functions such as GeLU and softmax, which significantly reduce the cost of secure inference while preserving the model performance. Additionally, we design secure Embedding and LayerNorm procedures that faithfully implement the desired functionality without undermining the Transformer architecture. PUMA is about 2x faster than the stateoftheart framework MPCFormer (ICLR 2023) and achieves accuracy similar to plaintext models without finetuning, which previous works failed to achieve. PUMA can even evaluate LLaMA7B in around 5 minutes to generate 1 token. To the best of our knowledge, this is the first time that a model with such a parameter size has been evaluated under MPC. PUMA has been opensourced in the GitHub repository of SecretFlow-SPU.
How to Scale Your EMA ; Preserving training dynamics across batch sizes is an important tool for practical machine learning as it enables the tradeoff between batch size and wallclock time. This tradeoff is typically enabled by a scaling rule, for example, in stochastic gradient descent, one should scale the learning rate linearly with the batch size. Another important tool for practical machine learning is the model Exponential Moving Average EMA, which is a model copy that does not receive gradient information, but instead follows its target model with some momentum. This model EMA can improve the robustness and generalization properties of supervised learning, stabilize pseudolabeling, and provide a learning signal for SelfSupervised Learning SSL. Prior works have treated the model EMA separately from optimization, leading to different training dynamics across batch sizes and lower model performance. In this work, we provide a scaling rule for optimization in the presence of model EMAs and demonstrate its validity across a range of architectures, optimizers, and data modalities. We also show the rule's validity where the model EMA contributes to the optimization of the target model, enabling us to train EMAbased pseudolabeling and SSL methods at small and large batch sizes. For SSL, we enable training of BYOL up to batch size 24,576 without sacrificing performance, optimally a 6x wallclock time reduction.
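A compact sketch of the two ingredients discussed above, the EMA update and a batch-size-dependent rescaling of its momentum; the exponentiation rule shown (the momentum raised to the batch-size scaling factor kappa) is one commonly stated form of such a rule, and the numbers are illustrative.

```python
# Sketch of a model EMA update together with a batch-size scaling of its momentum.
# Treat the rho -> rho**kappa rule and the constants as illustrative assumptions.
import torch

@torch.no_grad()
def ema_update(ema_params, model_params, rho):
    for e, p in zip(ema_params, model_params):
        e.mul_(rho).add_(p, alpha=1.0 - rho)    # e <- rho * e + (1 - rho) * p

def scale_ema_momentum(rho_base, kappa):
    """kappa = new_batch_size / base_batch_size."""
    return rho_base ** kappa

model = torch.nn.Linear(10, 10)
ema = [p.detach().clone() for p in model.parameters()]
rho = scale_ema_momentum(rho_base=0.999, kappa=8)   # e.g. batch size scaled 8x
ema_update(ema, list(model.parameters()), rho)
print(f"scaled momentum: {rho:.6f}")
```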
Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ; Mathematical reasoning is a challenging task for large language models LLMs, and its scaling relationship with respect to LLM capacity is underexplored. In this paper, we investigate how the pretraining loss, supervised data amount, and augmented data amount influence the reasoning performance of a supervised LLM. We find that pretraining loss is a better indicator of the model's performance than the model's parameter count. We apply supervised finetuning SFT with different amounts of supervised data and empirically find a loglinear relation between data amount and model performance, and we find that better models improve less with enlarged supervised datasets. To augment more data samples for improving model performance without any human effort, we propose to apply Rejection sampling FineTuning RFT. RFT uses supervised models to generate and collect correct reasoning paths as augmented finetuning datasets. We find that with augmented samples containing more distinct reasoning paths, RFT improves mathematical reasoning performance more for LLMs. We also find that RFT brings more improvement for less performant LLMs. Furthermore, we combine rejection samples from multiple models, which pushes LLaMA7B to an accuracy of 49.3% on GSM8K, significantly outperforming the supervised finetuning SFT accuracy of 35.9%.
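The RFT data-collection loop described above reduces to a simple pattern: sample several reasoning paths per question, keep the ones whose final answer matches the reference, and deduplicate before finetuning. The sketch below is illustrative; generate_reasoning and extract_answer are placeholders for the actual model sampling and answer parsing.

```python
# Hedged sketch of rejection-sampling fine-tuning (RFT) data collection.
import random

def generate_reasoning(model, question, temperature=0.8):
    return f"step ... answer: {random.choice([41, 42, 43])}"    # stand-in for sampling an LLM

def extract_answer(path):
    return int(path.rsplit("answer:", 1)[1])

def rft_collect(model, dataset, k=16):
    augmented = []
    for question, gold in dataset:
        kept = set()
        for _ in range(k):
            path = generate_reasoning(model, question)
            if extract_answer(path) == gold:
                kept.add(path)                  # distinct correct reasoning paths only
        augmented.extend((question, path) for path in kept)
    return augmented

data = [("What is 6 * 7?", 42)]
print(len(rft_collect(model=None, dataset=data, k=16)))
```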
Integrating large language models and active inference to understand eye movements in reading and dyslexia ; We present a novel computational model employing hierarchical active inference to simulate reading and eye movements. The model characterizes linguistic processing as inference over a hierarchical generative model, facilitating predictions and inferences at various levels of granularity, from syllables to sentences. Our approach combines the strengths of large language models for realistic textual predictions and active inference for guiding eye movements to informative textual information, enabling the testing of predictions. The model exhibits proficiency in reading both known and unknown words and sentences, adhering to the distinction between lexical and nonlexical routes in dualroute theories of reading. Notably, our model permits the exploration of maladaptive inference effects on eye movements during reading, such as in dyslexia. To simulate this condition, we attenuate the contribution of priors during the reading process, leading to incorrect inferences and a more fragmented reading style, characterized by a greater number of shorter saccades. This alignment with empirical findings regarding eye movements in dyslexic individuals highlights the model's potential to aid in understanding the cognitive processes underlying reading and eye movements, as well as how reading deficits associated with dyslexia may emerge from maladaptive predictive processing. In summary, our model represents a significant advancement in comprehending the intricate cognitive processes involved in reading and eye movements, with potential implications for understanding and addressing dyslexia through the simulation of maladaptive inference. It may offer valuable insights into this condition and contribute to the development of more effective interventions for treatment.
Introducing Hybrid Modeling with TimeseriesTransformers A Comparative Study of Series and Parallel Approach in Batch Crystallization ; Most existing digital twins rely on datadriven blackbox models, predominantly using deep, recurrent, and convolutional neural networks DNNs, RNNs, and CNNs to capture the dynamics of chemical systems. However, these models have not seen the light of day, given the hesitance to directly deploy a blackbox tool in practice due to safety and operational issues. To tackle this conundrum, hybrid models combining firstprinciples physicsbased dynamics with machine learning ML models have increased in popularity as they are considered a 'best of both worlds' approach. That said, existing simple DNN models are not adept at longterm timeseries predictions and at utilizing contextual information on the trajectory of the process dynamics. Recently, attentionbased timeseries transformers TSTs that leverage the multiheaded attention mechanism and positional encoding to capture longterm and shortterm changes in process states have shown high predictive performance. Thus, a firstofakind, TSTbased hybrid framework has been developed for batch crystallization, demonstrating improved accuracy and interpretability compared to traditional blackbox models. Specifically, two different configurations (i.e., series and parallel) of TSTbased hybrid models are constructed and compared, which show a normalized mean square error NMSE in the range of [10, 50] x 10^-4 and an R2 value over 0.99. Given the growing adoption of digital twins, nextgeneration attentionbased hybrid models are expected to play a crucial role in shaping the future of chemical manufacturing.
Probabilistic Neural Transfer Function Estimation with Bayesian System Identification ; Neural population responses in sensory systems are driven by external physical stimuli. This stimulusresponse relationship is typically characterized by receptive fields, which have been estimated by neural system identification approaches. Such models usually require a large amount of training data, yet the recording time for animal experiments is limited, giving rise to epistemic uncertainty for the learned neural transfer functions. While deep neural network models have demonstrated excellent power on neural prediction, they usually do not provide the uncertainty of the resulting neural representations and derived statistics, such as the stimuli driving neurons optimally, from in silico experiments. Here, we present a Bayesian system identification approach to predict neural responses to visual stimuli, and explore whether explicitly modeling network weight variability can be beneficial for identifying neural response properties. To this end, we use variational inference to estimate the posterior distribution of each model weight given the training data. Tests with different neural datasets demonstrate that this method can achieve higher or comparable performance on neural prediction, with much higher data efficiency compared to Monte Carlo dropout methods and traditional models using point estimates of the model parameters. Furthermore, our approach enables identifying response properties with credible intervals and performing statistical tests for the learned neural features, which avoids the idiosyncrasy of a single model. Finally, in silico experiments show that our model generates stimuli that drive neuronal activity significantly better than traditional models, particularly in the limiteddata regime.
Broadband multiwavelength study of LHAASO detected AGN ; Recently, the Large High Altitude Air Shower Observatory LHAASO collaboration presented its first catalog of gammaray sources, based on 508 days of LHAASO data taken from March 2021 to September 2022. This catalog contains five active galactic nuclei AGNs, of which four are blazars and one is a LINER-type AGN. In this work, we establish averaged multiwavelength SEDs by combining data from the Fermi Large Area Telescope, Swift, ZTF, and WISE over the same period as the LHAASO detection. In general, these five AGNs are found in low states at all wavelengths. To study the multiwavelength properties of these AGNs, several jet emission models, including the onezone leptonic model, the onezone leptonic and hadronuclear pp model, the onezone protonsynchrotron model, and the spinelayer model, are applied to reproduce their averaged SEDs. We find that the onezone leptonic model can reproduce most of the SEDs, except for the highenergy tail of the LHAASO spectra. To improve the fitting, emission from pp interactions is favoured in the framework of the onezone model. The spinelayer model, which can be treated as a multizone scenario, can also provide good spectral fits. The influence of different extragalactic background light models on fitting the LHAASO energy spectra is also discussed.
A Deep Dive into the Connections Between the Renormalization Group and Deep Learning in the Ising Model ; The renormalization group RG is an essential technique in statistical physics and quantum field theory, which considers scaleinvariant properties of physical theories and how these theories' parameters change with scaling. Deep learning is a powerful computational technique that uses multilayered neural networks to solve a myriad of complicated problems. Previous research suggests the possibility that unsupervised deep learning may be a form of RG flow, by being a layerbylayer coarse graining of the original data. We examined this connection on a more rigorous basis for the simple example of Kadanoff block renormalization of the 2D nearestneighbor Ising model, with our deep learning accomplished via Restricted Boltzmann Machines RBMs. We developed extensive renormalization techniques for the 1D and 2D Ising model to provide a baseline for comparison. For the 1D Ising model, we successfully used Adam optimization on a correlation length loss function to learn the group flow, yielding results consistent with the analytical model for infinite N. For the 2D Ising model, we successfully generated Ising model samples using the Wolff algorithm, and performed the group flow using a quasideterministic method, validating these results by calculating the critical exponent nu. We then examined RBM learning of the Ising model layer by layer, finding a blocking structure in the learning that is qualitatively similar to RG. Lastly, we directly compared the weights of each layer from the learning to Ising spin renormalization, but found quantitative inconsistencies for the simple case of nearestneighbor Ising models.
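Block renormalization of the kind referenced above can be illustrated in a few lines: 2x2 blocks of spins are coarse-grained by majority vote, with ties broken at random. This is a generic Kadanoff block-spin step applied to a random configuration, not the authors' Wolff-sampled pipeline.

```python
# Illustrative Kadanoff block-spin step for a 2D Ising configuration (majority rule,
# random tie-breaking is one common convention).
import numpy as np

def block_spin(config, b=2, rng=None):
    rng = rng or np.random.default_rng()
    L = config.shape[0]
    assert L % b == 0
    blocks = config.reshape(L // b, b, L // b, b).sum(axis=(1, 3))   # sum of spins per block
    coarse = np.sign(blocks)
    ties = coarse == 0
    coarse[ties] = rng.choice([-1, 1], size=ties.sum())              # random tie-breaking
    return coarse.astype(int)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))     # stand-in for a Wolff-sampled configuration
print(block_spin(spins, rng=rng).shape)        # (16, 16) coarse-grained lattice
```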
Identification and validation of periodic autoregressive model with additive noise finitevariance case ; In this paper, we address the problem of modeling data with periodic autoregressive PAR time series and additive noise. In most cases, the data are processed assuming a noisefree model i.e., without additive noise, which is not a realistic assumption in real life. The first two steps in PAR model identification are order selection and period estimation, so the main focus is on these issues. Finally, the model should be validated, so a procedure for analyzing the residuals, which are considered here as multidimensional vectors, is proposed. Both order and period selection, as well as model validation, are addressed by using the characteristic function CF of the residual series. The CF is used to obtain the probability density function, which is utilized in the information criterion and for residuals distribution testing. To complete the PAR model analysis, the procedure for estimating the coefficients is necessary. However, this issue is only mentioned here as it is a separate task under consideration in parallel. The presented methodology can be considered as the general framework for analyzing data with periodically nonstationary characteristics disturbed by finitevariance external noise. The original contribution is in the selection of the optimal model order and period identification, as well as the analysis of residuals. All these findings have been inspired by our previous work on machine condition monitoring that used PAR modeling
Towards an Ondevice Agent for Text Rewriting ; Large Language Models LLMs have demonstrated impressive capabilities for text rewriting. Nonetheless, the large sizes of these models make them impractical for ondevice inference, which would otherwise allow for enhanced privacy and economical inference. Creating a smaller yet potent language model for text rewriting presents a formidable challenge because it requires balancing the need for a small size with the need to retain the emergent capabilities of the LLM, that requires costly data collection. To address the above challenge, we introduce a new instruction tuning approach for building a mobilecentric text rewriting model. Our strategies enable the generation of high quality training data without any human labeling. In addition, we propose a heuristic reinforcement learning framework which substantially enhances performance without requiring preference data. To further bridge the performance gap with the larger serverside model, we propose an effective approach that combines the mobile rewrite agent with the server model using a cascade. To tailor the text rewriting tasks to mobile scenarios, we introduce MessageRewriteEval, a benchmark that focuses on text rewriting for messages through natural language instructions. Through empirical experiments, we demonstrate that our ondevice model surpasses the current stateoftheart LLMs in text rewriting while maintaining a significantly reduced model size. Notably, we show that our proposed cascading approach improves model performance.
Continuation of fixed points and bifurcations from ODE to flowkick disturbance models ; Some ODE models treat ecological disturbance as a continuous process, even disturbances such as fire that occur almost instantaneously on the timescale of system recovery. Alternatively, flowkick models resolve disturbances as discrete impulses that change an ODE system's state periodically in time. Here we compare the dynamics of continuously disturbed ODE models to those of flowkick models with the same average disturbance rate. In the case that kicks are small and highfrequency, we find multiple similarities between continuous and analogous discrete disturbance models. First, we prove that flowkick maps generate an analogous vector field in the limit as the period between kicks approaches zero. Second, we present conditions under which equilibria, saddlenode bifurcations, and transcritical bifurcations continue from ODE to flowkick systems. On the other hand, we also provide numerical evidence that similarities between continuous and discrete disturbance models can break down as the period between kicks grows. We illustrate implications of these differences for climate change in a nonspatial Klausmeier model of vegetation and precipitation dynamics. We conclude that although ODEs may suffice to model highfrequency disturbances, resolving lowerfrequency disturbances in time may be essential to effectively predicting their effects.
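The flow-kick construction can be sketched directly: flow the ODE for a period tau, then apply an instantaneous kick carrying the same average disturbance rate, and compare the resulting orbit with the continuously disturbed ODE. The logistic-growth system below is an illustrative stand-in, not the Klausmeier model from the paper.

```python
# Sketch comparing a flow-kick model with its continuous-disturbance counterpart.
import numpy as np
from scipy.integrate import solve_ivp

r, K, d = 1.0, 10.0, 0.5            # growth rate, carrying capacity, average disturbance rate

def flow(x0, tau):
    sol = solve_ivp(lambda t, x: r * x * (1 - x / K), (0.0, tau), [x0], rtol=1e-8)
    return sol.y[0, -1]

def flow_kick_orbit(x0, tau, n_steps):
    x = x0
    for _ in range(n_steps):
        x = max(flow(x, tau) - tau * d, 0.0)    # kick carries the same average disturbance
    return x

def continuous_equilibrium():
    sol = solve_ivp(lambda t, x: r * x * (1 - x / K) - d, (0.0, 50.0), [5.0], rtol=1e-8)
    return sol.y[0, -1]

for tau in (0.05, 0.5, 2.0):
    print(tau, flow_kick_orbit(5.0, tau, n_steps=int(50 / tau)), continuous_equilibrium())
# Small tau reproduces the continuously disturbed equilibrium; large tau can drift away.
```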
Double Probability Integral Transform Residuals for Regression Models with Discrete Outcomes ; The assessment of regression models with discrete outcomes is challenging and has many fundamental issues. With discrete outcomes, standard regression model assessment tools such as Pearson and deviance residuals do not follow the conventional reference distribution (normal) under the true model, calling into question the legitimacy of model assessment based on these tools. To fill this gap, we construct a new type of residual for general discrete outcomes, including ordinal and count outcomes. The proposed residuals are based on two layers of probability integral transformation. When at least one continuous covariate is available, the proposed residuals closely follow a uniform distribution (a normal distribution after transformation) under the correctly specified model. One can construct visualizations such as QQ plots to check the overall fit of a model straightforwardly, and the shape of QQ plots can further help identify possible causes of misspecification such as overdispersion. We provide theoretical justification for the proposed residuals by establishing their asymptotic properties. Moreover, in order to assess the mean structure and identify potential covariates, we develop an ordered curve as a supplementary tool, which is based on the comparison between the partial sums of outcomes and of fitted means. Through simulation, we demonstrate empirically that the proposed tools outperform commonly used residuals for various model assessment tasks. We also illustrate the workflow of model assessment using the proposed tools in data analysis.
Neural Video Compression with Temporal LayerAdaptive Hierarchical Bframe Coding ; Neural video compression NVC is a rapidly evolving video coding research area, with some models achieving superior coding efficiency compared to the latest video coding standard Versatile Video Coding VVC. In conventional video coding standards, the hierarchical Bframe coding, which utilizes a bidirectional prediction structure for higher compression, has been wellstudied and exploited. In NVC, however, limited research has investigated the hierarchical B scheme. In this paper, we propose an NVC model exploiting hierarchical Bframe coding with temporal layeradaptive optimization. We first extend an existing unidirectional NVC model to a bidirectional model, which achieves a 21.13% BDrate gain over the unidirectional baseline model. However, this model faces challenges when applied to sequences with complex or large motions, leading to performance degradation. To address this, we introduce temporal layeradaptive optimization, incorporating methods such as temporal layeradaptive quality scaling TAQS and temporal layeradaptive latent scaling TALS. The final model with the proposed methods achieves an impressive BDrate gain of 39.86% against the baseline. It also resolves the challenges in sequences with large or complex motions, with up to 49.13% more BDrate gain than the simple bidirectional extension. This improvement is attributed to the allocation of more bits to lower temporal layers, thereby enhancing overall reconstruction quality with fewer bits. Since our method has little dependency on a specific NVC model architecture, it can serve as a general tool for extending unidirectional NVC models to ones with hierarchical Bframe coding.
Advanced Deep Regression Models for Forecasting Time Series Oil Production ; Global oil demand is rapidly increasing and is expected to reach 106.3 million barrels per day by 2040. Thus, it is vital for hydrocarbon extraction industries to forecast their production to optimize their operations and avoid losses. Big companies have realized that exploiting the power of deep learning DL and the massive amount of data from various oil wells for this purpose can save a lot of operational costs and reduce unwanted environmental impacts. In this direction, researchers have proposed models using conventional machine learning ML techniques for oil production forecasting. However, these techniques are inappropriate for this problem as they cannot capture historical patterns found in time series data, resulting in inaccurate predictions. This research aims to overcome these issues by developing advanced datadriven regression models using sequential convolutions and long shortterm memory LSTM units. Exhaustive analyses are conducted to select the optimal sequence length, model hyperparameters, and crosswell dataset formation to build highly generalized robust models. A comprehensive experimental study on Volve oilfield data validates the proposed models. It reveals that the LSTMbased sequence learning model can predict oil production better than the 1D convolutional neural network CNN, with a mean absolute error MAE and R2 score of 111.16 and 0.98, respectively. It is also found that the LSTMbased model performs better than all the existing stateoftheart solutions and achieves a 37% improvement compared to a standard linear regression, which is considered the baseline model in this work.
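A generic version of the sequence-learning setup is sketched below: sliding windows over a production series feed an LSTM that regresses the next value under an MAE loss. The synthetic series and hyperparameters are placeholders, not the tuned Volve configuration.

```python
# Hedged sketch of an LSTM-based next-step forecaster on sliding windows.
import torch
import torch.nn as nn

def make_windows(series, seq_len):
    xs = torch.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    ys = series[seq_len:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)    # (N, seq_len, 1) inputs, (N, 1) targets

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # regress from the last hidden state

series = torch.sin(torch.linspace(0, 20, 500))   # stand-in for a daily production series
X, y = make_windows(series, seq_len=30)
model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                            # MAE, matching the metric reported above
for _ in range(5):                               # a few illustrative epochs
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(loss.item())
```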
Accelerating LSTMbased HighRate Dynamic System Models ; In this paper, we evaluate the use of a trained Long ShortTerm Memory LSTM network as a surrogate for an EulerBernoulli beam model, and then we describe and characterize an FPGAbased deployment of the model for use in realtime structural health monitoring applications. The focus of our efforts is the DROPBEAR Dynamic Reproduction of Projectiles in Ballistic Environments for Advanced Research dataset, which was generated as a benchmark for the study of realtime structural modeling applications. The purpose of DROPBEAR is to evaluate models that take vibration data as input and give the initial conditions of the cantilever beam on which the measurements were taken as output. DROPBEAR is meant to serve as an exemplar for emerging highrate active structures that can be actively controlled with feedback latencies of less than one microsecond. Although the EulerBernoulli beam model is a wellknown solution to this modeling problem, its computational cost is prohibitive for the time scales of interest. It has been previously shown that a properly structured LSTM network can achieve comparable accuracy with less workload, but achieving submicrosecond model latency remains a challenge. Our approach is to deploy the LSTM, optimized specifically for latency, on an FPGA. We designed the model using both highlevel synthesis HLS and hardware description language HDL. The lowest latency of 1.42 microseconds and the highest throughput of 7.87 Gops/s were achieved on the Alveo U55C platform for the HDL design.
Uncertainty in AI Evaluating Deep Neural Networks on OutofDistribution Images ; As AI models are increasingly deployed in critical applications, ensuring the consistent performance of models when exposed to unusual situations such as outofdistribution OOD or perturbed data, is important. Therefore, this paper investigates the uncertainty of various deep neural networks, including ResNet50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with such data. Our approach includes three experiments. First, we used the pretrained models to classify OOD images generated via DALLE to assess their performance. Second, we built an ensemble from the models' predictions using probabilistic averaging for consensus due to its advantages over plurality or majority voting. The ensemble's uncertainty was quantified using average probabilities, variance, and entropy metrics. Our results showed that while ResNet50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images. Third, we tested model robustness by adding perturbations filters, rotations, etc. to new epistemic images from DALLE or realworld captures. ResNet50 was chosen for this being the best performing model. While it classified 4 out of 5 unperturbed images correctly, it misclassified all of them postperturbation, indicating a significant vulnerability. These misclassifications, which are clear to human observers, highlight AI models' limitations. Using saliency maps, we identified regions of the images that the model considered important for their decisions.
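The ensemble construction and uncertainty metrics described above amount to a few lines: average the member softmax outputs (probabilistic averaging), then report predictive entropy and the variance across members. Random probabilities stand in for the five pretrained networks.

```python
# Hedged sketch of probabilistic averaging and simple uncertainty metrics for an ensemble.
import numpy as np

def ensemble_uncertainty(prob_stack):
    """prob_stack: (n_models, n_classes) softmax outputs for one image."""
    mean_probs = prob_stack.mean(axis=0)                       # probabilistic averaging
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    variance = prob_stack.var(axis=0).mean()                   # spread across the models
    return mean_probs.argmax(), mean_probs, entropy, variance

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 1000))                            # 5 models, 1000 ImageNet classes
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pred, mean_probs, entropy, variance = ensemble_uncertainty(probs)
print(pred, round(entropy, 3), variance)
```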
Priority Queue Formulation of AgentBased Bathtub Model for Network Trip Flows in the Relative Space ; Agentbased models have been extensively used to simulate the behavior of travelers in transportation systems because they allow for realistic and versatile modeling of interactions. However, traditional agentbased models suffer from high computational costs and rely on tracking physical locations, raising privacy concerns. This paper proposes an efficient formulation for the agentbased bathtub model AB2M in the relative space, where each agent's trajectory is represented by a time series of the remaining distance to its destination. The AB2M can be understood as a microscopic model that tracks individual trips' initiation, progression, and completion and is an exact numerical solution of the bathtub model for generic timedependent trip distance distributions. The model can be solved for a deterministic set of trips with a given demand pattern defined by the start time of each trip and its distance, or it can be used to run Monte Carlo simulations to capture the average behavior and variation of stochastic demand patterns, described by probabilistic distributions of trip distances and departure times. To enhance the computational efficiency, we introduce a priority queue formulation, eliminating the need to update trip positions at each time step and allowing us to run largescale scenarios with millions of individual trips in seconds. We systematically explore the scaling properties and discuss the introduction of biases and numerical errors. The systematic exploration of scaling properties of the modeling of individual agents in the relative space with the AB2M further enhances its applicability to largescale transportation systems and opens up opportunities for studying travel time reliability, scheduling, and mode choices.
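The priority-queue formulation can be sketched as an event-driven loop: because every active trip moves at the same network speed, only the cumulative travelled distance needs updating, and a min-heap of finish thresholds pops completed trips without touching the rest. The speed law and demand below are illustrative, not calibrated values from the paper.

```python
# Minimal event-driven sketch of a bathtub model in the relative space with a priority queue.
import heapq

def v(n, v_free=13.9, n_jam=5000.0):
    return max(v_free * (1.0 - n / n_jam), 0.1)   # network speed falls as more trips are active

def simulate(trips):
    """trips: list of (departure_time [s], trip_length [m]), sorted by departure time."""
    t, D, finish_heap, completions, i = 0.0, 0.0, [], [], 0
    while i < len(trips) or finish_heap:
        speed = v(len(finish_heap))
        t_next_dep = trips[i][0] if i < len(trips) else float("inf")
        t_next_fin = t + (finish_heap[0] - D) / speed if finish_heap else float("inf")
        if t_next_dep <= t_next_fin:              # next event: a trip enters the network
            D += speed * (t_next_dep - t)
            t = t_next_dep
            heapq.heappush(finish_heap, D + trips[i][1])   # finish threshold in relative space
            i += 1
        else:                                     # next event: the shortest remaining trip ends
            D, t = finish_heap[0], t_next_fin
            heapq.heappop(finish_heap)
            completions.append(t)
    return completions

trips = [(10.0 * k, 2000.0 + 5.0 * k) for k in range(1000)]
done = simulate(trips)
print(len(done), round(done[-1], 1))
```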
Inferring effective couplings with Restricted Boltzmann Machines ; Generative models offer a direct way to model complex data. Among them, energybased models provide us with a neural network model that aims to accurately reproduce all statistical correlations observed in the data at the level of the Boltzmann weight of the model. However, one challenge is to understand the physical interpretation of such models. In this study, we propose a simple solution by implementing a direct mapping between the energy function of the Restricted Boltzmann Machine and an effective Ising spin Hamiltonian that includes highorder interactions between spins. This mapping includes interactions of all possible orders, going beyond the conventional pairwise interactions typically considered in the inverse Ising approach, and allowing the description of complex datasets. Earlier works attempted to achieve this goal, but the proposed mappings did not properly treat the complexity of the problem or did not contain direct prescriptions for practical application. To validate our method, we performed several controlled numerical experiments in which we trained RBMs using equilibrium samples of predefined models containing local external fields, twobody, and threebody interactions in various lowdimensional topologies. The results demonstrate the effectiveness of our proposed approach in learning the correct interaction network and pave the way for its application in modeling interesting datasets. We also evaluate the quality of the inferred model based on different training methods.
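For a small system, an RBM-to-Ising mapping of the kind described above can be illustrated by brute force: the marginal energy over visible spins is projected onto the orthogonal basis of spin products, which yields effective couplings of every order. This exhaustive projection is for illustration only; it scales exponentially with system size and is not the paper's practical prescription.

```python
# Illustration: project the RBM marginal energy E(s) = -b.s - sum_a log(2cosh(c_a + (W^T s)_a))
# over +/-1 visible spins onto spin-product basis functions to read off effective couplings.
import itertools
import numpy as np

def effective_energy(s, W, b, c):
    return -s @ b - np.sum(np.log(2.0 * np.cosh(c + s @ W)))

def couplings(W, b, c, max_order=3):
    N = W.shape[0]
    states = np.array(list(itertools.product([-1, 1], repeat=N)))
    E = np.array([effective_energy(s, W, b, c) for s in states])
    J = {}
    for order in range(1, max_order + 1):
        for subset in itertools.combinations(range(N), order):
            basis = states[:, subset].prod(axis=1)
            J[subset] = -np.mean(E * basis)   # E(s) = -sum_A J_A prod_{i in A} s_i + const
    return J

rng = np.random.default_rng(0)
N_vis, N_hid = 6, 3
W, b, c = 0.3 * rng.normal(size=(N_vis, N_hid)), 0.1 * rng.normal(size=N_vis), np.zeros(N_hid)
J = couplings(W, b, c)
print(J[(0, 1)], J[(0, 1, 2)])                # inferred two- and three-body couplings
```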
RecurrenceFree Survival Prediction for Anal Squamous Cell Carcinoma Chemoradiotherapy using Planning CTbased Radiomics Model ; Objectives Approximately 30% of nonmetastatic anal squamous cell carcinoma ASCC patients will experience recurrence after chemoradiotherapy CRT, and currently available clinical variables are poor predictors of treatment response. We aimed to develop a model leveraging information extracted from radiation pretreatment planning CT to predict recurrencefree survival RFS in ASCC patients after CRT. Methods Radiomics features were extracted from planning CT images of 96 ASCC patients. Following prefeature selection, the optimal feature set was selected via stepforward feature selection with a multivariate Cox proportional hazard model. The RFS prediction was generated from a radiomicsclinical combined model based on the optimal feature set with five repeats of fivefold cross validation. The risk stratification ability of the proposed model was evaluated with KaplanMeier analysis. Results Shape and texturebased radiomics features significantly predicted RFS. Compared to a clinicalonly model, the radiomicsclinical combined model achieved better performance in the testing cohort, with a higher Cindex (0.80 vs 0.73) and higher AUC (0.84 vs 0.79 for 1year RFS, 0.84 vs 0.78 for 2year RFS, and 0.86 vs 0.83 for 3year RFS), leading to distinct highrisk and lowrisk recurrence groups (p < 0.001). Conclusions A treatment planning CT based radiomics and clinical combined model had improved prognostic performance in predicting RFS for ASCC patients treated with CRT as compared to a model using clinical features only.
Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models ; In the context of kernel machines, polynomial and Fourier features are commonly used to provide a nonlinear extension to linear models by mapping the data to a higherdimensional space. Unless one considers the dual formulation of the learning problem, which renders exact largescale learning unfeasible, the exponential increase of model parameters in the dimensionality of the data caused by their tensorproduct structure prohibits tackling highdimensional problems. One of the possible approaches to circumvent this exponential scaling is to exploit the tensor structure present in the features by constraining the model weights to be an underparametrized tensor network. In this paper we quantize, i.e. further tensorize, polynomial and Fourier features. Based on this feature quantization we propose to quantize the associated model weights, yielding quantized models. We show that, for the same number of model parameters, the resulting quantized models have a higher bound on the VCdimension than their nonquantized counterparts, at no additional computational cost while learning from identical features. We verify experimentally how this additional tensorization regularizes the learning problem by prioritizing the most salient features in the data and how it provides models with increased generalization capabilities. We finally benchmark our approach on a large regression task, achieving stateoftheart results on a laptop computer.
On the uses and abuses of regression models a call for reform of statistical practice and teaching ; When students and users of statistical methods first learn about regression analysis there is an emphasis on the technical details of models and estimation methods that invariably runs ahead of the purposes for which these models might be used. More broadly, statistics is widely understood to provide a body of techniques for modelling data, underpinned by what we describe as the true model myth, according to which the task of the statisticiandata analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective leads to a range of problems in the application of regression methods, including misguided adjustment for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline an alternative approach to the teaching and application of regression methods, which begins by focussing on clear definition of the substantive research question within one of three distinct types descriptive, predictive, or causal. The simple univariable regression model may be introduced as a tool for description, while the development and application of multivariable regression models should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of input variables, but their conceptualisation and usage should follow from the purpose at hand.
Current Observational constraints on Hybrid potential scalar field cosmological model in Lyra Geometry ; In the current study, we investigate a scalar field cosmological model with Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, we presupposed a variable displacement vector as an element of Lyra's geometry. In the context of the conventional theory of gravity, we suggest a suitable parameterization of the scalar field's dark energy density as a hyperbolic function of redshift z, confirming the essential transition behavior of the universe from a decelerating era to the present accelerated scenario. We present constraints on model parameters using the most recent observational data sets from OHD, BAO/CMB, and Pantheon, based on Markov Chain Monte Carlo MCMC analysis. For the proposed model, the best estimated values of the parameters for the combined dataset OHD, BAO/CMB, and Pantheon are H0 = 71.15 ± 0.26 km/s/Mpc, Omega_m0 = 0.2625 ± 0.0024, Omega_phi0 = 0.676 ± 0.038, alpha = 0.22 ± 0.13, n = 0.096 ± 0.079, and k = 0.38 ± 0.32. The model exhibits a flipping nature, and the redshift transition occurs at z_t = 0.756 (+0.005, -0.015). The current value of the deceleration parameter for the proposed model is q0 = -0.625 (+0.067, -0.085) for the combined dataset. Some dynamical properties of the model, such as the energy density rho_phi, scalar field pressure p_phi, EoS parameter of the scalar field omega_phi, and effective EoS parameter omega_eff, are analyzed and presented. Further, we have also examined the statefinder diagnostic and jerk parameters of the derived model. The total density parameter for the derived model is found to be 1, which is in good agreement with recent standard findings.
A Paradigm Shift in Machine Translation Boosting Translation Performance of Large Language Models ; Generative Large Language Models LLMs have achieved remarkable advancements in various NLP tasks. However, these advances have not been reflected in the translation task, especially those with moderate model sizes i.e., 7B or 13B parameters, which still lag behind conventional supervised encoderdecoder translation models. Previous studies have attempted to improve the translation capabilities of these moderate LLMs, but their gains have been limited. In this study, we propose a novel finetuning approach for LLMs that is specifically designed for the translation task, eliminating the need for the abundant parallel data that traditional translation models usually depend on. Our approach consists of two finetuning stages initial finetuning on monolingual data followed by subsequent finetuning on a small set of highquality parallel data. We introduce the LLM developed through this strategy as Advanced Language Modelbased trAnslator ALMA. Based on LLaMA2 as our underlying model, our results show that the model can achieve an average improvement of more than 12 BLEU and 12 COMET over its zeroshot performance across 10 translation directions from the WMT'21 2 directions and WMT'22 8 directions test datasets. The performance is significantly better than all prior work and even superior to the NLLB54B model and GPT3.5textdavinci003, with only 7B or 13B parameters. This method establishes the foundation for a novel training paradigm in machine translation.
Regionally Additive Models Explainablebydesign models minimizing feature interactions ; Generalized Additive Models GAMs are widely used explainablebydesign models in various applications. GAMs assume that the output can be represented as a sum of univariate functions, referred to as components. However, this assumption fails in ML problems where the output depends on multiple features simultaneously. In these cases, GAMs fail to capture the interaction terms of the underlying function, leading to subpar accuracy. To partially address this issue, we propose Regionally Additive Models RAMs, a novel class of explainablebydesign models. RAMs identify subregions within the feature space where interactions are minimized. Within these regions, it is more accurate to express the output as a sum of univariate functions components. Consequently, RAMs fit one component per subregion of each feature instead of one component per feature. This approach yields a more expressive model compared to GAMs while retaining interpretability. The RAM framework consists of three steps. Firstly, we train a blackbox model. Secondly, using Regional Effect Plots, we identify subregions where the blackbox model exhibits nearlocal additivity. Lastly, we fit a GAM component for each identified subregion. We validate the effectiveness of RAMs through experiments on both synthetic and realworld datasets. The results confirm that RAMs offer improved expressiveness compared to GAMs while maintaining interpretability.
JWST early Universe observations and LambdaCDM cosmology ; Deep space observations of the James Webb Space Telescope JWST have revealed that the structure and masses of very early Universe galaxies at high redshifts (z ~ 15), existing at ~0.3 Gyr after the Big Bang, may be as evolved as galaxies in existence for 10 Gyr. The JWST findings are thus in strong tension with the LambdaCDM cosmological model. While tired light TL models have been shown to comply with the JWST angular galaxy size data, they cannot satisfactorily explain the isotropy of the cosmic microwave background CMB observations or fit the supernovae distance modulus vs. redshift data well. We have developed hybrid models that include the tired light concept in the expanding universe. The hybrid LambdaCDM model fits the supernovae type 1a data well but not the JWST observations. We present a model with covarying coupling constants CCC, starting from the modified FLRW metric and the resulting Einstein and Friedmann equations, and a CCCTL hybrid model. They fit the Pantheon data admirably, and the CCCTL model is compliant with the JWST observations. It stretches the age of the universe to 26.7 Gyr, with 5.8 Gyr at z = 10 and 3.5 Gyr at z = 20, giving enough time to form massive galaxies. It thus resolves the 'impossible early galaxy' problem without requiring the existence of primordial black hole seeds or a modified power spectrum, rapid formation of massive Population III stars, or super-Eddington accretion rates. One could infer the CCC model as an extension of the LambdaCDM model with a dynamic cosmological constant.
Data Upcycling Knowledge Distillation for Image SuperResolution ; Knowledge distillation KD emerges as a challenging yet promising technique for compressing deep learning models, characterized by the transmission of extensive learning representations from proficient and computationally intensive teacher models to compact student models. However, only a handful of studies have endeavored to compress the models for single image superresolution SISR through KD, with their effects on student model enhancement remaining marginal. In this paper, we put forth an approach from the perspective of efficient data utilization, namely, the Data Upcycling Knowledge Distillation DUKD, which facilitates the student model with the teacher's prior knowledge via upcycled indomain data derived from the training inputs. This upcycling process is realized through two efficient image zooming operations and invertible data augmentations, which introduce label consistency regularization to the field of KD for SISR and substantially boost the student model's generalization. The DUKD, due to its versatility, can be applied across a broad spectrum of teacherstudent architectures. Comprehensive experiments across diverse benchmarks demonstrate that our proposed DUKD method significantly outperforms previous art, exemplified by an increase of up to 0.5 dB in PSNR over baseline methods and by a 67% parameterreduced RCAN student model whose performance remains on par with that of the RCAN teacher model.
Sensitivity Analysis of SimulationBased Inference for Galaxy Clustering ; Simulationbased inference SBI is a promising approach to leverage high fidelity cosmological simulations and extract information from the nonGaussian, nonlinear scales that cannot be modeled analytically. However, scaling SBI to the next generation of cosmological surveys faces the computational challenge of requiring a large number of accurate simulations over a wide range of cosmologies, while simultaneously encompassing large cosmological volumes at high resolution. This challenge can potentially be mitigated by balancing the accuracy and computational cost for different components of the forward model while ensuring robust inference. To guide our steps in this, we perform a sensitivity analysis of SBI for galaxy clustering on various components of the cosmological simulations: the gravity model, the halofinder, and the galaxyhalo distribution model (halo occupation distribution, HOD). We infer sigma8 and Omegam using galaxy power spectrum multipoles and the bispectrum monopole, assuming a galaxy number density expected from the luminous red galaxies observed by the Dark Energy Spectroscopic Instrument DESI. We find that SBI is insensitive to changing the gravity model between Nbody simulations and particle mesh PM simulations. However, changing the halofinder from friendsoffriends FoF to Rockstar can lead to a biased estimate of sigma8 based on the bispectrum. For galaxy models, training SBI on a more complex HOD leads to consistent inference for less complex HOD models, but SBI trained on simpler HOD models fails when applied to analyze data from a more complex HOD model. Based on our results, we discuss the outlook on cosmological simulations with a focus on applying SBI approaches to future galaxy surveys.
Solar Wind with Field Lines and Energetic Particles SOFIE Model Application to Historical Solar Energetic Particle Events ; In this paper, we demonstrate the applicability of the datadriven and selfconsistent solar energetic particle model, Solarwind with FIeldlines and Energeticparticles SOFIE, to simulate acceleration and transport processes of solar energetic particles. The SOFIE model is built upon the Space Weather Modeling Framework SWMF developed at the University of Michigan. In SOFIE, the background solar wind plasma in the solar corona and interplanetary space is calculated by the Alfven Wave Solaratmosphere ModelRealtime AWSoMR, driven by the nearrealtime hourly updated Global Oscillation Network Group GONG solar magnetograms. In the background solar wind, coronal mass ejections CMEs are launched by placing an imbalanced magnetic flux rope on top of the parent active region, using the Eruptive Event Generator using GibsonLow model EEGGL. The acceleration and transport processes are modeled by the MultipleFieldLine Advection Model for Particle Acceleration MFLAMPA. In this work, nine solar energetic particle events (Solar Heliospheric and INterplanetary Environment, SHINE, challengecampaign events) are modeled. The three modules in SOFIE are validated and evaluated by comparing with observations, including the steadystate background solar wind properties, the whitelight image of the CME, and the flux of solar energetic protons at energies of 10 MeV.
nnSAM Plugandplay Segment Anything Model Improves nnUNet Performance ; The recent developments of foundation models in computer vision, especially the Segment Anything Model SAM, allow scalable and domainagnostic image segmentation to serve as a generalpurpose segmentation tool. In parallel, the field of medical image segmentation has benefited significantly from specialized neural networks like the nnUNet, which is trained on domainspecific datasets and can automatically configure the network to tailor itself to specific segmentation challenges. To combine the advantages of foundation models and domainspecific models, we present nnSAM, which synergistically integrates the SAM model with the nnUNet model to achieve more accurate and robust medical image segmentation. The nnSAM model leverages the powerful and robust feature extraction capabilities of SAM, while harnessing the automatic configuration capabilities of nnUNet to promote datasettailored learning. Our comprehensive evaluation of the nnSAM model on different sizes of training samples shows that it allows fewshot learning, which is highly relevant for medical image segmentation where highquality annotated data can be scarce and costly to obtain. By melding the strengths of both its predecessors, nnSAM positions itself as a potential new benchmark in medical image segmentation, offering a tool that combines broad applicability with specialized efficiency. The code is available at httpsgithub.comKent0nLiMedicalImageSegmentation.
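The plug-and-play pattern can be sketched schematically as follows: a frozen, general-purpose encoder supplies domain-agnostic features that are concatenated with features from a trainable, dataset-tailored branch before decoding. The classes below are placeholders standing in for SAM's image encoder and the nnUNet encoder/decoder; this is an illustration of the fusion idea under those assumptions, not the released nnSAM code.

```python
# Schematic "frozen foundation encoder + trainable task branch" fusion.
import torch
import torch.nn as nn

class PlugAndPlaySeg(nn.Module):
    def __init__(self, frozen_encoder: nn.Module, task_encoder: nn.Module,
                 decoder: nn.Module):
        super().__init__()
        self.frozen_encoder = frozen_encoder      # pretrained, domain-agnostic features (kept frozen)
        for p in self.frozen_encoder.parameters():
            p.requires_grad = False
        self.task_encoder = task_encoder          # dataset-tailored, trainable branch
        self.decoder = decoder                    # predicts the segmentation map

    def forward(self, x):
        with torch.no_grad():
            f_general = self.frozen_encoder(x)    # foundation-model features
        f_task = self.task_encoder(x)             # task-specific features
        # Simple channel-wise fusion, assuming matching spatial resolutions.
        fused = torch.cat([f_general, f_task], dim=1)
        return self.decoder(fused)
```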
Semantic Code Graph an information model to facilitate software comprehension ; Software comprehension can be extremely timeconsuming due to the evergrowing size of codebases. Consequently, there is an increasing need to accelerate the code comprehension process to facilitate maintenance and reduce associated costs. A crucial aspect of this process is understanding and preserving the high quality of the code dependency structure. While a variety of code structure models already exist, there is a surprising lack of models that closely represent the source code and focus on software comprehension. As a result, there are no readily available and easytouse tools to assist with dependency comprehension, refactoring, and quality monitoring of code. To address this gap, we propose the Semantic Code Graph SCG, an information model that offers a detailed abstract representation of code dependencies with a close relationship to the source code. To validate the SCG model's usefulness in software comprehension, we compare it to nine other source code representation models. Additionally, we select 11 wellknown and widelyused opensource projects developed in Java and Scala and perform a range of software comprehension activities on them using three different code representation models the proposed SCG, the Call Graph CG, and the Class Collaboration Network CCN. We then qualitatively analyze the results to compare the performance of these models in terms of software comprehension capabilities. These activities encompass project structure comprehension, identifying critical project entities, interactive visualization of code dependencies, and uncovering code similarities through software mining. Our findings demonstrate that the SCG enhances software comprehension capabilities compared to the prevailing CCN and CG models. We believe that the work described is a step towards the next generation of tools that streamline code dependency comprehension and management.
Lectures on Modern Cosmology and Structure Formation ; The focus of these lectures is the challenge of explaining the origin of structure in the Universe. The interplay between quantum field theory and classical general relativity has given rise to several interesting cosmological models which contain mechanisms for generating density inhomogeneities. The three theories discussed here are the inflationary Universe, the cosmic string, and the global texture models. The recent COBE discovery of anisotropies in the microwave background has provided some support for all three models, but the present results do not allow a distinction between them. Statistics which distinguish between the predictions of the three theories are discussed.
Generic Evolution Of Deuterium And Helium3 ; The primordial abundances of deuterium and of helium3, produced during big bang nucleosynthesis, depend sensitively on the baryon density. Thus, the observed abundances of D and 3He may provide useful "baryometers", provided the evolution from primordial to present or presolar nebula abundances is understood. Inevitably, the derivation of primordial from observed abundances requires the intervention of a model for galactic evolution and, so, the inferred primordial abundances are, necessarily, model dependent. Here, an analytic framework for the evolution of D and 3He is presented which is "generic" in the sense that it should describe the results of any specific galactic evolution model. The effective 3He "survival fraction", Gamma_3, is the one free parameter which is model specific. Solar system and interstellar data are used to infer upper and lower bounds on the primordial deuterium mass fraction X_2P as a function of Gamma_3, and these bounds are used to constrain the present baryontophoton ratio eta and baryon density Omega_B. For Gamma_3 >= 1/4 it is found, from D and 3He alone, that 3.1 <= eta_10 <= 9.0 and 0.045 <= Omega_B h_50^2 <= 0.133, where H_0 = 50 h_50 km s^-1 Mpc^-1.
Decaying Vacuum Energy and Deflationary Cosmology in Open and Closed Universes ; We consider a nonsingular deflationary cosmological model with decaying vacuum energy density in universes of arbitrary spatial curvature. Irrespective of the value of k, the models are characterized by an arbitrary time scale H_I^-1 which determines the initial temperature of the universe and the largest value of the vacuum energy density, the slow decay of which generates all the presently observed matterenergy of the universe. If H_I^-1 is of the order of the Planck time, the models begin with the Planck temperature and the present day value of the cosmological constant satisfies Lambda_I/Lambda_0 ~ 10^118, as theoretically suggested. It is also shown that all models allow a density parameter Omega_0 = 2/3 and that the age of the universe is large enough to agree with observations even with the high value of H_0 suggested by recent measurements.
The New Universe Fixed by a Standing Wave Particle Model ; The theoretical properties of black holes BHs and of the universe were derived from a unified relativistic theory based on a generalization of local relativity to nonlocal cases in gravitational fields and on a quantized standing wave particle model that accounts for relativity, quantum mechanics, and the gravitational tests see grqc9509014. They fix an isentropic and conservative steady state that is independent of an eventual expansion of the universe, because matter also expands itself in the same proportion. The new black holes BHs resulting from the linear properties of the model would, after capturing enough radiation, explode. Statistically, matter would evolve indefinitely in rather closed cycles between gas and BH states, and vice versa. The expected astronomical objects and cosmic radiation backgrounds are consistent with the observed facts. This leads to nonconventional models for some celestial objects.
Prospects for Cosmology with Cluster Mass Profiles ; We test the precision with which weak lensing data can provide characteristic cluster mass profiles within Cold Dark Matter scenarios. Using a parallel treecode to simulate volumes as large as 500 h^-1 Mpc with good resolution, we generate samples of large clusters within a standard CDM model and an open CDM model with Omega_0 = 0.3. We mock highquality lensing data by including realistic errors, selecting cluster samples based on velocity dispersion, and fitting profiles within a realistic range in radius. We find that a sample of ten clusters can determine logarithmic profile slopes with 1 sigma errors of about 7%. Increasing the sample size to twenty brings this error down to less than 5%, but this is still insufficient to distinguish the two models. However, measures of cluster profiles obtained with weak lensing do place strong constraints on general CDMlike models of structure formation, and we discuss the optimal strategy for obtaining data samples for this purpose.
On classical anisotropies in models of Open Inflation ; In the simplest model of open inflation there are two inflaton fields decoupled from each other. One of them, the tunneling field, produces a first stage of inflation which prepares the ground for the nucleation of a highly symmetric bubble. The other, a free field, drives a second period of slow roll inflation inside the bubble. However, the second field also evolves during the first stage of inflation, which to some extent breaks the needed symmetry. We show that this generates large supercurvature anisotropies which, together with the results of Tanaka and Sasaki, rule out this class of simple models unless, of course, Omega0 is sufficiently close to one. The problem does not arise in modified models where the second field does not evolve in the first stage of inflation.
On the Problem of Predicting Inflationary Perturbations ; We examine the theoretical foundations of standard methods for computing density perturbations in inflationary models. We find that 1 the timedelay formalism introduced by Guth and Pi, 1982 is only valid when inflation is welldescribed by the de Sitter solution and the equationofstate is nearly unchanging; and, 2 the horizoncrossingBessel approximation extends to nonexponential inflation, but only if the equationofstate is changing slowly. Integration of the gaugeinvariant perturbation equations modebymode is the only method reliable for general models. For models with rapidly varying equationofstate, the correction leads to significantly different predictions for the microwave background anisotropy. An important corollary is that methods proposed for reconstruction of the inflaton potential from anisotropy data are unreliable for general models.
Asymptotic and Exact Solutions of PerfectFluid ScalarTensor Cosmologies ; We present a method which enables exact solutions to be found for flat, homogeneous, and isotropic scalartensor cosmologies with an arbitrary omega(Phi) function, satisfying the general perfect fluid equation of state P = (gamma - 1) rho c^2. This method has been used to analyze a wide range of asymptotic analytical solutions at early and late times for different epochs in the cosmic history: false vacuum inflationary models, vacuum and radiationdominated models, and matterdominated models. We also describe the qualitative behavior of models at intermediate times and give exact solutions at any time for some particular scalartensor theories.
The Role of Heating and Enrichment in Galaxy Formation ; We show that the winds identified with highredshift lowmass galaxies may strongly affect the formation of stars in more massive galaxies that form later. With 3D realizations of a simple linear growth model we track gas shocking, metal enrichment, and cooling, together with dark halo formation. We show that outflows typically strip baryonic material out of collapsing intermediate mass halos, suppressing star formation. More massive halos can trap the heated gas but collapse late, leading to a broad bimodal redshift distribution, with a larger characteristic mass associated with the lower redshift peak. This scenario accounts for the observed bellshaped luminosity function of early type galaxies, explains the small number of Milky Way satellite galaxies relative to Cold Dark Matter model predictions, and provides a possible explanation for the lack of metal poor Gdwarfs in the solar neighborhood and the more general lack of lowmetallicity stars in massive galaxies relative to "closedbox" models of chemical enrichment. Intergalactic medium heating from outflows should produce spectral distortions in the cosmic microwave background that will be measurable with the next generation of experiments.
From Fractal Cosmography to Fractal Cosmology ; Assuming a fractal distribution of matter in the universe, consequences that follow from the General Theory of Relativity and the Copernican Principle for fractal cosmology are examined. The change in perspective necessary to deal with a fractal universe is highlighted. An ansatz that provides a concrete application of the Conditional Cosmological Principle is provided. This fractal cosmology is obtained by arguments closely following those used in standard cosmology. The resulting model may play a significant role in the debate on whether the universe is a fractal or crosses over to homogeneity at some scale. This model may also be regarded as an idealized fractal model around which more realistic models may be built.
General Relativistic Effects on Magnetar Models of AXPs ; General relativistic bending of light dramatically alters the variability of Xray emission originating from the surfaces of ultramagnetic neutron stars. We construct radiative equilibrium models of such strongly magnetic cooling neutron stars with lightelement atmospheres to compute the angle and energydependent intensity emerging from their surfaces and find that the beaming of surface emission is predominantly nonradial. The combination of this radiation pattern with the calculations of light bending yields pulse amplitudes that vary nonmonotonically with the neutron star compactness and the size of the emitting region. The significant suppression of the pulse amplitude for large emitting areas provides very strong constraints on the mechanisms that can simultaneously produce high periodic variability and Xray luminosity. We apply these results to the thermallyemitting magnetar models of anomalous Xray pulsars AXPs, which are bright slowlyrotating Xray sources with large pulse amplitudes. We use the observed fluxes and pulse amplitudes for all known AXPs and show that thermal emission from two antipodal regions on their surfaces, as predicted by some magnetar models, is inconsistent with these observed properties.
Brane universes tested by supernovae ; We discuss observational constraints from type Ia supernovae [Perlmutter99] imposed on the behaviour of RandallSundrum models. In the case of dust matter on the brane, the difference between the bestfit general relativistic model with a Lambda term [Perlmutter99] and the bestfit brane models becomes detectable for redshifts z > 0.6. It is interesting that brane models predict brighter galaxies for such redshifts, which is in agreement with the measurement of the z = 1.7 supernova [Riess01] and with the new data from the HighZ Supernova Search Team [schmit02]. We also demonstrate that a fit to the supernovae data can be obtained if we admit supernegative dark energy, p < -(4/3) varrho, on the brane, where the dark energy in a way mimics the influence of the cosmological constant. It also appears that the dark energy enlarges the age of the universe, which is demanded in cosmology. Finally, we propose to test for dark radiation and the brane tension by applying the minimumvalue test of the angular diameter of galaxies.
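For reference, the background expansion that such supernova tests probe is governed by the standard brane-modified Friedmann equation, quoted here in common notation as a sketch (lambda is the brane tension and the C/a^4 term is the dark radiation mentioned above):

```latex
% Standard Randall-Sundrum brane Friedmann equation and the flat-space
% luminosity distance used to compare with supernova magnitudes.
H^{2} \;=\; \frac{8\pi G}{3}\,\rho\left(1+\frac{\rho}{2\lambda}\right)
      \;+\;\frac{\Lambda_{4}}{3}\;-\;\frac{k}{a^{2}}\;+\;\frac{\mathcal{C}}{a^{4}},
\qquad
d_{L}(z) \;=\; (1+z)\int_{0}^{z}\frac{c\,dz'}{H(z')}\quad(\text{flat case}).
```

The rho^2/(2 lambda) correction and the dark-radiation term are what shift the predicted magnitude-redshift relation away from the general relativistic Lambda model at higher redshift.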
Generalized Chaplygin gas and CMBR constraints ; We study the dependence of the location of the Cosmic Microwave Background Radiation CMBR peaks on the parameters of the generalized Chaplygin gas model, whose equation of state is given by p = -A/rho^alpha, where A is a positive constant and 0 < alpha <= 1. We find, in particular, that observational data arising from Archeops for the location of the first peak, BOOMERANG for the location of the third peak, supernova and highredshift observations allow constraining significantly the parameter space of the model. Our analysis indicates that the emerging model is clearly distinguishable from the alpha = 1 Chaplygin case and the LambdaCDM model.
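For context, the background evolution implied by this equation of state follows from energy conservation and is the standard generalized Chaplygin gas result (B is an integration constant fixed by the present density):

```latex
% Generalized Chaplygin gas: equation of state and the background density it implies.
p = -\frac{A}{\rho^{\alpha}}, \qquad
\dot{\rho} + 3H(\rho + p) = 0
\;\;\Longrightarrow\;\;
\rho(a) = \left[\,A + \frac{B}{a^{3(1+\alpha)}}\,\right]^{\frac{1}{1+\alpha}}.
```

The fluid thus interpolates between dust-like behaviour, rho proportional to a^-3, at early times and a cosmological-constant-like density, rho approaching A^{1/(1+alpha)}, at late times; the positions of the CMBR peaks are sensitive to exactly this interpolation.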
Galactic Evolution along the Hubble Sequence. I. A grid of models parametrized by initial galaxy mass distribution ; We present a generalization of the multiphase chemical evolution model applied to a wide set of theoretical galaxies with different masses and morphological types. This generalized set of models has been computed using the socalled Universal Rotation Curve from Persic, Salucci and Steel 1996 to calculate the radial mass distribution of 44 theoretical protogalaxies. This distribution is a fundamental input which, besides its own effect on the galaxy evolution, defines the characteristic collapse time scale or gas infall rate onto the disk. For each radial mass distribution, we have 10 different evolutionary rates. With these two hypotheses we construct a biparametric grid of models. The results include the time evolution of different regions of the disk and the halo along the galactocentric distance, measured by the gas and stellar masses, the star formation rate, and the chemical abundances of 15 elements.
WMAP Constraints on the Generalized Chaplygin Gas Model ; The generalized Chaplygin gas GCG model explains the recent accelerated expansion of the Universe via an exotic background fluid whose equation of state is given by p = -A/rho^alpha, where A is a positive constant and 0 < alpha <= 1. The model is an interesting alternative to scenarios involving scalar field potentials, with the ensuing unnatural fine tuning conditions for the underlying particle physics theories. We derive constraints on the parameter space of the model from bounds on the location of the first few peaks and troughs of the Cosmic Microwave Background Radiation CMBR power spectrum arising from recent WMAP and BOOMERanG data.
Flat cosmological models with massive scalar field in gauge theories of gravity ; Solutions of the gravitational equations of gauge theories of gravity in a homogeneous isotropic world with a massive scalar field are investigated in the case of flat cosmological models. Special attention is paid to the general behavior of solutions on the contraction stage. It is shown that on the expansion stage inflationary solutions are a generic feature of the model. At the same time, on the contraction stage the effective equation of state is similar to that of a massless scalar field, p = rho, at the beginning of the evolution and tends to the equation of state of an ultrarelativistic gas, p = rho/3. The Hubble parameter tends to some negative value on the contraction stage, depending on the mass of the scalar field. Nonsingular solutions in this model are unstable.
A Concordant Freely Coasting Cosmology ; A strictly linear evolution of the cosmological scale factor is, surprisingly, an excellent fit to a host of cosmological observations. Any model that can support such coasting presents itself as a falsifiable model as far as classical cosmological tests are concerned. Such evolution is known to be comfortably concordant with the Hubble diagram as deduced from data on recent type Ia supernovae and high redshift objects; it passes constraints arising from the age of the universe and gravitational lensing statistics, and clears basic constraints on nucleosynthesis. Such an evolution exhibits distinguishable and verifiable features for the recombination era. This article discusses the concordance of such an evolution in relation to minimal requirements for large scale structure formation and cosmic microwave background anisotropy, along with the overall viability of such models. While these results should be of interest for a host of alternative gravity models that support a linear coasting, we conjecture that a linear evolution would emerge naturally either from General Relativity itself or from a General Relativistic theory of a nonminimally coupled scalar field.
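The kinematics of such a coasting model are simple enough to state explicitly; the distance-redshift relation below assumes spatial flatness for illustration (open coasting models differ only in this relation):

```latex
% Linear coasting: Hubble rate, deceleration parameter, age, and the
% flat-space luminosity distance.
a(t) \propto t \;\Rightarrow\;
H(t)=\frac{1}{t},\qquad
q \equiv -\frac{a\ddot{a}}{\dot{a}^{2}} = 0,\qquad
t_{0}=H_{0}^{-1},\qquad
d_{L}(z)=\frac{c}{H_{0}}\,(1+z)\ln(1+z)\quad(\text{flat case}).
```

The age being exactly the Hubble time is what lets such models evade the age problem even for high values of H_0.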
Observational Bounds on Cosmic Doomsday ; Recently it was found, in a broad class of models, that the dark energy density may change its sign during the evolution of the universe. This may lead to a global collapse of the universe within a time t_c ~ 10^10 - 10^11 years. Our goal is to find what bounds on the future lifetime of the universe can be placed by the next generation of cosmological observations. As an example, we investigate the simplest model of dark energy with a linear potential V(phi) = V_0 (1 + alpha phi). This model can describe the present stage of acceleration of the universe if alpha is small enough. However, eventually the field phi rolls down, V(phi) becomes negative, and the universe collapses. The existing observational data indicate that the universe described by this model will collapse no earlier than t_c ~ 10 billion years from the present moment. We show that data from the SNAP and Planck satellites may extend the bound on the doomsday time to t_c > 40 billion years at the 95% confidence level.
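The mechanism can be summarized by the standard flat-universe field equations for this potential, written here as a sketch in common notation rather than the authors' full likelihood analysis:

```latex
% Linear-potential quintessence: potential, Klein-Gordon equation, and the
% flat Friedmann constraint.
V(\phi)=V_{0}\,(1+\alpha\phi),\qquad
\ddot{\phi}+3H\dot{\phi}+\alpha V_{0}=0,\qquad
H^{2}=\frac{8\pi G}{3}\Big(\rho_{m}+\tfrac{1}{2}\dot{\phi}^{2}+V(\phi)\Big).
```

Collapse sets in when H passes through zero and turns negative, which requires V(phi) to have become negative enough to cancel the matter and kinetic terms; the smaller alpha is, the later this happens, and it is this timing that the observational bounds on t_c constrain.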
On stability of simplest nonsingular inflationary cosmological models within general relativity and gauge theories of gravity ; In this paper we provide an approximate analytical analysis of the stability of nonsingular inflationary chaotictype cosmological models. Initial conditions for nonsingular solutions at the bounce correspond to dominance of the potential part of the energy density of the scalar field over its kinetic part, both within general relativity and gauge theories of gravity. Moreover, the scalar field at the bounce exceeds the Planckian value, and on the expansion stage these models correspond to chaotic inflation. Such solutions can be well approximated by an explicitly solvable model with a constant effective potential cosmological term and a massless scalar field during the bounce and on the stages of quasiexponential contraction and expansion. Perturbative analysis shows that nonsingular inflationary solutions are exponentially unstable during the contraction stage. This result is compared with numerical calculations.
Distribution of Faraday Rotation Measure in Jets from Active Galactic Nuclei I. Prediction from our Sweeping Magnetic Twist Model ; Using the numerical data of MHD simulations for AGN jets based on our "sweeping magnetic twist" model, we calculated the Faraday rotation measure FRM and the Stokes parameters to compare with observations. We propose that the FRM distribution can be used to discuss the 3dimensional structure of the magnetic field around jets, together with the projected magnetic field derived from the Stokes parameters. In the present paper, we consider the basic straight part of the AGN jet and use the data of an axisymmetric simulation. The FRM distribution we derive has a general tendency to show a gradient across the jet axis, which is due to the toroidal component of the helical magnetic field generated by the rotation of the accretion disk. This kind of gradient in the FRM distribution is actually observed in some AGN jets e.g. Asada et al. 2002, which suggests a helical magnetic field around the jets and thus supports our MHD model. Following this success, we are now extending our numerical observation to the wiggled part of the jets, using the data of a 3dimensional simulation based on our model, in the following paper.
Relic Gravitational Waves and the Evolution of the Universe ; In inflationary models, relic gravitational waves RGW are generated during the inflationary stage and have evolved with the Universe until now. Different cosmological evolution models yield different gravitational wave power spectra. In this paper, we give a simple formula to estimate this spectrum in a general cosmological model. From this formula, one can easily find the relation between the power spectrum and the cosmological evolution model. The spectrum includes all the information about the evolution of the scale factor a from inflation to now. RGW is therefore a clean fossil that records the cosmological evolution history and is a useful complement to another fossil, the Cosmic Microwave Background Radiation CMB.
The Accretion of Dark Energy onto a Black Hole ; The stationary, spherically symmetric accretion of dark energy onto a Schwarzschild black hole is considered in terms of relativistic hydrodynamics. The approximation of an ideal fluid is used to model the dark energy. General expressions are derived for the accretion rate of an ideal fluid with an arbitrary equation of state p = p(rho) onto a black hole. The black hole mass was found to decrease for the accretion of phantom energy. The accretion process is studied in detail for two dark energy models that admit an analytical solution: a model with a linear equation of state, p = alpha(rho - rho_0), and a Chaplygin gas. For one of the special cases of a linear equation of state, an analytical expression is derived for the accretion rate of dark energy onto a moving and rotating black hole. The masses of all black holes are shown to approach zero in cosmological models with phantom energy in which the Big Rip scenario is realized.
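The key general expression behind these statements is the stationary accretion rate onto a Schwarzschild hole for a fluid with equation of state p(rho), quoted here in the commonly used form as a sketch; A_* is a dimensionless constant of order unity that depends on the equation of state, and rho_inf is the fluid density far from the hole:

```latex
% Stationary spherical accretion rate of a test fluid onto a Schwarzschild
% black hole (common form; A_* depends on the equation of state).
\dot{M} \;=\; 4\pi A_{*}\,\frac{G^{2}M^{2}}{c^{3}}\,
\bigl[\rho_{\infty}+p(\rho_{\infty})\bigr].
```

The sign of rho + p therefore controls the evolution: for phantom energy rho + p < 0, so the black hole mass decreases, which is the origin of the conclusion that black hole masses tend to zero as a Big Rip is approached.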
The Accelerated Acceleration of the Universe ; We present a simple mechanism which can mimic dark energy with an equation of state w < -1 as deduced from the supernova data. We imagine that the universe is accelerating under the control of a quintessence field, which is moving up a very gently sloping potential. As a result, the potential energy and hence the acceleration increases at lower redshifts. Fitting this behavior with a dark energy model with constant w would require w < -1. In fact we find that the choice of parameters which improves the fit to the SNe mimics w = -1.4 at low redshifts. Running up the potential in fact provides the best fit to the SN data for a generic quintessence model. However, unlike models with phantoms, our model does not have negative energies or negative norm states. Future searches for supernovae at low redshifts 0.1 < z < 0.5 and at high redshifts z > 1 may be a useful probe of our proposal.
Probing Cosmic Acceleration Beyond the Equation of State Distinguishing between Dark Energy and Modified Gravity Models ; If general relativity is the correct theory of physics on large scales, then there is a differential equation that relates the Hubble expansion function, inferred from measurements of angular diameter distance and luminosity distance, to the growth rate of large scale structure. For a dark energy fluid without couplings or an unusual sound speed, deviations from this consistency relationship could be the signature of modified gravity on cosmological scales. We propose a procedure based on this consistency relation in order to distinguish between some dark energy models and modified gravity models. The procedure uses different combinations of cosmological observations and is able to find inconsistencies when present. As an example, we apply the procedure to a universe described by a recently proposed 5dimensional modified gravity model. We show that this leads to an inconsistency within the dark energy parameter space detectable by future experiments.
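The consistency relation referred to here is, schematically, the statement that in general relativity the same expansion history H(z) reconstructed from distance measurements must also drive the growth of linear perturbations. Written in common notation with delta the linear matter overdensity:

```latex
% Linear growth equation and the flat-space luminosity distance that both
% depend on the same H(z) in general relativity.
\ddot{\delta} + 2H\,\dot{\delta} - 4\pi G\,\rho_{m}\,\delta = 0,
\qquad
d_{L}(z) = (1+z)\int_{0}^{z}\frac{c\,dz'}{H(z')}\quad(\text{flat case}).
```

If the growth rate measured from large scale structure disagrees with the growth predicted by inserting the distance-inferred H(z) into this equation, and the dark energy has no couplings or unusual sound speed, the discrepancy points to modified gravity rather than a dark energy fluid.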
On the generation of density perturbations at the end of inflation ; Recently a mechanism was proposed whereby the primordial density perturbations are generated at the end of inflation. We continue the analysis of the proposed model of this mechanism and calculate the maximum extent to which the density perturbations produced via this model can dominate over those of the standard inflationary paradigm. In addition, we provide a straightforward variation of this model which allows for greater amplification of the density perturbations. Finally, we show that a variation in the implementation of the original model results in significant nongaussianities in the resulting spectrum of density perturbations. The level of nongaussianities can be made to saturate the current observational bound.
Modified equation of state, scalar field, and bulk viscosity in Friedmann universe ; A generalized dynamical equation for the scale factor of the universe is proposed to describe the cosmological evolution, of which the LambdaCDM model is a special case. It also provides a general example showing the equivalence of a modified equation of state EOS and a scalar field model. Mathematically, the EOS, the scalar field potential V(phi), and the scale factor a(t) all possess analytical solutions. Such features are due to a simple form invariance inherited by the equation which determines the Hubble parameter. From the physical point of view, this dynamical equation can be regarded as the LambdaCDM model with bulk viscosity, a component known to exist in the universe. We employ the SNe Ia data, together with the parameter A measured from the SDSS data and the shift parameter R measured from WMAP data, to constrain the parameters in our model. The result is that the contribution of the bulk viscosity, accumulated as an effective dark energy responsible for the current accelerating cosmic expansion, is approximately ten percent of that of the cosmological constant.
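In the standard Eckart treatment, the bulk-viscosity interpretation mentioned here corresponds to replacing the pressure by an effective pressure in the Friedmann equations; a sketch in common notation, with zeta the bulk viscosity coefficient:

```latex
% Bulk viscosity as an effective pressure in the acceleration equation
% (Eckart first-order treatment).
p_{\rm eff} = p - 3\zeta H,
\qquad
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\bigl(\rho + 3p_{\rm eff}\bigr)
                   = -\frac{4\pi G}{3}\bigl(\rho + 3p - 9\zeta H\bigr).
```

A positive zeta thus acts like a negative pressure contribution, which is how the viscous term can mimic part of the dark energy driving the accelerated expansion.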