A Heterogeneous Multiscale Method for Power System Simulation Considering Electromagnetic Transients ; Traditional dynamic security assessment faces challenges as power systems are transforming into inverter-based-resource (IBR) dominated systems, for which electromagnetic transient (EMT) dynamics have to be considered. However, EMT simulation is time-consuming, especially for a large power grid, because the mathematical model based on detailed component modeling is highly stiff and must be integrated at tiny time steps for numerical stability. This paper proposes a heterogeneous multiscale method (HMM) that addresses the simulation of a power system considering EMT dynamics as a multiscale problem. The method aims to accurately simulate the macroscopic dynamics of the system even when EMT dynamics are dominant. By force estimation using a kernel function, the proposed method automatically generates a macro model on the fly during simulation from the micro model of EMT dynamics. It can flexibly switch between the micro and macro models to capture important EMT dynamics during some time intervals while skipping over other intervals of less interest, achieving superior simulation speed. The method is illustrated by a case study on a two-machine EMT model to demonstrate its potential for power system simulation.
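The kernel-weighted force estimation at the heart of the HMM can be sketched as follows; the function name `kernel_force_estimate`, the Gaussian kernel, and the sampling layout are illustrative assumptions, not the paper's actual estimator:

```python
import numpy as np

def kernel_force_estimate(t_micro, f_micro, width):
    # Estimate the macro-scale force (time derivative driving the macro model)
    # as a kernel-weighted average of micro-step force samples.
    # A Gaussian kernel centred on the window midpoint is assumed here.
    t0 = t_micro.mean()
    w = np.exp(-((t_micro - t0) / width) ** 2)
    return np.sum(w * f_micro) / np.sum(w)

# Toy usage: micro samples over a short EMT burst
t = np.linspace(0.0, 1.0, 11)
f_const = np.full(11, 3.0)
macro_force = kernel_force_estimate(t, f_const, width=0.1)
```

A constant micro force yields the same macro force, and for a force varying linearly in time the symmetric kernel recovers the midpoint value, which is the behaviour one wants from such an averaging estimator.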
Interpretable Spectrum Transformation Attacks to Speaker Recognition ; The success of adversarial attacks on speaker recognition has mainly been in white-box scenarios. When adversarial voices generated by attacking white-box surrogate models are applied to black-box victim models, i.e., transfer-based black-box attacks, the transferability of the adversarial voices is not only far from satisfactory but also lacks an interpretable basis. To address these issues, in this paper we propose a general framework, named spectrum transformation attack based on modified discrete cosine transform (STA-MDCT), to improve the transferability of adversarial voices to a black-box victim model. Specifically, we first apply MDCT to the input voice. Then, we slightly modify the energy of different frequency bands to capture the salient regions of the adversarial noise in the time-frequency domain that are critical to a successful attack. Unlike existing approaches that operate on voices in the time domain, the proposed framework operates in the time-frequency domain, which improves the interpretability, transferability, and imperceptibility of the attack. Moreover, it can be implemented with any gradient-based attacker. To exploit the advantage of model ensembling, we implement STA-MDCT not only with a single white-box surrogate model but also with an ensemble of surrogate models. Finally, we visualize the saliency maps of adversarial voices using class activation maps (CAM), which offers an interpretable basis for transfer-based attacks in speaker recognition for the first time. Extensive comparisons with five representative attackers show that the CAM visualization clearly explains the effectiveness of STA-MDCT and the weaknesses of the comparison methods, and that the proposed method outperforms the comparison methods by a large margin.
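The band-wise energy modification can be illustrated with a toy transform. Here `dct_ii` is a naive DCT-II used only as a stand-in for the MDCT, and the equal-width band layout and scaling factors are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def dct_ii(x):
    # Naive DCT-II, standing in for the MDCT used by STA-MDCT (assumption).
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N))
                     for k in range(N)])

def scale_bands(spectrum, factors, n_bands):
    # Split the spectrum into equal-width frequency bands and scale
    # each band's coefficients, slightly reshaping the energy per band.
    out = spectrum.copy()
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    for i, (a, b) in enumerate(zip(edges[:-1], edges[1:])):
        out[a:b] *= factors[i]
    return out

# Toy usage: boost the second quarter of the spectrum of a sinusoid
x = np.sin(2 * np.pi * 5 * np.arange(64) / 64)
spec = dct_ii(x)
modified = scale_bands(spec, factors=[1.0, 2.0, 1.0, 1.0], n_bands=4)
```

In the actual attack these per-band modifications are applied to the adversarial voice before inverse transformation, which is what makes the salient frequency regions visible.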
Guiding Large Language Models via Directional Stimulus Prompting ; We introduce a novel prompting framework called Directional Stimulus Prompting for guiding black-box large language models (LLMs) toward desired outputs. The framework introduces a new component called the directional stimulus into the prompt, providing more fine-grained guidance and control over LLMs. The directional stimulus serves as hints or cues for each input query to guide LLMs toward the desired output, such as keywords that the desired summary should include for summarization. We utilize a small tunable model (e.g., T5) to generate such a directional stimulus for each query, allowing us to optimize black-box LLMs by optimizing a small policy model. This policy model can be trained through (1) supervised fine-tuning using labeled data and (2) reinforcement learning from offline or online rewards to explore directional stimuli that better align LLMs with desired behaviors. We evaluate our framework on summarization and dialogue response generation tasks. Experimental results show that our framework consistently improves ChatGPT's performance over standard prompting with a small collection of training data, and reinforcement learning further improves the performance. Notably, on the MultiWOZ dataset, our framework enables ChatGPT to achieve a remarkable 41.4% improvement in its combined score with only 80 dialogues, matching or even surpassing the performance of some fully trained state-of-the-art models. We have made our code publicly available.
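A minimal sketch of how a directional stimulus might be injected into a prompt; the hint wording and the function name are hypothetical, not the paper's actual template:

```python
def build_prompt(query, stimulus_keywords):
    # Directional stimulus: keyword hints generated by a small policy model
    # (e.g., T5), appended to the query in an illustrative format.
    hint = "; ".join(stimulus_keywords)
    return f"{query}\nHint (keywords the summary should include): {hint}"

# Toy usage with hand-picked keywords standing in for policy-model output
prompt = build_prompt(
    "Summarize the article about the new reactor design.",
    ["passive cooling", "modular construction"],
)
```

The key design point is that only this small stimulus generator is trained; the black-box LLM itself is never updated.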
Modulating Pretrained Diffusion Models for Multimodal Image Synthesis ; We present multimodal conditioning modules (MCM) for enabling conditional image synthesis using pretrained diffusion models. Previous multimodal synthesis works rely on training networks from scratch or fine-tuning pretrained networks, both of which are computationally expensive for large, state-of-the-art diffusion models. Our method uses pretrained networks but does not require any updates to the diffusion network's parameters. MCM is a small module trained to modulate the diffusion network's predictions during sampling using 2D modalities (e.g., semantic segmentation maps, sketches) that were unseen during the original training of the diffusion model. We show that MCM enables user control over the spatial layout of the image and leads to increased control over the image generation process. Training MCM is cheap, as it does not require gradients from the original diffusion net, consists of only ~1% of the number of parameters of the base diffusion model, and is trained using only a limited number of training examples. We evaluate our method on unconditional and text-conditional models to demonstrate the improved control over the generated images and their alignment with respect to the conditioning inputs.
EvoPrompting: Language Models for Code-Level Neural Architecture Search ; Given the recent impressive accomplishments of language models (LMs) for code generation, we explore the use of LMs as adaptive mutation and crossover operators for an evolutionary neural architecture search (NAS) algorithm. While NAS still proves too difficult a task for LMs to succeed at solely through prompting, we find that the combination of evolutionary prompt engineering with soft prompt-tuning, a method we term EvoPrompting, consistently finds diverse and high-performing models. We first demonstrate that EvoPrompting is effective on the computationally efficient MNIST-1D dataset, where it produces convolutional architecture variants that outperform both those designed by human experts and naive few-shot prompting in terms of accuracy and model size. We then apply our method to searching for graph neural networks on the CLRS Algorithmic Reasoning Benchmark, where EvoPrompting is able to design novel architectures that outperform current state-of-the-art models on 21 out of 30 algorithmic reasoning tasks while maintaining similar model size. EvoPrompting is successful at designing accurate and efficient neural network architectures across a variety of machine learning tasks, while also being general enough for easy adaptation to other tasks beyond neural network design.
Feature Extraction Matters More: Universal Deepfake Disruption through Attacking Ensemble Feature Extractors ; Adversarial examples are an emerging way of protecting facial privacy and security from deepfake modification. To prevent massive numbers of facial images from being illegally modified by various deepfake models, it is essential to design a universal deepfake disruptor. However, existing works treat deepfake disruption as an end-to-end process, ignoring the functional difference between feature extraction and image reconstruction, which makes it difficult to generate a cross-model universal disruptor. In this work, we propose a novel Feature-Output ensemble UNiversal Disruptor (FOUND) against deepfake networks, which explores a new perspective that considers attacking feature extractors as the more critical and general task in deepfake disruption. We conduct an effective two-stage disruption process. We first disrupt multi-model feature extractors through multi-feature aggregation and individual-feature maintenance, and then develop a gradient-ensemble algorithm to enhance the disruption effect by simplifying the complex optimization problem of disrupting multiple end-to-end models. Extensive experiments demonstrate that FOUND can significantly boost the disruption effect against ensemble deepfake benchmark models. Moreover, our method can quickly obtain a cross-attribute, cross-image, and cross-model universal deepfake disruptor with only a few training images, surpassing state-of-the-art universal disruptors in both success rate and efficiency.
Standard Model in conformal geometry: local vs gauged scale invariance ; We discuss comparatively local versus gauged Weyl symmetry beyond the Standard Model (SM) and Einstein gravity, and their geometric interpretation. The SM and Einstein gravity admit a natural embedding in Weyl integrable geometry, which is a special limit of Weyl conformal (non-metric) geometry. The theory has a local Weyl scale symmetry but no associated gauge boson. Unlike previous models with such symmetry, this embedding is truly minimal, i.e. with no additional fields beyond the SM and underlying geometry. This theory is compared to a similar minimal embedding of the SM and Einstein gravity in Weyl conformal geometry (SMW), which has a full gauged scale invariance with an associated Weyl gauge boson. At large field values, both theories give realistic, Starobinsky-Higgs-like inflation. The broken phase of the current model is the decoupling limit of the massive Weyl gauge boson of the broken phase of SMW, while the local scale symmetry of the current model is part of the larger gauged scale symmetry of SMW. Hence, the current theory has a gauge embedding in SMW. Unlike in the SMW, we note that in models with local scale symmetry the associated current is trivial, which is a concern for the physical meaning of this symmetry. Therefore, the SMW is a more fundamental UV completion of the SM in a full gauge theory of scale invariance that generates Einstein gravity in the spontaneously broken phase, as an effective theory.
Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models ; Recent works have shown that deep learning (DL) models can effectively learn city-wide crowd-flow patterns, which can be used for more effective urban planning and smart city management. However, DL models are known to perform poorly under inconspicuous adversarial perturbations. Although many works have studied such adversarial perturbations in general, the adversarial vulnerabilities of deep crowd-flow prediction models in particular have remained largely unexplored. In this paper, we perform a rigorous analysis of the adversarial vulnerabilities of DL-based crowd-flow prediction models under multiple threat settings, making three-fold contributions. (1) We propose CaV-detect by formally identifying two novel properties, Consistency and Validity, of the crowd-flow prediction inputs that enable the detection of standard adversarial inputs with a 0% false acceptance rate (FAR). (2) We leverage universal adversarial perturbations and an adaptive adversarial loss to present adaptive adversarial attacks to evade the CaV-detect defense. (3) We propose CVPR, a Consistent, Valid and Physically-Realizable adversarial attack, that explicitly inducts the consistency and validity priors into the perturbation generation mechanism. We find that although crowd-flow models are vulnerable to adversarial perturbations, it is extremely challenging to simulate these perturbations in physical settings, notably when CaV-detect is in place. We also show that the CVPR attack considerably outperforms the adaptively modified standard attacks in FAR and adversarial loss metrics. We conclude with useful insights emerging from our work and highlight promising future research directions.
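The consistency and validity checks behind CaV-detect might look like the following sketch; the function signature, the grid-balance formulation, and the relative tolerance are illustrative assumptions rather than the paper's formal definitions:

```python
import numpy as np

def cav_detect(inflow, outflow, capacity, tol=0.05):
    # Validity: every cell's flow lies in the physically possible range.
    valid = bool(np.all((inflow >= 0) & (inflow <= capacity))
                 and np.all((outflow >= 0) & (outflow <= capacity)))
    # Consistency: total inflow and total outflow over the city grid balance
    # to within a relative tolerance (threshold is an illustrative assumption).
    total_in, total_out = float(inflow.sum()), float(outflow.sum())
    consistent = abs(total_in - total_out) <= tol * max(total_in, 1.0)
    return valid and consistent

# Toy usage on a 2x2 city grid
flow = np.array([[1.0, 2.0], [3.0, 4.0]])
ok = cav_detect(flow, flow.copy(), capacity=10.0)
```

An adversarial perturbation crafted without these priors will typically push some cell out of range or unbalance the totals, which is how such inputs get flagged.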
Reverse Engineering Breast MRIs: Predicting Acquisition Parameters Directly from Images ; The image acquisition parameters (IAPs) used to create MRI scans are central to defining the appearance of the images. Deep learning models trained on data acquired using certain parameters might not generalize well to images acquired with different parameters. Being able to recover such parameters directly from an image could help determine whether a deep learning model is applicable, and could assist with data harmonization and/or domain adaptation. Here, we introduce a neural network model that can predict many complex IAPs used to generate an MR image with high accuracy solely from the image, with a single forward pass. These predicted parameters include field strength, echo and repetition times, acquisition matrix, scanner model, scan options, and others. Even challenging parameters such as contrast agent type can be predicted with good accuracy. We perform a variety of experiments and analyses of our model's ability to predict IAPs on many MRI scans of new patients, and demonstrate its usage in a realistic application. Predicting IAPs from images is an important step toward better understanding the relationship between image appearance and IAPs. This in turn will advance the understanding of many concepts related to the generalizability of neural network models on medical images, including domain shift, domain adaptation, and data harmonization.
Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models ; Prompt tuning, a recently emerging paradigm, enables powerful vision-language pretraining models to adapt to downstream tasks in a parameter- and data-efficient way, by learning "soft prompts" to condition frozen pretraining models. Though effective, it is particularly problematic in the few-shot scenario, where prompt tuning performance is sensitive to the initialization and requires a time-consuming process to find a good initialization, thus restricting the fast adaptation ability of the pretraining models. In addition, prompt tuning can undermine the generalizability of the pretraining models, because the learnable prompt tokens easily overfit the limited training samples. To address these issues, we introduce a novel Gradient-RegulAted Meta-prompt learning (GRAM) framework that jointly meta-learns an efficient soft prompt initialization for better adaptation and a lightweight gradient regulating function for strong cross-domain generalizability, in a meta-learning paradigm using only the unlabeled image-text pretraining data. Rather than designing a specific prompt tuning method, our GRAM can be easily incorporated into various prompt tuning methods in a model-agnostic way, and comprehensive experiments show that GRAM brings consistent improvement for them in several settings (i.e., few-shot learning, cross-domain generalization, cross-dataset generalization, etc.) over 11 datasets. Further, experiments show that GRAM enables the orthogonal methods of textual and visual prompt tuning to work in a mutually enhanced way, offering better generalizability beyond unimodal prompt tuning methods.
MRGAN360: Multi-stage Recurrent Generative Adversarial Network for 360-Degree Image Saliency Prediction ; Thanks to its ability to provide an immersive and interactive experience, the uptake of 360-degree image content has been growing rapidly in consumer and industrial applications. Compared to planar 2D images, saliency prediction for 360-degree images is more challenging due to their high resolutions and spherical viewing ranges. Currently, most high-performance saliency prediction models for omnidirectional images (ODIs) rely on deeper or broader convolutional neural networks (CNNs), which benefit from CNNs' superior feature representation capabilities while suffering from their high computational costs. In this paper, inspired by the human visual cognitive process, i.e., that human perception of a visual scene is always accomplished through multiple stages of analysis, we propose a novel multi-stage recurrent generative adversarial network for ODIs, dubbed MRGAN360, to predict saliency maps stage by stage. At each stage, the prediction model takes as input the original image and the output of the previous stage, and outputs a more accurate saliency map. We employ a recurrent neural network among adjacent prediction stages to model their correlations, and exploit a discriminator at the end of each stage to supervise the output saliency map. In addition, we share the weights among all stages to obtain a lightweight architecture that is computationally cheap. Extensive experiments demonstrate that our proposed model outperforms state-of-the-art models in terms of both prediction accuracy and model size.
Reclaiming the Digital Commons: A Public Data Trust for Training Data ; Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens who have generated the data necessary to train models do not have input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons and license the data to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the existence of such a trust. We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust. We propose a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.
Open-Vocabulary Object Detection using Pseudo Caption Labels ; Recent open-vocabulary detection methods aim to detect novel objects by distilling knowledge from vision-language models (VLMs) trained on a vast amount of image-text pairs. To improve the effectiveness of these methods, researchers have utilized datasets with a large vocabulary that contains a large number of object classes, under the assumption that such data will enable models to extract comprehensive knowledge on the relationships between various objects and better generalize to unseen object classes. In this study, we argue that more fine-grained labels are necessary to extract richer knowledge about novel objects, including object attributes and relationships, in addition to their names. To address this challenge, we propose a simple and effective method named Pseudo Caption Labeling (PCL), which utilizes an image captioning model to generate captions that describe object instances from diverse perspectives. The resulting pseudo caption labels offer dense samples for knowledge distillation. On the LVIS benchmark, our best model trained on the de-duplicated Visual Genome dataset achieves an AP of 34.5 and an APr of 30.6, comparable to the state-of-the-art performance. PCL's simplicity and flexibility are other notable features, as it is a straightforward preprocessing technique that can be used with any image captioning model without imposing any restrictions on model architecture or training process.
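The PCL preprocessing step can be sketched as below; `pseudo_caption_labels` and the instance-dictionary layout are hypothetical, and any captioning model can stand in for `captioner`:

```python
def pseudo_caption_labels(instances, captioner, n_captions=3):
    # For each detected object crop, sample several captions describing the
    # instance from diverse perspectives; these serve as dense distillation
    # targets. `captioner` is any image captioning model (here a callable).
    return {inst["id"]: [captioner(inst["crop"]) for _ in range(n_captions)]
            for inst in instances}

# Toy usage with a trivial captioner standing in for a real model
captions = pseudo_caption_labels(
    [{"id": 1, "crop": "cat"}, {"id": 2, "crop": "dog"}],
    captioner=lambda crop: f"a photo of a {crop}",
    n_captions=2,
)
```

Because this is pure preprocessing, the detector's architecture and training loop need no changes, which is the flexibility the abstract highlights.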
Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams ; We investigate the use of Convolutional Neural Networks (including the modern ConvNeXt network family) to classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors. First, we use models with a supervised learning approach, both trained from scratch using the Gravity Spy dataset and employing transfer learning by fine-tuning pretrained models on this dataset. Second, we also explore a self-supervised approach, pretraining models with automatically generated pseudo-labels. Our findings are very close to existing results for the same dataset, reaching values for the F1 score of 97.18% (94.15%) for the best supervised (self-supervised) model. We further test the models using actual gravitational-wave signals from LIGO-Virgo's O3 run. Although trained using data from previous runs (O1 and O2), the models show good performance, in particular when using transfer learning. We find that transfer learning improves the scores without the need for any training on real signals apart from the fewer than 50 chirp examples from hardware injections present in the Gravity Spy dataset. This motivates the use of transfer learning not only for glitch classification but also for signal classification.
IFSeg: Image-free Semantic Segmentation via Vision-Language Model ; Vision-language (VL) pretraining has recently gained much attention for its transferability and flexibility in novel concepts (e.g., cross-modality transfer) across various visual tasks. However, VL-driven segmentation has been under-explored, and the existing approaches still bear the burden of acquiring additional training images or even segmentation annotations to adapt a VL model to downstream segmentation tasks. In this paper, we introduce a novel image-free segmentation task where the goal is to perform semantic segmentation given only a set of target semantic categories, without any task-specific images or annotations. To tackle this challenging task, our proposed method, coined IFSeg, generates VL-driven artificial image-segmentation pairs and updates a pretrained VL model for the segmentation task. We construct this artificial training data by creating a 2D map of random semantic categories and another map of their corresponding word tokens. Given that a pretrained VL model projects visual and text tokens into a common space where tokens that share semantics are located closely, this artificially generated word map can replace the real image inputs for such a VL model. Through an extensive set of experiments, our model not only establishes an effective baseline for this novel task but also demonstrates strong performance compared to existing methods that rely on stronger supervision, such as task-specific images and segmentation masks. Code is available at https://github.com/alinlab/ifseg.
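The artificial image-segmentation pair construction can be sketched as follows; the function name and map layout are illustrative assumptions (in the real method the word tokens come from the VL model's own vocabulary embeddings, not raw strings):

```python
import random

def make_artificial_pair(categories, h, w, seed=0):
    # Build a random 2D map of category indices (the "segmentation label")
    # and the matching map of word tokens (the stand-in for image input),
    # mimicking IFSeg's image-free training pairs.
    rng = random.Random(seed)
    label_map = [[rng.randrange(len(categories)) for _ in range(w)]
                 for _ in range(h)]
    token_map = [[categories[c] for c in row] for row in label_map]
    return label_map, token_map

# Toy usage: a 2x3 map over two categories
labels, tokens = make_artificial_pair(["sky", "road"], h=2, w=3)
```

Since the VL model embeds word tokens near the visual tokens that share their semantics, such token maps can be fed in place of real images during the segmentation fine-tuning.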
The Invisible Dilaton ; We analyse the dynamics of a light scalar field responsible for the mu term of the Higgs potential and coupled to matter via the Higgs-portal mechanism. We find that this dilaton model is stable under radiative corrections induced by the standard model particle masses. When the background value of the scalar field is stabilised at the minimum of the scalar potential, the scalar field fluctuations couple only quadratically to the massive fields of the standard model, preventing the scalar's direct decay into standard model particles. Cosmologically, and prior to electroweak symmetry breaking, the scalar field rolls down its effective potential before eventually oscillating and settling at the electroweak minimum. These oscillations can be at the origin of dark matter due to the initial misalignment of the scalar field relative to the electroweak minimum, and we find that, when the mass of the scalar field is less than the eV scale and it acts as a condensate behaving like dark matter on large scales, the scalar particles cannot thermalise with the standard model thermal bath. As matter couples in a composition-dependent manner to the oscillating scalar, this could lead to a violation of the equivalence principle detectable aboard satellites such as the MICROSCOPE experiment and the next generation of tests of the equivalence principle. Local gravitational tests are evaded thanks to the weakness of the quadratic coupling in the dark matter halo, and we find that, around other sources, these dilaton models could be subject to screening akin to the symmetron mechanism.
Seer: Language Instructed Video Prediction with Latent Diffusion Models ; Imagining future trajectories is key for robots to make sound plans and successfully reach their goals. Therefore, text-conditioned video prediction (TVP) is an essential task for facilitating general robot policy learning, i.e., predicting future video frames given a language instruction and reference frames. It is a highly challenging task to ground task-level goals specified by instructions and high-fidelity frames together, requiring large-scale data and computation. To tackle this task and empower robots with the ability to foresee the future, we propose a sample- and computation-efficient model, named Seer, by inflating pretrained text-to-image (T2I) stable diffusion models along the temporal axis. We inflate the denoising U-Net and language conditioning model with two novel techniques, Autoregressive Spatial-Temporal Attention and Frame Sequential Text Decomposer, to propagate the rich prior knowledge in the pretrained T2I models across the frames. With this well-designed architecture, Seer makes it possible to generate high-fidelity, coherent, and instruction-aligned video frames by fine-tuning a few layers on a small amount of data. Experimental results on the Something-Something V2 (SSv2) and Bridgedata datasets demonstrate superior video prediction performance with around 210 hours of training on 4 RTX 3090 GPUs: decreasing the FVD of the current SOTA model from 290 to 200 on SSv2 and achieving at least 70% preference in the human evaluation.
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis ; Text-to-image diffusion models are nothing short of a revolution, allowing anyone, even without design skills, to create realistic images from simple text inputs. With powerful personalization tools like DreamBooth, they can generate images of a specific person just by learning from his/her few reference images. However, when misused, such a powerful and convenient tool can produce fake news or disturbing content targeting any individual victim, posing a severe negative social impact. In this paper, we explore a defense system called Anti-DreamBooth against such malicious use of DreamBooth. The system aims to add subtle noise perturbations to each user's image before publishing, in order to disrupt the generation quality of any DreamBooth model trained on these perturbed images. We investigate a wide range of algorithms for perturbation optimization and extensively evaluate them on two facial datasets over various text-to-image model versions. Despite the complicated formulation of DreamBooth and diffusion-based text-to-image models, our methods effectively defend users from the malicious use of those models. Their effectiveness withstands even adverse conditions, such as model or prompt/term mismatching between training and testing. Our code will be available at https://github.com/VinAIResearch/Anti-DreamBooth.git.
The Impact of Asynchrony on Parallel Model-Based EAs ; In a parallel EA one can strictly adhere to the generational clock and wait for all evaluations in a generation to finish. However, this idle time limits the throughput of the algorithm and wastes computational resources. Alternatively, an EA can be made asynchronous-parallel. However, EAs using classic recombination and selection operators (GAs) are known to suffer from an evaluation-time bias, which also influences the performance of the approach. Model-Based Evolutionary Algorithms (MBEAs) are more scalable than classic GAs by virtue of capturing the structure of a problem in a model. If this model is learned through linkage learning based on the population, the learned model may also capture biases. Thus, if an asynchronous-parallel MBEA is also affected by an evaluation-time bias, this could result in learned models that are less suited to solving the problem, reducing performance. Therefore, in this work, we study the impact and presence of evaluation-time biases on MBEAs in an asynchronous parallelization setting, and compare this to the biases in GAs. We find that a modern MBEA, GOMEA, is unaffected by evaluation-time biases, while the more classical MBEA, ECGA, is affected, much like GAs are.
Estimating Black Hole Spin from AGN SED Fitting: The Impact of General-Relativistic Ray Tracing ; Accretion disc model fitting to optical/UV quasar spectra requires that the highest-mass black holes have the highest spin, with implications for the hierarchical growth of supermassive black holes and their host galaxies over cosmic time. However, these accretion disc models did not include the effects of relativistic ray tracing. Here we show that gravitational redshift cancels out most of the increase in temperature and luminosity from the smaller radii characteristic of high spin. Disc models which include self-consistent general-relativistic ray tracing do not fit the UV spectra of the most massive quasars (log(M/M_sun) >= 9.5), most likely showing that the disc structure is very different from that assumed. We extend the relativistic ray tracing to more complex disc models, where the emission is not limited to colour-temperature-corrected black-body radiation but can instead be emitted as warm and hot Comptonisation. We demonstrate this on the broadband UV/X-ray spectrum of Fairall 9, a local, intensively monitored 'bare' AGN (no significant intrinsic cold or warm absorption). We show that including relativistic corrections does make a difference even to these more complex models, but caution that the inferred black hole spin depends on the assumed nature and geometry of the accretion flow. Additionally, we make our model code publicly available, and name it RELAGN.
Symmetry and topology of hyperbolic Haldane models ; Particles hopping on a two-dimensional hyperbolic lattice feature unconventional energy spectra and wave functions that provide a largely uncharted platform for topological phases of matter beyond the Euclidean paradigm. Using real-space topological markers as well as Chern numbers defined in the higher-dimensional momentum space of hyperbolic band theory, we construct and investigate hyperbolic Haldane models, which are generalizations of Haldane's honeycomb-lattice model to various hyperbolic lattices. We present a general framework to characterize point-group symmetries in hyperbolic tight-binding models, and use this framework to constrain the multiple first and second Chern numbers in momentum space. We observe several topological gaps characterized by first Chern numbers of value 1 and 2. The momentum-space Chern numbers respect the predicted symmetry constraints and agree with real-space topological markers, indicating a direct connection to observables such as the number of chiral edge modes. With our large repertoire of models, we further demonstrate that the topology of hyperbolic Haldane models is trivialized for lattices with strong negative curvature.
Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting ; Adapting contrastive image-text pretrained models like CLIP to video classification has gained attention due to its cost-effectiveness and competitive performance. However, recent works in this area face a trade-off. Fine-tuning the pretrained model to achieve strong supervised performance results in low zero-shot generalization. Similarly, freezing the backbone to retain zero-shot capability causes a significant drop in supervised accuracy. Because of this, recent works in the literature typically train separate models for supervised and zero-shot action recognition. In this work, we propose a multimodal prompt learning scheme that balances supervised and zero-shot performance under a single unified training. Our prompting approach on the vision side caters to three aspects: (1) global video-level prompts to model the data distribution; (2) local frame-level prompts to provide per-frame discriminative conditioning; and (3) a summary prompt to extract a condensed video representation. Additionally, we define a prompting scheme on the text side to augment the textual context. Through this prompting scheme, we achieve state-of-the-art zero-shot performance on Kinetics-600, HMDB51 and UCF101 while remaining competitive in the supervised setting. By keeping the pretrained backbone frozen, we optimize a much lower number of parameters and retain the existing general representation, which helps achieve the strong zero-shot performance. Our codes/models are released at https://github.com/TalalWasim/Vita-CLIP.
Harnessing the SpatialTemporal Attention of Diffusion Models for HighFidelity TexttoImage Synthesis ; Diffusionbased models have achieved stateoftheart performance on texttoimage synthesis tasks. However, one critical limitation of these models is the low fidelity of generated images with respect to the text description, such as missing objects, mismatched attributes, and mislocated objects. One key reason for such inconsistencies is the inaccurate crossattention to text in both the spatial dimension, which controls at what pixel region an object should appear, and the temporal dimension, which controls how different levels of details are added through the denoising steps. In this paper, we propose a new texttoimage algorithm that adds explicit control over spatialtemporal crossattention in diffusion models. We first utilize a layout predictor to predict the pixel regions for objects mentioned in the text. We then impose spatial attention control by combining the attention over the entire text description and that over the local description of the particular object in the corresponding pixel region of that object. The temporal attention control is further added by allowing the combination weights to change at each denoising step, and the combination weights are optimized to ensure high fidelity between the image and the text. Experiments show that our method generates images with higher fidelity compared to diffusionmodelbased baselines without finetuning the diffusion model. Our code is publicly available at httpsgithub.comUCSBNLPChangDiffusionSpaceTimeAttn.
Echo of Neighbors Privacy Amplification for Personalized Private Federated Learning with Shuffle Model ; Federated Learning, as a popular paradigm for collaborative training, is vulnerable against privacy attacks. Different privacy levels regarding users' attitudes need to be satisfied locally, while a strict privacy guarantee for the global model is also required centrally. Personalized Local Differential Privacy PLDP is suitable for preserving users' varying local privacy, yet only provides a central privacy guarantee equivalent to the worstcase local privacy level. Thus, achieving strong central privacy as well as personalized local privacy with a utilitypromising model is a challenging problem. In this work, a general framework APES is built up to strengthen model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model. To tighten the privacy bound, we quantify the heterogeneous contributions to the central privacy user by user. The contributions are characterized by the ability of generating echoes from the perturbation of each user, which is carefully measured by proposed methods Neighbor Divergence and ClipLaplace Mechanism. Furthermore, we propose a refined framework SAPES with the postsparsification technique to reduce privacy loss in highdimension scenarios. To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time. We provide a strong privacy amplification effect, and the bound is tighter than the baseline result based on existing methods for uniform local privacy. Experiments demonstrate that our frameworks ensure comparable or higher accuracy for the global model.
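The ClipLaplace Mechanism above builds on the standard Laplace mechanism for local differential privacy. A minimal sketch of personalized epsilon-LDP perturbation, where each user chooses their own privacy level (the function name and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def laplace_perturb(value, epsilon, sensitivity=1.0, rng=None):
    """Perturb a scalar with Laplace noise calibrated for epsilon-LDP.

    A smaller epsilon (stricter privacy) yields a larger noise scale.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

# Personalized local privacy: each user reports under their own epsilon.
rng = np.random.default_rng(0)
user_epsilons = [0.5, 1.0, 8.0]
reports = [laplace_perturb(0.3, eps, rng=rng) for eps in user_epsilons]
```

Without shuffling, the central guarantee of such a scheme is only as strong as the largest epsilon; the paper's contribution is to tighten that bound by amplification.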
Cosmological constraints from standardized nonCMB observations ; The current expansion of the Universe has been observed to be accelerating, and the widely accepted spatiallyflat concordance model of general relativistic cosmology attributes this phenomenon to a constant dark energy, a cosmological constant, which is measured to comprise about 70 of the total energy budget of the current Universe. However, observational discrepancies and theoretical puzzles have raised questions about this model, suggesting that alternative cosmological models with nonzero spatial curvature andor dark energy dynamics might provide better explanations. To explore these possibilities, we have conducted a series of studies using standardized, lowerredshift observations to constrain six different cosmological models with varying degrees of flatness and dark energy dynamics. Through comparing these observations with theoretical predictions, we aim to deepen our understanding of the evolution of the Universe and shed new light on its mysteries. Our data provide consistent cosmological constraints across all six models, with some suggesting the possibility of mild dark energy dynamics and slight spatial curvature. However, these joint constraints do not rule out the possibility of dark energy being a cosmological constant and the spatial hypersurfaces being flat. Overall, our findings contribute to the ongoing efforts to refine our understanding of the Universe and its properties, and suggest that multiple cosmological models remain viable.
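The six cosmological models compared in the abstract differ in spatial curvature and dark energy dynamics; all can be written as variants of the dimensionless Hubble rate. A minimal sketch (parameter values and the simple constant-w parameterization are illustrative, not necessarily the paper's exact models):

```python
import numpy as np

def E(z, omega_m=0.3, omega_de=0.7, omega_k=0.0, w=-1.0):
    """Dimensionless Hubble rate H(z)/H0 for a constant-w dark energy model.

    w = -1 and omega_k = 0 recover the spatially flat concordance model;
    w != -1 adds dark energy dynamics, omega_k != 0 adds spatial curvature.
    """
    a = 1.0 / (1.0 + z)
    return np.sqrt(omega_m * a**-3 + omega_k * a**-2
                   + omega_de * a**(-3.0 * (1.0 + w)))
```

By construction E(0) = 1 when the density parameters sum to one, which is the normalization used when fitting lower-redshift distance and expansion-rate data.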
Unifying and Personalizing Weaklysupervised Federated Medical Image Segmentation via Adaptive Representation and Aggregation ; Federated learning FL enables multiple sites to collaboratively train powerful deep models without compromising data privacy and security. The statistical heterogeneity e.g., nonIID data and domain shifts is a primary obstacle in FL, impairing the generalization performance of the global model. Weakly supervised segmentation, which uses sparselygrained i.e., point, bounding box, scribble, blockwise supervision, has attracted increasing attention due to its great potential for reducing annotation costs. However, there may exist label heterogeneity, i.e., different annotation forms across sites. In this paper, we propose a novel personalized FL framework for medical image segmentation, named FedICRA, which uniformly leverages heterogeneous weak supervision via adaptIve Contrastive Representation and Aggregation. Concretely, to facilitate personalized modeling and to avoid confusion, a channel selection based site contrastive representation module is employed to adaptively cluster intrasite embeddings and separate intersite ones. To effectively integrate the common knowledge from the global model with the unique knowledge from each local model, an adaptive aggregation module is applied for updating and initializing local models at the element level. Additionally, a weakly supervised objective function that leverages a multiscale tree energy loss and a gated CRF loss is employed to generate more precise pseudolabels and further boost the segmentation performance. Through extensive experiments on two distinct medical image segmentation tasks of different modalities, the proposed FedICRA demonstrates overwhelming performance over other stateoftheart personalized FL methods. Its performance even approaches that of fully supervised training on centralized data. Our code and data are available at httpsgithub.comllmirFedICRA.
PATMAT Person Aware Tuning of MaskAware Transformer for Face Inpainting ; Generative models such as StyleGAN2 and Stable Diffusion have achieved stateoftheart performance in computer vision tasks such as image synthesis, inpainting, and denoising. However, current generative models for face inpainting often fail to preserve fine facial details and the identity of the person, despite creating aesthetically convincing image structures and textures. In this work, we propose Person Aware Tuning PAT of MaskAware Transformer MAT for face inpainting, which addresses this issue. Our proposed method, PATMAT, effectively preserves identity by incorporating reference images of a subject and finetuning a MAT architecture trained on faces. By using 40 reference images, PATMAT creates anchor points in MAT's style module, and tunes the model using the fixed anchors to adapt the model to a new face identity. Moreover, PATMAT's use of multiple images per anchor during training allows the model to use fewer reference images than competing methods. We demonstrate that PATMAT outperforms stateoftheart models in terms of image quality, the preservation of personspecific details, and the identity of the subject. Our results suggest that PATMAT can be a promising approach for improving the quality of personalized face inpainting.
Contact Models in Robotics a Comparative Analysis ; Physics simulation is ubiquitous in robotics. Whether in modelbased approaches e.g., trajectory optimization, or modelfree algorithms e.g., reinforcement learning, physics simulators are a central component of modern control pipelines in robotics. Over the past decades, several robotic simulators have been developed, each with dedicated contact modeling assumptions and algorithmic solutions. In this article, we survey the main contact models and the associated numerical methods commonly used in robotics for simulating advanced robot motions involving contact interactions. In particular, we recall the physical laws underlying contacts and friction i.e., Signorini condition, Coulomb's law, and the maximum dissipation principle, and how they are transcribed in current simulators. For each physics engine, we expose their inherent physical relaxations along with their limitations due to the numerical techniques employed. Based on our study, we propose theoretically grounded quantitative criteria on which we build benchmarks assessing both the physical and computational aspects of simulation. We support our work with an opensource and efficient C implementation of the existing algorithmic variations. Our results demonstrate that some approximations or algorithms commonly used in robotics can severely widen the reality gap and impact target applications. We hope this work will help motivate the development of new contact models, contact solvers, and robotic simulators in general, at the root of recent progress in motion generation in robotics.
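The physical laws named in this survey, the Signorini condition and Coulomb's law, can be checked pointwise for a given contact state. A minimal sketch (function signatures and tolerances are illustrative):

```python
import numpy as np

def in_friction_cone(f_normal, f_tangent, mu):
    """Coulomb's law: the tangential force magnitude may not exceed
    the friction coefficient mu times the normal force."""
    return np.linalg.norm(f_tangent) <= mu * f_normal

def signorini_holds(gap, f_normal, tol=1e-9):
    """Signorini condition as a complementarity constraint:
    gap >= 0, normal force >= 0, and gap * force = 0
    (force only when in contact, no penetration otherwise)."""
    return gap >= -tol and f_normal >= -tol and abs(gap * f_normal) <= tol
```

Simulators differ mainly in how they relax or regularize these nonsmooth conditions to make the resulting complementarity problem tractable, which is exactly the source of the physical relaxations the article benchmarks.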
Assisting clinical practice with fuzzy probabilistic decision trees ; The need for fully humanunderstandable models is increasingly being recognised as a central theme in AI research. The acceptance of AI models to assist in decision making in sensitive domains will grow when these models are interpretable, and this trend towards interpretable models will be amplified by upcoming regulations. One of the killer applications of interpretable AI is medical practice, which can benefit from accurate decision support methodologies that inherently generate trust. In this work, we propose FPT, MedFP, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice. This approach is fully interpretable as it allows clinicians to generate, control and verify the entire diagnosis procedure; one of the methodology's strengths is the capability to decrease the frequency of misdiagnoses by providing an estimate of uncertainties and counterfactuals. Our approach is applied as a proofofconcept to two real medical scenarios classifying malignant thyroid nodules and predicting the risk of progression in chronic kidney disease patients. Our results show that probabilistic fuzzy decision trees can provide interpretable support to clinicians; furthermore, introducing fuzzy variables into the probabilistic model brings significant nuances that are lost when using the crisp thresholds set by traditional probabilistic decision trees. We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a userfriendly interface specifically designed for this purpose. Moreover, we discuss the interpretability of the FPT model.
Quantum information criteria for model selection in quantum state estimation ; Quantum state estimation or state tomography is an indispensable task in quantum information processing. Because full state tomography that determines all elements of the density matrix is computationally demanding, one usually takes the strategy of assuming a certain model of quantum states and identifying the model parameters. However, it is difficult to make a valid assumption given little prior knowledge on a quantum state of interest, and thus we need a reasonable model selection method for quantum state estimation. Actually, in the classical statistical estimation theory, several types of information criteria have been established and widely used in practice for appropriately choosing a classical statistical model. In this study, we propose quantum information criteria for evaluating the quality of the estimated quantum state in terms of the quantum relative entropy, which is a natural quantum analogue of the classical information criterion defined in terms of KullbackLeibler divergence. In particular, we derive two quantum information criteria depending on the type of estimator for the quantum relative entropy; one uses the loglikelihood and the other uses the classical shadow. The general role of information criteria is to predict the performance of an estimated model for unseen data, although it is a function of only sampled data; this generalization capability of the proposed quantum information criteria is evaluated in numerical simulations.
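The quantum relative entropy on which the proposed criteria are built can be computed directly for small density matrices. A minimal numerical sketch (the eigenvalue clipping is an implementation convenience to avoid log of zero, not part of the paper):

```python
import numpy as np

def quantum_relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr[rho (log rho - log sigma)],
    the quantum analogue of the KullbackLeibler divergence."""
    def logm(m):
        # Matrix logarithm of a Hermitian matrix via its eigendecomposition.
        w, v = np.linalg.eigh(m)
        w = np.clip(w, 1e-12, None)  # guard against zero eigenvalues
        return v @ np.diag(np.log(w)) @ v.conj().T
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))
```

As with the classical KL divergence, D(rho || sigma) is zero exactly when the two states coincide, which is what lets it play the role of the goodness-of-fit term in an information criterion.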
Improved Online Scheduling of Moldable Task Graphs under Common Speedup Models ; We consider the online scheduling problem of moldable task graphs on multiprocessor systems for minimizing the overall completion time or makespan. Moldable job scheduling has been widely studied in the literature, in particular when tasks have dependencies i.e., task graphs or when tasks are released onthefly i.e., online. However, few studies have focused on both i.e., online scheduling of moldable task graphs. In this paper, we design a new online scheduling algorithm for this problem and derive constant competitive ratios under several common yet realistic speedup models i.e., roofline, communication, Amdahl, and a general combination. These results improve the ones we have shown in the preliminary version of this paper. We also prove, for each speedup model, a lower bound on the competitiveness of any online list scheduling algorithm that allocates processors to a task based only on the task's parameters and not on its position in the graph. This lower bound matches exactly the competitive ratio of our algorithm for the roofline, communication and Amdahl's model, and is close to the ratio for the general model. Finally, we provide a lower bound on the competitive ratio of any deterministic online algorithm for the arbitrary speedup model, which is not constant but depends on the number of tasks in the longest path of the graph.
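The speedup models named in the abstract admit simple closed forms. A sketch of the roofline and Amdahl models plus one common form of the communication model (the exact overhead term for the communication model varies across papers, so treat it as an assumption):

```python
def amdahl_speedup(p, alpha):
    """Amdahl's law: a sequential fraction alpha caps speedup at 1/alpha."""
    return 1.0 / (alpha + (1.0 - alpha) / p)

def roofline_speedup(p, pmax):
    """Roofline model: linear speedup up to a task-specific processor cap."""
    return min(p, pmax)

def communication_speedup(p, w, c):
    """One common communication model: execution time w/p plus
    a communication overhead c*(p-1) that grows with the allocation."""
    return w / (w / p + c * (p - 1))
```

A moldable scheduler uses such a model to decide how many processors to allocate to each task, trading per-task speedup against the processors left for other ready tasks.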
Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics ; We propose a hybrid neural network NN and PDE approach for learning generalizable PDE dynamics from motion observations. Many NN approaches learn an endtoend model that implicitly models both the governing PDE and constitutive models or material models. Without explicit PDE knowledge, these approaches cannot guarantee physical correctness and have limited generalizability. We argue that the governing PDEs are often wellknown and should be explicitly enforced rather than learned. Instead, constitutive models are particularly suitable for learning due to their datafitting nature. To this end, we introduce a new framework termed Neural Constitutive Laws NCLaw, which utilizes a network architecture that strictly guarantees standard constitutive priors, including rotation equivariance and undeformed state equilibrium. We embed this network inside a differentiable simulation and train the model by minimizing a loss function based on the difference between the simulation and the motion observation. We validate NCLaw on various largedeformation dynamical systems, ranging from solids to fluids. After training on a single motion trajectory, our method generalizes to new geometries, initialboundary conditions, temporal ranges, and even multiphysics systems. On these extremely outofdistribution generalization tasks, NCLaw is ordersofmagnitude more accurate than previous NN approaches. Realworld experiments demonstrate our method's ability to learn constitutive laws from videos.
Deep statespace modeling for explainable representation, analysis, and generation of professional human poses ; The analysis of human movements has been extensively studied due to its wide variety of practical applications, such as humanrobot interaction, human learning applications, or clinical diagnosis. Nevertheless, the stateoftheart still faces scientific challenges when modeling human movements. First, new models must account for the stochasticity of human movement and the physical structure of the human body in order to accurately predict the evolution of fullbody motion descriptors over time. Second, while utilizing deep learning algorithms, their explainability in terms of body posture predictions needs to be improved as they lack comprehensible representations of human movement. This paper addresses these challenges by introducing three novel methods for creating explainable representations of human movement. In this study, human body movement is formulated as a statespace model adhering to the structure of the Gesture Operational Model GOM, whose parameters are estimated through the application of deep learning and statistical algorithms. The trained models are used for the fullbody dexterity analysis of expert professionals, in which dynamic associations between body joints are identified, and for generating artificially professional movements.
Scanpath Prediction in Panoramic Videos via Expected Code Length Minimization ; Predicting human scanpaths when exploring panoramic videos is a challenging task due to the spherical geometry and the multimodality of the input, and the inherent uncertainty and diversity of the output. Most previous methods fail to give a complete treatment of these characteristics, and thus are prone to errors. In this paper, we present a simple new criterion for scanpath prediction based on principles from lossy data compression. This criterion suggests minimizing the expected code length of quantized scanpaths in a training set, which corresponds to fitting a discrete conditional probability model via maximum likelihood. Specifically, the probability model is conditioned on two modalities a viewport sequence as the deformationreduced visual input and a set of relative historical scanpaths projected onto respective viewports as the aligned path input. The probability model is parameterized by a product of discretized Gaussian mixture models to capture the uncertainty and the diversity of scanpaths from different users. Most importantly, the training of the probability model does not rely on the specification of groundtruth scanpaths for imitation learning. We also introduce a proportionalintegralderivative PID controllerbased sampler to generate realistic humanlike scanpaths from the learned probability model. Experimental results demonstrate that our method consistently produces better quantitative scanpath results in terms of prediction accuracy by comparing to the assumed groundtruths and perceptual realism through machine discrimination over a wide range of prediction horizons. We additionally verify the perceptual realism improvement via a formal psychophysical experiment and the generalization improvement on several unseen panoramic video datasets.
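The criterion equates minimizing the expected code length of quantized scanpaths with maximum-likelihood fitting of a discretized mixture model. A one-dimensional sketch of that correspondence (the paper's actual model conditions on viewports and historical paths, which is omitted here):

```python
import numpy as np

def mixture_code_length(x, weights, means, stds, bin_width=0.01):
    """Code length in bits of a sample x under a discretized Gaussian mixture.

    Discretizing the density with a small bin width turns it into a
    probability mass, so minimizing the expected code length -log2 p(x)
    is equivalent to maximum-likelihood fitting of the mixture.
    """
    densities = [w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                 for w, m, s in zip(weights, means, stds)]
    prob = max(sum(densities) * bin_width, 1e-300)
    return -np.log2(prob)
```

Samples far from the mixture modes receive long codes, so training pushes the predicted distribution toward regions that real scanpaths actually visit.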
Web Content Filtering through knowledge distillation of Large Language Models ; We introduce a stateoftheart approach for URL categorization that leverages the power of Large Language Models LLMs to address the primary objectives of web content filtering safeguarding organizations from legal and ethical risks, limiting access to highrisk or suspicious websites, and fostering a secure and professional work environment. Our method utilizes LLMs to generate accurate classifications and then employs established knowledge distillation techniques to create smaller, more specialized student models tailored for web content filtering. Distillation results in a student model with a 9 accuracy rate improvement in classifying websites, sourced from customer telemetry data collected by a large security vendor, into 30 distinct content categories based on their URLs, surpassing the current stateoftheart approach. Our student model matches the performance of the teacher LLM with 175 times fewer parameters, allowing the model to be used for inline scanning of large volumes of URLs, and requires 3 orders of magnitude less manually labeled training data than the current stateoftheart approach. Depending on the specific use case, the output generated by our approach can either be directly returned or employed as a prefilter for more resourceintensive operations involving website images or HTML.
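Knowledge distillation typically minimizes the divergence between temperature-softened teacher and student output distributions; the abstract does not specify the exact loss, so the following is a generic sketch rather than the paper's formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; larger T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    the standard soft-label objective in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

The soft teacher probabilities carry inter-class similarity information that hard labels discard, which is why a small student can approach teacher accuracy with far less labeled data.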
Sequential Experimental Design for Spectral Measurement Active Learning Using a Parametric Model ; In this study, we demonstrate a sequential experimental design for spectral measurements by active learning using parametric models as predictors. In spectral measurements, it is necessary to reduce the measurement time because of sample fragility and high energy costs. To improve the efficiency of experiments, sequential experimental designs are proposed, in which the subsequent measurement is designed by active learning using the data obtained before the measurement. Conventionally, parametric models are employed in data analysis; when employed for active learning, they are expected to afford a sequential experimental design that improves the accuracy of data analysis. However, due to the complexity of the formulas, a sequential experimental design using general parametric models has not been realized. Therefore, we applied Bayesian inferencebased data analysis using the exchange Monte Carlo method to realize a sequential experimental design with general parametric models. In this study, we evaluated the effectiveness of the proposed method by applying it to Bayesian spectral deconvolution and Bayesian Hamiltonian selection in Xray photoelectron spectroscopy. Using numerical experiments with artificial data, we demonstrated that the proposed method improves the accuracy of model selection and parameter estimation while reducing the measurement time compared with the results achieved without active learning or with active learning using the Gaussian process regression.
Efficient Computation of HighDimensional Penalized Generalized Linear Mixed Models by Latent Factor Modeling of the Random Effects ; Modern biomedical datasets are increasingly high dimensional and exhibit complex correlation structures. Generalized Linear Mixed Models GLMMs have long been employed to account for such dependencies. However, proper specification of the fixed and random effects in GLMMs is increasingly difficult in high dimensions, and computational complexity grows with increasing dimension of the random effects. We present a novel reformulation of the GLMM using a factor model decomposition of the random effects, enabling scalable computation of GLMMs in high dimensions by reducing the latent space from a large number of random effects to a smaller set of latent factors. We also extend our prior work to estimate model parameters using a modified Monte Carlo Expectation Conditional Minimization algorithm, allowing us to perform variable selection on both the fixed and random effects simultaneously. We show through simulation that through this factor model decomposition, our method can fit high dimensional penalized GLMMs faster than comparable methods and more easily scale to larger dimensions not previously seen in existing approaches.
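The core computational idea, replacing a large number d of correlated random effects with a small number q of latent factors, can be sketched in a few lines (dimensions and the loading scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, q, n = 100, 3, 500  # many random effects, few latent factors, n groups
Lam = rng.standard_normal((d, q)) * 0.3  # factor loading matrix

# Instead of integrating over d correlated random effects per group,
# sample q independent latent factors and map them up: b = Lam @ f.
f = rng.standard_normal((q, n))
b = Lam @ f  # implied random effects, with covariance Lam @ Lam.T
cov_implied = Lam @ Lam.T
```

The latent space the estimation algorithm must integrate over shrinks from dimension d to dimension q, which is what makes the Monte Carlo EM step tractable in high dimensions.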
Probabilistic forecast of nonlinear dynamical systems with uncertainty quantification ; Datadriven modeling is useful for reconstructing nonlinear dynamical systems when the underlying process is unknown or too expensive to compute. Having reliable uncertainty assessment of the forecast enables tools to be deployed to predict new scenarios unobserved before. In this work, we first extend parallel partial Gaussian processes for predicting the vectorvalued transition function that links the observations between the current and next time points, and quantify the uncertainty of predictions by posterior sampling. Second, we show the equivalence between the dynamic mode decomposition and the maximum likelihood estimator of the linear mapping matrix in the linear state space model. The connection provides a data generating model of dynamic mode decomposition and thus, uncertainty of predictions can be obtained. Furthermore, we draw close connections between different datadriven models for approximating nonlinear dynamics, through a unified view of data generating models. We study two numerical examples, where the inputs of the dynamics are assumed to be known in the first example and the inputs are unknown in the second example. The examples indicate that uncertainty of forecast can be properly quantified, whereas model or input misspecification can degrade the accuracy of uncertainty quantification.
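The stated equivalence between dynamic mode decomposition and the maximum likelihood estimator of the linear state-space map can be checked on noiseless synthetic data, where the least-squares solution recovers the true matrix exactly:

```python
import numpy as np

# Snapshot pairs (x_t, x_{t+1}) generated by a known linear map A_true.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
X = rng.standard_normal((2, 50))   # states at time t
Y = A_true @ X                     # states at time t + 1

# DMD estimate of the linear mapping: the least-squares solution Y X^+,
# which is also the MLE under Gaussian noise in the linear state space model.
A_dmd = Y @ np.linalg.pinv(X)
```

With observation noise, the same estimator is the posterior mode under the data generating model described in the abstract, which is what allows uncertainty of the forecast to be quantified.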
A Comparative Study of Methods for Estimating Conditional Shapley Values and When to Use Them ; Shapley values originated in cooperative game theory but are extensively used today as a modelagnostic explanation framework to explain predictions made by complex machine learning models in the industry and academia. There are several algorithmic approaches for computing different versions of Shapley value explanations. Here, we focus on conditional Shapley values for predictive models fitted to tabular data. Estimating precise conditional Shapley values is difficult as they require the estimation of nontrivial conditional expectations. In this article, we develop new methods, extend earlier proposed approaches, and systematize the new refined and existing methods into different method classes for comparison and evaluation. The method classes use either Monte Carlo integration or regression to model the conditional expectations. We conduct extensive simulation studies to evaluate how precisely the different method classes estimate the conditional expectations, and thereby the conditional Shapley values, for different setups. We also apply the methods to several realworld data experiments and provide recommendations for when to use the different method classes and approaches. Roughly speaking, we recommend using parametric methods when we can specify the data distribution almost correctly, as they generally produce the most accurate Shapley value explanations. When the distribution is unknown, both generative methods and regression models with a similar form as the underlying predictive model are good and stable options. Regressionbased methods are often slow to train but produce the Shapley value explanations quickly once trained. The reverse is true for Monte Carlobased methods, making the different methods appropriate in different practical situations.
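A minimal sketch of the Monte Carlo route to Shapley values under a feature-independence assumption, i.e. marginal rather than conditional expectations; the conditional estimators compared in the article differ precisely in how they replace this naive imputation (function and variable names are illustrative):

```python
import numpy as np

def mc_shapley(predict, x, background, n_perm=200, rng=None):
    """Monte Carlo Shapley values assuming feature independence.

    Absent features are imputed by sampling rows from `background`,
    which is only valid when the features are (close to) independent;
    conditional Shapley values require sampling from the conditional
    distribution of the absent features given the present ones.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()
        prev = predict(z)
        for j in order:
            z[j] = x[j]  # reveal feature j in a random order
            cur = predict(z)
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm
```

For a linear model with a zero background, this recovers the exact attributions, while for dependent features the marginal imputation is what the conditional methods in the article are designed to correct.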
Learning the Visualness of Text Using Large VisionLanguage Models ; Visual text evokes an image in a person's mind, while nonvisual text fails to do so. A method to automatically detect visualness in text will unlock the ability to augment text with relevant images, as neural texttoimage generation and retrieval models operate on the implicit assumption that the input text is visual in nature. We curate a dataset of 3,620 English sentences and their visualness scores provided by multiple human annotators. Additionally, we use documents that contain text and visual assets to create a distantly supervised corpus of document text and associated images. We also propose a finetuning strategy that adapts large visionlanguage models like CLIP that assume a onetoone correspondence between text and image to the task of scoring text visualness from text input alone. Our strategy involves modifying the model's contrastive learning objective to map text identified as nonvisual to a common NULL image while matching visual text to their corresponding images in the document. We evaluate the proposed approach on its ability to i classify visual and nonvisual text accurately, and ii attend over words that are identified as visual in psycholinguistic studies. Empirical evaluation indicates that our approach performs better than several heuristics and baseline models for the proposed task. Furthermore, to highlight the importance of modeling the visualness of text, we conduct qualitative analyses of texttoimage generation systems like DALLE.
LIMA Less Is More for Alignment ; Large language models are trained in two stages 1 unsupervised pretraining from raw text, to learn generalpurpose representations, and 2 large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model finetuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT4 in 43 of cases; this statistic is as high as 58 when compared to Bard and 65 versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.
DiffUCDUnsupervised Hyperspectral Image Change Detection with Semantic Correlation Diffusion Model ; Hyperspectral image change detection HSICD has emerged as a crucial research area in remote sensing due to its ability to detect subtle changes on the earth's surface. Recently, denoising diffusion probabilistic models DDPM have demonstrated remarkable performance in the generative domain. Apart from their image generation capability, the denoising process in diffusion models can comprehensively account for the semantic correlation of spectralspatial features in HSI, resulting in the retrieval of semantically relevant features in the original image. In this work, we extend the diffusion model's application to the HSICD field and propose a novel unsupervised HSICD with semantic correlation diffusion model DiffUCD. Specifically, the semantic correlation diffusion model SCDM leverages abundant unlabeled samples and fully accounts for the semantic correlation of spectralspatial features, which mitigates pseudo change between multitemporal images arising from inconsistent imaging conditions. Besides, objects with the same semantic concept at the same spatial location may exhibit inconsistent spectral signatures at different times, resulting in pseudo change. To address this problem, we propose a crosstemporal contrastive learning CTCL mechanism that aligns the spectral feature representations of unchanged samples. By doing so, the spectral difference invariant features caused by environmental changes can be obtained. Experiments conducted on three publicly available datasets demonstrate that the proposed method outperforms the other stateoftheart unsupervised methods in terms of Overall Accuracy OA, Kappa Coefficient KC, and F1 scores, achieving improvements of approximately 3.95, 8.13, and 4.45, respectively. Notably, our method can achieve comparable results to those fully supervised methods requiring numerous annotated samples.
VLAB Enhancing Video Language Pretraining by Feature Adapting and Blending ; Largescale imagetext contrastive pretraining models, such as CLIP, have been demonstrated to effectively learn highquality multimodal representations. However, there is limited research on learning videotext representations for general video multimodal tasks based on these powerful features. Towards this goal, we propose a novel videotext pretraining method dubbed VLAB Video Language pretraining by feature Adapting and Blending, which transfers CLIP representations to video pretraining tasks and develops unified video multimodal models for a wide range of videotext tasks. Specifically, VLAB is founded on two key strategies feature adapting and feature blending. In the former, we introduce a new video adapter module to address CLIP's deficiency in modeling temporal information and extend the model's capability to encompass both contrastive and generative tasks. In the latter, we propose an endtoend training method that further enhances the model's performance by exploiting the complementarity of image and video features. We validate the effectiveness and versatility of VLAB through extensive experiments on highly competitive video multimodal tasks, including video text retrieval, video captioning, and video question answering. Remarkably, VLAB outperforms competing methods significantly and sets new records in video question answering on MSRVTT, MSVD, and TGIF datasets. It achieves an accuracy of 49.6, 61.0, and 79.0, respectively. Codes and models will be released.
Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks ; Large Language Models (LLMs) have shown promising performance in knowledge-intensive reasoning tasks that require a compound understanding of knowledge. However, deployment of LLMs in real-world applications can be challenging due to their high computational requirements and concerns about data privacy. Previous studies have focused on building task-specific small language models (LMs) by fine-tuning them with labeled data or by distilling LLMs. However, these approaches are ill-suited for knowledge-intensive reasoning tasks due to the limited capacity of small LMs to memorize the knowledge required. Motivated by our theoretical analysis of memorization, we propose Knowledge-Augmented Reasoning Distillation (KARD), a novel method that fine-tunes small LMs to generate rationales with augmented knowledge retrieved from an external knowledge base. Moreover, we further propose a neural reranker to obtain documents relevant to rationale generation. We empirically show that KARD significantly improves the performance of small T5 and Flan-T5 models on the challenging knowledge-intensive reasoning datasets MedQA-USMLE and StrategyQA. Notably, our method enables 250M models to achieve superior performance against fine-tuned 3B models, which have 12 times more parameters, on both the MedQA-USMLE and StrategyQA benchmarks.
Understanding Breast Cancer Survival Using Causality and Language Models on Multi-omics Data ; The need for more usable and explainable machine learning models in healthcare increases the importance of developing and utilizing causal discovery algorithms, which aim to discover causal relations by analyzing observational data. Explainable approaches aid clinicians and biologists in predicting the prognosis of diseases and suggesting proper treatments. However, very little research has been conducted at the crossroads between causal discovery, genomics, and breast cancer, and we aim to bridge this gap. Moreover, evaluation of causal discovery methods on real data is in general notoriously difficult because ground-truth causal relations are usually unknown; accordingly, in this paper, we also propose to address the evaluation problem with large language models. In particular, we exploit suitable causal discovery algorithms to investigate how various perturbations in the genome can affect the survival of patients diagnosed with breast cancer. We use three main causal discovery algorithms: PC, Greedy Equivalence Search (GES), and a Generalized Precision Matrix-based method. We experiment with a subset of The Cancer Genome Atlas, which contains information about mutations, copy number variations, protein levels, and gene expression for 705 breast cancer patients. Our findings reveal important factors related to the vital status of patients using causal discovery algorithms. However, the reliability of these results remains a concern in the medical domain. Accordingly, as another contribution of this work, the results are validated through language models trained on biomedical literature, such as BlueBERT, and other large language models trained on medical corpora. Our results demonstrate the proper utilization of causal discovery algorithms and language models for revealing reliable causal relations for clinical applications.
Day-night transport induced chemistry and clouds on WASP-39b I: Gas-phase composition ; JWST has recently detected the first robust photochemical product on an exoplanet: sulfur dioxide (SO2) on WASP-39b (Rustamkulov et al. 2023; Alderson et al. 2023; Tsai et al. 2023b). The data from the NIRISS instrument also reveal signs of partial coverage of clouds (Feinstein et al. 2023). Most previous studies have focused on interpreting spectral data with 1D models. To explore how the chemical species and cloud particles are altered by global circulation, we applied a 2D photochemical model and a 2D microphysical cloud model separately to post-process the thermal and dynamical structures simulated by a 3D general circulation model (GCM) of WASP-39b. We found that SO2 produced by photochemistry on the dayside can be transported to the nightside owing to efficient replenishment by horizontal transport. The morning-evening limb differences in methane (CH4) abundances predicted by the 1D models disappeared after horizontal transport was included. Similarly, the inclusion of horizontal transport also reduced the limb differences in SO2. Our modeling results suggest that the fast zonal wind results in minimal or negligible limb asymmetry in composition. Based on the synthetic spectra generated by our 2D atmosphere simulations, we propose that observing SO2 absorption in the emission spectra of WASP-39b at different phases may offer opportunities to probe the horizontal quenching process of photochemical products. We focus on the gas-phase chemistry in this paper and leave the results regarding clouds to the subsequent paper in the series.
Cross-Domain Car Detection Model with Integrated Convolutional Block Attention Mechanism ; Car detection, particularly through camera vision, has become a major focus in the field of computer vision and has gained widespread adoption. While current car detection systems are capable of good detection, reliable detection can still be challenging due to factors such as proximity between cars, light intensity, and environmental visibility. To address these issues, we propose a cross-domain car detection model with an integrated convolutional block attention mechanism (CDMA), which we apply to car recognition for autonomous driving and other areas. CDMA includes several novelties: (1) building a complete cross-domain target detection framework; (2) developing an unpaired target-domain image generation module with an integrated convolutional attention mechanism that specifically emphasizes car headlight features; (3) adopting Generalized Intersection over Union (GIOU) as the loss function of the target detection framework; (4) designing an object detection model integrated with a two-headed Convolutional Block Attention Module (CBAM); (5) utilizing an effective data enhancement method. To evaluate the model's effectiveness, we applied a resolution-reduction process to the data in the SSLAD dataset and used it as the benchmark dataset for our task. Experimental results show that the performance of the cross-domain car target detection model improves by 40% over the model without our framework, and our improvements have a significant impact on cross-domain car recognition.
TransAct: Transformer-based Realtime User Action Model for Recommendation at Pinterest ; Sequential models that encode user activity for next action prediction have become a popular design choice for building web-scale personalized recommendation systems. Traditional methods of sequential recommendation either utilize end-to-end learning on real-time user actions, or learn user representations separately in an offline batch-generated manner. This paper (1) presents Pinterest's ranking architecture for Homefeed, our personalized recommendation product and the largest engagement surface; (2) proposes TransAct, a sequential model that extracts users' short-term preferences from their real-time activities; (3) describes our hybrid approach to ranking, which combines end-to-end sequential modeling via TransAct with batch-generated user embeddings. The hybrid approach allows us to combine the advantages of responsiveness from learning directly on real-time user activity with the cost-effectiveness of batch user representations learned over a longer time period. We describe the results of ablation studies, the challenges we faced during productionization, and the outcome of an online A/B experiment, which validates the effectiveness of our hybrid ranking model. We further demonstrate the effectiveness of TransAct on other surfaces such as contextual recommendations and search. Our model has been deployed to production in Homefeed, Related Pins, Notifications, and Search at Pinterest.
General SIS diffusion process with indirect spreading pathways on a hypergraph ; While conventional graphs only characterize pairwise interactions, higher-order networks (hypergraphs, simplicial complexes) capture multi-body interactions, a potentially more suitable modeling framework for complex real systems. However, the introduction of higher-order interactions brings new challenges for the rigorous analysis of such systems on a higher-order network. In this paper, we study a series of SIS-type diffusion processes with both indirect and direct pathways on a directed hypergraph. In a concrete case, the model we propose is based on a specific (polynomial) choice of interaction function, describing how several agents influence each other when they are in a hyperedge. Then, with the same choice of interaction function, we further extend the system and propose a bi-virus competing model on a directed hypergraph by coupling two single-virus models together. Finally, the most general model in this paper considers an abstract interaction function under single-virus and bi-virus settings. For the single-virus model, we provide results regarding the healthy state and the endemic equilibrium. For the bi-virus setting, we further analyze the existence and stability of the healthy state, the dominant endemic equilibria, and the coexisting equilibria. All theoretical results are supported by numerical examples.
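To see how a polynomial interaction function changes SIS behavior, here is a deliberately simplified scalar mean-field sketch (the paper works on directed hypergraphs; the rates beta1, beta2, delta below are hypothetical): a pairwise infection term plus a quadratic "hyperedge" term.

```python
def simulate_sis(beta1, beta2, delta, x0=0.01, dt=0.01, steps=50_000):
    """Forward-Euler integration of a mean-field SIS with a pairwise
    infection rate (beta1), a quadratic multi-body term (beta2) standing
    in for hyperedge-mediated spreading, and recovery rate delta."""
    x = x0
    for _ in range(steps):
        dx = -delta * x + (1.0 - x) * (beta1 * x + beta2 * x * x)
        x += dt * dx
    return x

# Pairwise-only case: endemic equilibrium at 1 - delta/beta1 = 0.75.
x_star = simulate_sis(beta1=0.4, beta2=0.0, delta=0.1)
# A higher-order infection term raises the endemic infection level.
x_hyper = simulate_sis(beta1=0.4, beta2=0.3, delta=0.1)
```

The pairwise-only fixed point follows from setting -delta*x + beta1*(1-x)*x = 0; the quadratic term shifts the equilibrium upward, which is the qualitative effect of the indirect pathways studied in the paper.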
STEVE-1: A Generative Model for Text-to-Behavior in Minecraft ; Constructing AI models that respond to text instructions is challenging, especially for sequential decision-making tasks. This work introduces an instruction-tuned Video Pretraining (VPT) model for Minecraft called STEVE-1, demonstrating that the unCLIP approach, utilized in DALL-E 2, is also effective for creating instruction-following sequential decision-making agents. STEVE-1 is trained in two steps: adapting the pretrained VPT model to follow commands in MineCLIP's latent space, then training a prior to predict latent codes from text. This allows us to fine-tune VPT through self-supervised behavioral cloning and hindsight relabeling, bypassing the need for costly human text annotations. By leveraging pretrained models like VPT and MineCLIP and employing best practices from text-conditioned image generation, STEVE-1 costs just $60 to train and can follow a wide range of short-horizon open-ended text and visual instructions in Minecraft. STEVE-1 sets a new bar for open-ended instruction following in Minecraft with low-level controls (mouse and keyboard) and raw pixel inputs, far outperforming previous baselines. We provide experimental evidence highlighting key factors for downstream performance, including pretraining, classifier-free guidance, and data scaling. All resources, including our model weights, training scripts, and evaluation tools, are made available for further research.
PeFLL: A Lifelong Learning Approach to Personalized Federated Learning ; Personalized federated learning (pFL) has emerged as a popular approach to dealing with the challenge of statistical heterogeneity between the data distributions of the participating clients. Instead of learning a single global model, pFL aims to learn an individual model for each client while still making use of the data available at other clients. In this work, we present PeFLL, a new pFL approach rooted in lifelong learning that performs well not only on clients present during its training phase, but also on any that may emerge in the future. PeFLL learns to output client-specific models by jointly training an embedding network and a hypernetwork. The embedding network learns to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork learns a mapping from this latent space to the space of possible client models. We demonstrate experimentally that PeFLL produces models of superior accuracy compared to previous methods, especially for clients not seen during training, and that it scales well to large numbers of clients. Moreover, generating a personalized model for a new client is efficient, as no additional fine-tuning or optimization is required by either the client or the server. We also present theoretical results supporting PeFLL in the form of a new PAC-Bayesian generalization bound for lifelong learning, and we prove the convergence of our proposed optimization procedure.
Single-Stage Visual Relationship Learning using Conditional Queries ; Research in scene graph generation (SGG) usually considers two-stage models, that is, detecting a set of entities, followed by combining them and labeling all possible relationships. While showing promising results, the pipeline structure induces large parameter and computation overhead, and typically hinders end-to-end optimization. To address this, recent research attempts to train single-stage models that are computationally efficient. With the advent of DETR, a set-based detection model, one-stage models attempt to predict a set of subject-predicate-object triplets directly in a single shot. However, SGG is inherently a multi-task learning problem that requires modeling entity and predicate distributions simultaneously. In this paper, we propose Transformers with conditional queries for SGG, namely TraCQ, with a new formulation for SGG that avoids the multi-task learning problem and the combinatorial entity pair distribution. We employ a DETR-based encoder-decoder design and leverage conditional queries to significantly reduce the entity label space as well, which leads to 20% fewer parameters compared to state-of-the-art single-stage models. Experimental results show that TraCQ not only outperforms existing single-stage scene graph generation methods, it also beats many state-of-the-art two-stage methods on the Visual Genome dataset, yet is capable of end-to-end training and faster inference.
SqueezeLLM: Dense-and-Sparse Quantization ; Generative Large Language Models (LLMs) have demonstrated remarkable results for a wide range of tasks. However, deploying these models for inference has been a significant challenge due to their unprecedented resource requirements. This has forced existing deployment frameworks to use multi-GPU inference pipelines, which are often complex and costly, or to use smaller and less performant models. In this work, we demonstrate that the main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, specifically for single-batch inference. While quantization has emerged as a promising solution by representing model weights with reduced precision, previous efforts have often resulted in notable performance degradation. To address this, we introduce SqueezeLLM, a post-training quantization framework that not only enables lossless compression to ultra-low precisions of up to 3 bits, but also achieves higher quantization performance under the same memory constraint. Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition, which stores outliers and sensitive weight values in an efficient sparse format. When applied to the LLaMA models, our 3-bit quantization significantly reduces the perplexity gap from the FP16 baseline by up to 2.1x compared to the state-of-the-art methods with the same memory requirement. Furthermore, when deployed on an A6000 GPU, our quantized models achieve up to a 2.3x speedup compared to the baseline. Our code is open-sourced and available online.
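A minimal numpy sketch of the dense-and-sparse idea: pull the few large-magnitude outliers into a sparse full-precision component so the remaining dense weights span a much smaller range before quantizing to 3 bits. Plain round-to-nearest uniform quantization stands in for the paper's sensitivity-based non-uniform scheme, and the outlier fraction, matrix shape, and thresholds are purely illustrative.

```python
import numpy as np

def uniform_quantize(W, bits):
    """Round-to-nearest uniform quantization over the full range of W."""
    levels = 2 ** bits - 1
    lo, hi = W.min(), W.max()
    scale = (hi - lo) / levels
    return np.round((W - lo) / scale) * scale + lo

def dense_and_sparse(W, outlier_frac=0.005, bits=3):
    """Split W into sparse FP16 outliers plus a low-bit dense remainder."""
    cut = np.quantile(np.abs(W), 1.0 - outlier_frac)
    mask = np.abs(W) > cut
    sparse = np.where(mask, W, 0.0).astype(np.float16)   # few large outliers
    dense = uniform_quantize(np.where(mask, 0.0, W), bits)
    return dense + sparse.astype(W.dtype)                # reconstructed weights

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256)) * 0.02
W[rng.random(W.shape) < 0.001] *= 50     # inject a handful of heavy outliers

err_naive = np.linalg.norm(W - uniform_quantize(W, 3)) / np.linalg.norm(W)
err_ds = np.linalg.norm(W - dense_and_sparse(W)) / np.linalg.norm(W)
# Isolating outliers shrinks the dense range, cutting the quantization error.
```

The outliers stretch the naive quantization grid so badly that most weights land far from any level; removing them first is the mechanism behind the decomposition, independent of which non-uniform codebook is used for the dense part.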
PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators ; Due to the light absorption and scattering induced by the water medium, underwater images usually suffer from degradation problems such as low contrast, color distortion, and blurred details, which aggravate the difficulty of downstream underwater understanding tasks. Therefore, obtaining clear and visually pleasant images has become a common concern, and the task of underwater image enhancement (UIE) has emerged accordingly. Among existing UIE methods, Generative Adversarial Network (GAN)-based methods perform well in visual aesthetics, while physical model-based methods have better scene adaptability. Inheriting the advantages of the above two types of models, we propose a physical model-guided GAN model for UIE in this paper, referred to as PUGAN. The entire network is under the GAN architecture. On the one hand, we design a Parameters Estimation subnetwork (Par-subnet) to learn the parameters for physical model inversion, and use the generated color enhancement image as auxiliary information for the Two-Stream Interaction Enhancement subnetwork (TSIE-subnet). Meanwhile, we design a Degradation Quantization (DQ) module in the TSIE-subnet to quantize scene degradation, thereby achieving reinforced enhancement of key regions. On the other hand, we design Dual-Discriminators for the style-content adversarial constraint, promoting the authenticity and visual aesthetics of the results. Extensive experiments on three benchmark datasets demonstrate that our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
Brane Nucleation in Supersymmetric Models ; This paper explores the process of vacuum decay in supersymmetric models related to flux compactifications. In particular, we describe these instabilities within supersymmetric Lagrangians for a single three-form multiplet. This multiplet combines scalar fields, representing the moduli fields in four dimensions, with 3-form fields that influence the potential for these moduli via the integer flux of their associated 4-form field strength. Furthermore, using supersymmetry as a guide, we obtain the form of the couplings of these fields to the membranes that act as sources for the 3-form potentials. Adding small supersymmetry-breaking terms to these Lagrangians, one can obtain instanton solutions describing the decay of the vacua in these models by the formation of a membrane bubble. These instantons combine the usual Coleman-de Luccia and Brown-Teitelboim formalisms in a single unified model. We study simple numerical examples of theories with and without gravity in this new framework and generalize known Euclidean methods to accommodate the simultaneous inclusion of scalar fields and charged membranes in these instanton solutions. Moreover, we show explicitly in these examples how one recovers the static supersymmetric solutions in the limiting case where the supersymmetry-breaking terms vanish. In this limit, the bubble becomes infinite and flat and represents a hybrid between the usual supersymmetric domain walls of field theory models and the brane solutions interpolating between the supersymmetric vacua; a sort of dressed supermembrane BPS solution. Finally, we briefly comment on the implications of these solutions for cosmological models based on the String Theory Landscape, where these types of 4d effective theories could be relevant in inflationary scenarios.
Semi-Supervised Learning for Multi-Label Cardiovascular Diseases Prediction: A Multi-Dataset Study ; Electrocardiography (ECG) is a non-invasive tool for predicting cardiovascular diseases (CVDs). Current ECG-based diagnosis systems show promising performance owing to the rapid development of deep learning techniques. However, the label scarcity problem, the co-occurrence of multiple CVDs, and poor performance on unseen datasets greatly hinder the widespread application of deep learning-based models. Addressing them in a unified framework remains a significant challenge. To this end, we propose a multi-label semi-supervised model (ECGMatch) to recognize multiple CVDs simultaneously with limited supervision. In ECGMatch, an ECGAugment module is developed for weak and strong ECG data augmentation, which generates diverse samples for model training. Subsequently, a hyperparameter-efficient framework with neighbor agreement modeling and knowledge distillation is designed for pseudo-label generation and refinement, which mitigates the label scarcity problem. Finally, a label correlation alignment module is proposed to capture the co-occurrence information of different CVDs within labeled samples and propagate this information to unlabeled samples. Extensive experiments on four datasets and three protocols demonstrate the effectiveness and stability of the proposed model, especially on unseen datasets. As such, this model can pave the way for diagnostic systems that achieve robust performance on multi-label CVDs prediction with limited supervision.
Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation ; Spatial-temporal graph learning has emerged as a promising solution for modeling structured spatial-temporal data and learning region representations for various urban sensing tasks such as crime forecasting and traffic flow prediction. However, most existing models are vulnerable to the quality of the generated region graph due to the inaccurate graph-structured information aggregation schema. The ubiquitous spatial-temporal data noise and incompleteness in real-life scenarios pose challenges in generating high-quality region representations. To address this challenge, we propose a new spatial-temporal graph learning model (GraphST) for enabling effective self-supervised learning. Our proposed model is an adversarial contrastive learning paradigm that automates the distillation of crucial multi-view self-supervised information for robust spatial-temporal graph augmentation. We empower GraphST to adaptively identify hard samples for better self-supervision, enhancing the representation discrimination ability and robustness. In addition, we introduce a cross-view contrastive learning paradigm to model the interdependencies across view-specific region representations and preserve the underlying relation heterogeneity. We demonstrate the superiority of our proposed GraphST method in various spatial-temporal prediction tasks on real-life datasets. We release our model implementation at https://github.com/HKUDS/GraphST.
Wormhole solutions in f(R,Lm) gravity ; In this work, we intend to explore wormhole geometries in the framework of f(R,Lm) gravity. We derive the field equations for a generic f(R,Lm) function by assuming the static and spherically symmetric Morris-Thorne wormhole metric. Then we consider two non-linear f(R,Lm) models, specifically, f(R,Lm) = R/2 + Lm^alpha and f(R,Lm) = R/2 + (1 + lambda R) Lm, where alpha and lambda are free model parameters. We obtain the wormhole solutions by assuming three cases, namely, a linear barotropic EoS, an anisotropic EoS, and an isotropic EoS, corresponding to model I. We observe that for both the barotropic and anisotropic cases, the corresponding wormhole solutions obey the flaring-out condition under an asymptotic background, while for the isotropic case, the shape function does not follow the flatness condition. Also, we find that the null energy condition exhibits negative behavior in the vicinity of the throat. Further, we consider two different shape functions to investigate the behavior of model II. We find some constraints on the model parameters for which the null energy condition is violated. Finally, we employ the volume integral quantifier to calculate the amount of exotic matter required near the wormhole throat for both models. We conclude that this modification of standard GR can efficiently minimize the use of exotic matter and provide stable traversable wormhole solutions.
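For context, the Morris-Thorne metric and the conditions named above can be written out in their standard textbook form (symbols conventional, not taken verbatim from the paper):

```latex
ds^2 = -e^{2\Phi(r)}\,dt^2
     + \left(1 - \frac{b(r)}{r}\right)^{-1} dr^2
     + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right),
\qquad
b(r_0) = r_0, \quad b'(r_0) < 1, \quad \frac{b(r)}{r} \xrightarrow[r\to\infty]{} 0,
\qquad
\rho + p_r \geq 0 \ \ \text{(NEC)}.
```

Here b(r) is the shape function and r_0 the throat radius; b(r_0) = r_0 with b'(r_0) < 1 is the flaring-out condition, b(r)/r -> 0 enforces asymptotic flatness, and the violation of the radial null energy condition near the throat is precisely the exotic-matter content that the volume integral quantifier measures.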
Eliminating Lipschitz Singularities in Diffusion Models ; Diffusion models, which employ stochastic differential equations to sample images through integrals, have emerged as a dominant class of generative models. However, the rationality of the diffusion process itself receives limited attention, leaving open the question of whether the problem is well-posed and well-conditioned. In this paper, we uncover a vexing propensity of diffusion models: they frequently exhibit infinite Lipschitz constants near the zero point of timesteps. This poses a threat to the stability and accuracy of the diffusion process, which relies on integral operations. We provide a comprehensive evaluation of the issue from both theoretical and empirical perspectives. To address this challenge, we propose a novel approach, dubbed E-TSDM, which eliminates the Lipschitz singularity of the diffusion model near zero. Remarkably, our technique yields a substantial improvement in performance, e.g., on the high-resolution FFHQ dataset (256x256). Moreover, as a by-product of our method, we manage to achieve a dramatic reduction in the Fréchet Inception Distance of other acceleration methods that rely on the network's Lipschitz constant, including DDIM and DPM-Solver, by over 33%. We conduct extensive experiments on diverse datasets to validate our theory and method. Our work not only advances the understanding of the general diffusion process, but also provides insights for the design of diffusion models.
Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models ; Video Question Answering (VideoQA) has been significantly advanced by the scaling of recent Large Language Models (LLMs). The key idea is to convert the visual information into the language feature space so that the capacity of LLMs can be fully exploited. Existing VideoQA methods typically take one of two paradigms: (1) learning cross-modal alignment, or (2) using an off-the-shelf captioning model to describe the visual data. However, the first design needs costly training on much extra multimodal data, whilst the second suffers from limited domain generalization. To address these limitations, a simple yet effective Retrieving-to-Answer (R2A) framework is proposed. Given an input video, R2A first retrieves a set of semantically similar texts from a generic text corpus using a pretrained multimodal model (e.g., CLIP). With both the question and the retrieved texts, an LLM (e.g., DeBERTa) can be directly used to yield the desired answer. Without the need for cross-modal fine-tuning, R2A allows all the key components (e.g., LLM, retrieval model, and text corpus) to plug-and-play. Extensive experiments on several VideoQA benchmarks show that, despite having only 1.3B parameters and requiring no fine-tuning, our R2A can outperform the 61-times-larger Flamingo-80B model, even though the latter was additionally trained on nearly 2.1B multimodal samples.
Stochastic fluctuations of diluted pedestrian dynamics along curved paths ; As we walk towards our destinations, our trajectories are constantly influenced by the presence of obstacles and infrastructural elements: even in the absence of crowding, our paths are often curved. Over the last two decades, pedestrian dynamics have been extensively studied, aiming at quantitative models with both fundamental and technological relevance. Walking kinematics along straight paths have been experimentally investigated and quantitatively modeled in the diluted limit, i.e., in the absence of pedestrian-pedestrian interactions. It is natural to expect that models for straight paths may be accurate approximations of the dynamics even for paths with curvature radii much larger than the size of a single person. Conversely, as path curvature increases, one may expect larger and larger deviations. As no clear experimental consensus has been reached yet in the literature, here we accurately and systematically investigate the effect of path curvature on diluted pedestrian dynamics. Thanks to an extensive and highly accurate set of real-life measurement campaigns, we derive and validate a Langevin-like social-force model capable of quantitatively describing both averages and fluctuations. Leveraging the differential-geometric notion of covariant derivative, we generalize previous work by some of the authors, effectively casting a Langevin social-force model for straight walking dynamics in a curved geometric setting. We deem this the necessary first step to understand and model the more general and ubiquitous case of pedestrians following curved paths in the presence of crowd traffic.
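The Langevin social-force structure referenced above can be illustrated in one dimension: the walking speed relaxes towards a preferred speed while being kicked by Gaussian noise (an Ornstein-Uhlenbeck process). The desired speed, relaxation time, and noise amplitude below are hypothetical, not fitted values from the measurement campaigns.

```python
import numpy as np

def walk_speed(v_des=1.3, tau=0.5, sigma=0.15, dt=0.01, steps=100_000, seed=2):
    """Euler-Maruyama integration of dv = -(v - v_des)/tau dt + sigma dW:
    the speed relaxes to the desired walking speed with stochastic kicks."""
    rng = np.random.default_rng(seed)
    v = np.empty(steps)
    v[0] = v_des
    for i in range(1, steps):
        v[i] = v[i - 1] - (v[i - 1] - v_des) / tau * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return v

v = walk_speed()
# Stationary statistics of this process: mean v_des, variance sigma^2 * tau / 2,
# mirroring the "averages and fluctuations" the full model must reproduce.
```

The curved-path generalization in the paper replaces the scalar restoring term by a covariant derivative along the path, but the average-plus-fluctuation decomposition is the same.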
OphGLM: Training an Ophthalmology Large Language-and-Vision Assistant based on Instructions and Dialogue ; Large multimodal language models (LMMs) have achieved significant success in general domains. However, due to the significant differences between medical images and text and general web content, the performance of LMMs in medical scenarios is limited. In ophthalmology, clinical diagnosis relies on multiple modalities of medical images; unfortunately, multimodal ophthalmic large language models have not been explored to date. In this paper, we study and construct an ophthalmic large multimodal model. First, we use fundus images as an entry point to build a disease assessment and diagnosis pipeline that achieves common ophthalmic disease diagnosis and lesion segmentation. Then, we establish a new ophthalmic multimodal instruction-following and dialogue fine-tuning dataset based on disease-related knowledge data and publicly available real-world medical dialogue. We introduce visual ability into the large language model to complete the ophthalmic large language and vision assistant (OphGLM). Our experimental results demonstrate that the OphGLM model performs exceptionally well, and it has the potential to revolutionize clinical applications in ophthalmology. The dataset, code, and models will be made publicly available at https://github.com/MLAILab/OphGLM.
GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning ; Molecule property prediction has gained significant attention in recent years. The main bottleneck is label insufficiency caused by expensive lab experiments. To alleviate this issue and better leverage textual knowledge for tasks, this study investigates the feasibility of employing natural language instructions to accomplish molecule-related tasks in a zero-shot setting. We discover that existing molecule-text models perform poorly in this setting due to inadequate treatment of instructions and limited capacity for graphs. To overcome these issues, we propose GIMLET, which unifies language models for both graph and text data. By adopting generalized position embedding, our model is extended to encode both graph structures and instruction text without additional graph encoding modules. GIMLET also decouples the encoding of the graph from task instructions in the attention mechanism, enhancing the generalization of graph features across novel tasks. We construct a dataset consisting of more than two thousand molecule tasks with corresponding instructions derived from task descriptions. We pretrain GIMLET on the molecule tasks along with instructions, enabling the model to transfer effectively to a broad range of tasks. Experimental results demonstrate that GIMLET significantly outperforms molecule-text baselines in instruction-based zero-shot learning, even achieving results close to supervised GNN models on tasks such as ToxCast and MUV.
On the Effective Mass of Mechanical Lattices with Microstructure ; We present a general formalism for the analysis of mechanical lattices with microstructure using the concept of effective mass. We first revisit a classical case of microstructure modeled by a spring-interconnected mass-in-mass cell. The frequency-dependent effective mass of the cell is the sum of a static mass and an added mass, in analogy to that of a swimmer in a fluid. The effective mass is derived using three different methods: momentum equivalence, action equivalence, and dynamic condensation. These methods are generalized to mechanical systems with arbitrary microstructure. As an application, we calculate the effective mass of a 1D composite lattice with microstructure modeled by a chiral spring-interconnected mass-in-mass cell. A reduced, condensed model of the full lattice is then obtained by lumping the microstructure into a single effective mass. A dynamic Bloch analysis is then performed using both the full and reduced lattice models, which give the same spectral results. In particular, the frequency bands follow from the full lattice model by solving a linear eigenvalue problem, or from the reduced lattice model by solving a smaller nonlinear eigenvalue problem. The range of frequencies of negative effective mass falls within the bandgaps of the lattice. Localized modes due to defects in the microstructure have frequencies within the bandgaps, inside the negative-mass range. Defects of the outer, or macro, stiffness yield localized modes within each bandgap, but outside the negative-mass range. The proposed formalism can be applied to study the odd properties of coupled micro-macro systems, e.g., active matter.
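The static-plus-added-mass split mentioned above has a well-known closed form for the basic mass-in-mass cell (outer mass m_1, inner mass m_2 attached through stiffness k_2); this is the standard textbook result, with symbols chosen here for illustration rather than taken from the paper:

```latex
m_{\mathrm{eff}}(\omega) \;=\; m_1 + \frac{m_2\,\omega_2^2}{\omega_2^2 - \omega^2},
\qquad \omega_2 = \sqrt{k_2/m_2}.
```

In the static limit omega -> 0 the added term reduces to m_2, recovering the total mass m_1 + m_2; just above the internal resonance omega_2 the added mass is large and negative, which produces the negative-effective-mass frequency range that the abstract places inside the bandgaps.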
Approximated Prompt Tuning for Vision-Language Pre-trained Models ; Prompt tuning is a parameter-efficient way to deploy large-scale pre-trained models to downstream tasks by adding task-specific tokens. In terms of vision-language pre-trained (VLP) models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks, which greatly exacerbates the already high computational overhead. In this paper, we revisit the principle of prompt tuning for Transformer-based VLP models, and reveal that the impact of soft prompt tokens can actually be approximated via independent information diffusion steps, thereby avoiding the expensive global attention modeling and reducing the computational complexity to a large extent. Based on this finding, we propose a novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer learning. To validate APT, we apply it to two representative VLP models, namely ViLT and METER, and conduct extensive experiments on a bunch of downstream tasks. Meanwhile, the generalization of APT is also validated on CLIP for image classification and Stable Diffusion for text-to-image generation. The experimental results not only show the superior performance gains and computation efficiency of APT against the conventional prompt tuning methods, e.g., +7.01% accuracy and -82.30% additional computation overhead on METER, but also confirm its merits over other parameter-efficient transfer learning approaches.
SkillNet-X: A Multilingual Multitask Model with Sparsely Activated Skills ; Traditional multitask learning methods can basically only exploit common knowledge task-wise or language-wise, losing either cross-language or cross-task knowledge. This paper proposes a general multilingual multitask model, named SkillNet-X, which enables a single model to tackle many different tasks from different languages. To this end, we define several language-specific skills and task-specific skills, each of which corresponds to a skill module. SkillNet-X sparsely activates the parts of the skill modules that are relevant either to the target task or the target language. Acting as knowledge transit hubs, skill modules are capable of absorbing task-related knowledge and language-related knowledge consecutively. Based on the Transformer, we modify the multi-head attention layer and the feed-forward network layer to accommodate skill modules. We evaluate SkillNet-X on eleven natural language understanding datasets in four languages. Results show that SkillNet-X performs better than task-specific baselines and two multitask learning baselines (i.e., the dense joint model and the Mixture-of-Experts model). Furthermore, skill pre-training further improves the performance of SkillNet-X on almost all datasets. To investigate the generalization of our model, we conduct experiments on two new tasks and find that SkillNet-X significantly outperforms the baselines.
MLA-BIN: Model-level Attention and Batch-instance Style Normalization for Domain Generalization of Federated Learning on Medical Image Segmentation ; The privacy protection mechanism of federated learning (FL) offers an effective solution for cross-center medical collaboration and data sharing. In multi-site medical image segmentation, each medical site serves as a client of FL, and its data naturally forms a domain. FL supplies the possibility of improving model performance on seen domains. However, there is a problem of domain generalization (DG) in actual deployment: the performance of a model trained by FL decreases on unseen domains. Hence, MLA-BIN is proposed in this study to solve the DG problem of FL. Specifically, a model-level attention module (MLA) and a batch-instance style normalization (BIN) block are designed. The MLA represents the unseen domain as a linear combination of seen-domain models. An attention mechanism is introduced for the weighting coefficients to obtain the optimal coefficients according to the similarity of inter-domain data features. MLA enables the global model to generalize to unseen domains. In the BIN block, batch normalization (BN) and instance normalization (IN) are combined in the shallow layers of the segmentation network to perform style normalization, addressing the influence of inter-domain image style differences on DG. Extensive experimental results on two medical image segmentation tasks demonstrate that the proposed MLA-BIN outperforms state-of-the-art methods.
Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows ; Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained overparameterized models retain some properties of the optimization initialization. This implicit bias is believed to be responsible for some favorable properties of the trained models and could explain their good generalization properties. The purpose of this article is threefold. First, we rigorously expose the definition and basic properties of conservation laws, which are maximal sets of independent quantities conserved during gradient flows of a given model (e.g., of a ReLU network with a given architecture) with any training data and any loss. Then we explain how to find the exact number of these quantities by performing finite-dimensional algebraic manipulations on the Lie algebra generated by the Jacobian of the model. Finally, we provide algorithms (implemented in SageMath) to: (a) compute a family of polynomial laws; (b) compute the number of (not necessarily polynomial) conservation laws. We provide showcase examples that we fully work out theoretically. Besides, applying the two algorithms confirms for a number of ReLU network architectures that all known laws are recovered by the algorithm, and that there are no other laws. Such computational tools pave the way to understanding desirable properties of optimization initialization in large machine learning models.
Co-limitation of light and nitrogen on algal growth revealed by an array microhabitat platform ; Microalgae are key players in the global carbon cycle and emerging producers of biofuels. Algal growth is critically regulated by its complex microenvironment, including nitrogen and phosphorous levels, light intensity, and temperature. Mechanistic understanding of algal growth is important for maintaining a balanced ecosystem at a time of climate change and population expansion, as well as providing essential formulations for optimizing biofuel production. Current mathematical models for algal growth in complex environmental conditions are still in their infancy, due in part to the lack of experimental tools necessary to generate data amenable to theoretical modeling. Here, we present a high-throughput microfluidic platform that allows for algal growth with precise control over light intensity and nutrient gradients, while also performing real-time microscopic imaging. We propose a general mathematical model that describes algal growth under multiple physical and chemical environments, which we have validated experimentally. We showed that light and nitrogen co-limited the growth of the model alga Chlamydomonas reinhardtii following a multiplicative Monod kinetic model. The microfluidic platform presented here can be easily adapted to studies of other photosynthetic microorganisms, and the algal growth model will be essential for future bioreactor designs and ecological predictions.
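The multiplicative Monod kinetics named in the abstract can be sketched in a few lines: growth rate mu = mu_max * (I / (K_I + I)) * (N / (K_N + N)), where I is light intensity and N is nitrogen concentration. The parameter values below are illustrative stand-ins, not fitted values from the study.

```python
def monod(s, k):
    """Single-resource Monod saturation term s / (k + s)."""
    return s / (k + s)

def growth_rate(mu_max, light, nitrogen, k_light, k_n):
    """Multiplicative Monod model for light-nitrogen co-limited growth:
    mu = mu_max * (I / (K_I + I)) * (N / (K_N + N)).
    All parameter values used here are hypothetical illustrations.
    """
    return mu_max * monod(light, k_light) * monod(nitrogen, k_n)

# With both resources abundant, growth approaches mu_max ...
saturated = growth_rate(1.2, 1e4, 1e3, 50.0, 10.0)
# ... while scarcity of either resource co-limits growth.
limited = growth_rate(1.2, 10.0, 1.0, 50.0, 10.0)
```

The multiplicative form means the two saturation terms compound, so growth is suppressed whenever either resource is scarce, which is the co-limitation behavior the experiments test.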
Uncertainty quantification for the squeeze flow of generalized Newtonian fluids ; The calibration of rheological parameters in the modeling of complex flows of non-Newtonian fluids can be a daunting task. In this paper we demonstrate how the framework of Uncertainty Quantification (UQ) can be used to improve the predictive capabilities of rheological models in such flow scenarios. For this demonstration, we consider the squeeze flow of generalized Newtonian fluids. To systematically study uncertainties, we have developed a tailored squeeze flow setup, which we have used to perform experiments with glycerol and PVP solution. To mimic these experiments, we have developed a three-region truncated power law model, which can be evaluated semi-analytically. This fast-to-evaluate model enables us to consider uncertainty propagation and Bayesian inference using Markov chain Monte Carlo techniques. We demonstrate that with prior information obtained from dedicated experiments (most importantly rheological measurements), the truncated power law model can adequately predict the experimental results. We observe that when the squeeze flow experiments are incorporated in the analysis in the case of Bayesian inference, this leads to an update of the prior information on the rheological parameters, giving evidence of the need for recalibration in the considered complex flow scenario. In the process of Bayesian inference we also obtain information on quantities of interest that are not directly observable in the experimental data, such as the spatial distribution of the three flow regimes. In this way, besides improving the predictive capabilities of the model, the uncertainty quantification framework enhances the insight into complex flow scenarios.
NN-EVP: A physics-informed neural network-based elasto-viscoplastic framework for predictions of grain size-aware flow response under large deformations ; We propose a physics-informed, neural network-based elasto-viscoplasticity (NN-EVP) constitutive modeling framework for predicting the flow response in metals as a function of underlying grain size. The developed NN-EVP algorithm is based on input convex neural networks as a means to strictly enforce thermodynamic consistency, while allowing high expressivity towards model discovery from limited data. It utilizes state-of-the-art machine learning tools within PyTorch's high-performance library, providing a flexible tool for data-driven, automated constitutive modeling. To test the performance of the framework, we generate synthetic stress-strain curves using a power law-based model with phenomenological hardening at small strains and test the trained model for strain amplitudes beyond the training data. Next, experimentally measured flow responses obtained from uniaxial deformations are used to train the framework under large plastic deformations. Ultimately, the Hall-Petch relationship corresponding to grain size strengthening is discovered by training the flow response as a function of grain size, also leading to efficient extrapolation. The present work demonstrates a successful integration of neural networks into elasto-viscoplastic constitutive laws, providing a robust automated framework for constitutive model discovery that can efficiently generalize, while also providing insights into predictions of flow response and grain size-property relationships in metals and metallic alloys under large plastic deformations.
Modeling evidential cooperation in large worlds ; Evidential cooperation in large worlds (ECL) refers to the idea that humans and other agents can benefit by cooperating with similar agents with differing values in causally disconnected parts of a large universe. Cooperating provides agents with evidence that other similar agents are likely to cooperate too, resulting in gains from trade for all. This could be a crucial consideration for altruists. I develop a game-theoretic model of ECL as an incomplete information bargaining problem. The model incorporates uncertainty about others' value systems and empirical situations, and addresses the problem of selecting a compromise outcome. Using the model, I investigate issues with ECL and outline open technical and philosophical questions. I show that all cooperators must maximize the same weighted sum of utility functions to reach a Pareto optimal outcome. However, I argue against selecting a compromise outcome implicitly by normalizing utility functions. I review bargaining theory and argue that the Nash bargaining solution could be a relevant Schelling point. I introduce dependency equilibria (Spohn 2007), an equilibrium concept suitable for ECL, and generalize a folk theorem showing that the Nash bargaining solution is a dependency equilibrium. I discuss gains from trade given uncertain beliefs about other agents and analyze how these gains decrease in several toy examples as the belief in another agent decreases. Finally, I discuss open issues in my model. First, the Nash bargaining solution is sometimes not coalitionally stable, meaning that a subset of cooperators can unilaterally improve payoffs by deviating from the compromise. I investigate conditions under which stable payoff vectors exist. Second, I discuss how to model agents' default actions without ECL.
Augmenting CLIP with Improved Visio-Linguistic Reasoning ; Image-text contrastive models such as CLIP are useful for a variety of downstream applications including zero-shot classification, image-text retrieval and transfer learning. However, these contrastively trained vision-language models often fail on compositional visio-linguistic tasks such as Winoground with performance equivalent to random chance. In our paper, we address this issue and propose a sample-efficient lightweight method called SDS-CLIP to improve the compositional visio-linguistic reasoning capabilities of CLIP. The core idea of our method is to use differentiable image parameterizations to fine-tune CLIP with a distillation objective from large text-to-image generative models such as Stable Diffusion, which are relatively good at visio-linguistic reasoning tasks. On the challenging Winoground compositional reasoning benchmark, our method improves the absolute visio-linguistic performance of different CLIP models by up to 7%, while on the ARO dataset, our method improves the visio-linguistic performance by up to 3%. As a byproduct of inducing visio-linguistic reasoning into CLIP, we also find that the zero-shot performance improves marginally on a variety of downstream datasets. Our method reinforces that carefully designed distillation objectives from generative models can be leveraged to extend existing contrastive image-text models with improved visio-linguistic reasoning capabilities.
Higgs Criticality beyond the Standard Model ; Both parameters in the Higgs field's potential, its mass and quartic coupling, appear fine-tuned to near-critical values, which gives rise to the hierarchy problem and the metastability of the electroweak vacuum. Whereas such behavior appears puzzling in the context of particle physics, it is a common feature of dynamical systems, which has led to the suggestion that the parameters of the Higgs potential could be set through some dynamical process. In this article, we discuss how this notion could be extended to physics beyond the Standard Model (SM). We first review in which sense the SM Higgs parameters can be understood as near-critical and show that this notion can be extrapolated in a unique way for a generic class of SM extensions. Our main result is a prediction for the parameters of such models in terms of their corresponding Standard Model effective field theory Wilson coefficients and the corresponding matching scale. For generic models, our result suggests that the scale of new bosonic physics lies close to the instability scale. We explore the potentially observable consequences of this connection, and illustrate aspects of our analysis with a concrete example. Lastly, we discuss implications of our results for several mechanisms of dynamical vacuum selection associated with various Beyond-Standard-Model (BSM) constructions.
Origin of Hilbert space quantum scars in unconstrained models ; The quantum many-body scar is a recently discovered phenomenon weakly violating the eigenstate thermalization hypothesis, and it has been extensively studied across various models. However, experimental realizations are mainly based on constrained models such as the PXP model. Inspired by recent experimental observations on superconducting platforms in Refs. [Nat. Phys. 19, 120 (2022)] and [arXiv:2211.05803], we study a distinct class of quantum many-body scars based on a half-filling hard-core Bose-Hubbard model, which generically describes many experimental platforms. It is the so-called Hilbert space quantum scar, as it originates from a subspace with a hypercube geometry weakly connecting to other thermalization regions in Hilbert space. Within the hypercube, a pair of collective Fock states do not directly connect to the thermalization region, resulting in slow thermalization dynamics with remarkable fidelity revivals, with distinct differences from the dynamics of other initial states. This mechanism is generic in various real-space lattice configurations, including the one-dimensional Su-Schrieffer-Heeger chain, the comb lattice, and even random clusters consisting of dimers. In addition, we develop a toy model based on a Hilbert hypercube decay approximation, to explain the spectrum overlap between the collective states and all eigenstates. Furthermore, we explore the Hilbert space quantum scar in two- and three-dimensional Su-Schrieffer-Heeger many-body systems, consisting of tetramers or octamers, respectively. This study makes quantum many-body scar states more realistic in applications such as quantum sensing and quantum metrology.
Sim-to-Real Model-Based and Model-Free Deep Reinforcement Learning for Tactile Pushing ; Object pushing presents a key non-prehensile manipulation problem that is illustrative of more complex robotic manipulation tasks. While deep reinforcement learning (RL) methods have demonstrated impressive learning capabilities using visual input, a lack of tactile sensing limits their capability for fine and reliable control during manipulation. Here we propose a deep RL approach to object pushing using tactile sensing without visual input, namely tactile pushing. We present a goal-conditioned formulation that allows both model-free and model-based RL to obtain accurate policies for pushing an object to a goal. To achieve real-world performance, we adopt a sim-to-real approach. Our results demonstrate that it is possible to train on a single object and a limited sample of goals to produce precise and reliable policies that can generalize to a variety of unseen objects and pushing scenarios without domain randomization. We experiment with the trained agents in harsh pushing conditions, and show that with significantly more training samples, a model-free policy can outperform a model-based planner, generating shorter and more reliable pushing trajectories despite large disturbances. The simplicity of our training environment and effective real-world performance highlight the value of rich tactile information for fine manipulation. Code and videos are available at https://sites.google.com/view/tactilerlpushing.
Metric-Based In-context Learning: A Case Study in Text Simplification ; In-context learning (ICL) for large language models has proven to be a powerful approach for many natural language processing tasks. However, determining the best method to select examples for ICL is non-trivial as the results can vary greatly depending on the quality, quantity, and order of examples used. In this paper, we conduct a case study on text simplification (TS) to investigate how to select the best and most robust examples for ICL. We propose a Metric-Based in-context Learning (MBL) method that utilizes commonly used TS metrics such as SARI, compression ratio, and BERT-Precision for selection. Through an extensive set of experiments with various-sized GPT models on standard TS benchmarks such as TurkCorpus and ASSET, we show that examples selected by the top SARI scores perform the best on larger models such as GPT-175B, while the compression ratio generally performs better on smaller models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is generally robust to example orderings and out-of-domain test sets, and outperforms strong baselines and state-of-the-art fine-tuned language models. Finally, we show that the behaviour of large GPT models can be implicitly controlled by the chosen metric. Our research provides a new framework for selecting examples in ICL, and demonstrates its effectiveness in text simplification tasks, breaking new ground for more accurate and efficient NLG systems.
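The core of metric-based example selection can be sketched as ranking a candidate pool of (source, simplification) pairs by a chosen metric and keeping the top-k as in-context examples. The toy `compression_ratio` scorer and the sentence pairs below are hypothetical stand-ins (a real system would score with SARI or BERT-Precision against references):

```python
def select_icl_examples(candidates, metric, k=3):
    """Return the k candidate (source, simplification) pairs with the
    highest score under `metric`, for use as in-context examples.
    """
    return sorted(candidates, key=metric, reverse=True)[:k]

def compression_ratio(pair):
    """Toy scorer: fraction of words removed by the simplification."""
    source, simplified = pair
    return 1.0 - len(simplified.split()) / len(source.split())

# Hypothetical candidate pool for illustration.
pool = [
    ("the feline sat upon the mat", "the cat sat on the mat"),
    ("he made an attempt to flee the scene", "he tried to flee"),
    ("it was raining", "it rained"),
]
best = select_icl_examples(pool, compression_ratio, k=2)
```

The selected pairs would then be concatenated, in order, into the prompt ahead of the test sentence; swapping `metric` changes which behaviour the prompt implicitly encourages, matching the abstract's observation that the metric controls the model.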
RSGPT: A Remote Sensing Vision Language Model and Benchmark ; The emergence of large-scale large language models, with GPT-4 as a prominent example, has significantly propelled the rapid advancement of artificial general intelligence and sparked the revolution of Artificial Intelligence 2.0. In the realm of remote sensing (RS), there is a growing interest in developing large vision language models (VLMs) specifically tailored for data analysis in this domain. However, current research predominantly revolves around visual recognition tasks, lacking comprehensive, large-scale image-text datasets that are aligned and suitable for training large VLMs, which poses significant challenges to effectively training such models for RS applications. In computer vision, recent research has demonstrated that fine-tuning large vision language models on small-scale, high-quality datasets can yield impressive performance in visual and language understanding. These results are comparable to state-of-the-art VLMs trained from scratch on massive amounts of data, such as GPT-4. Inspired by this captivating idea, in this work, we build a high-quality Remote Sensing Image Captioning dataset (RSICap) that facilitates the development of large VLMs in the RS field. Unlike previous RS datasets that either employ model-generated captions or short descriptions, RSICap comprises 2,585 human-annotated captions with rich and high-quality information. This dataset offers detailed descriptions for each image, encompassing scene descriptions (e.g., residential area, airport, or farmland) as well as object information (e.g., color, shape, quantity, absolute position, etc.). To facilitate the evaluation of VLMs in the field of RS, we also provide a benchmark evaluation dataset called RSIEval. This dataset consists of human-annotated captions and visual question-answer pairs, allowing for a comprehensive assessment of VLMs in the context of RS.
Robust Distortion-free Watermarks for Language Models ; We propose a methodology for planting watermarks in text from an autoregressive language model that are robust to perturbations without changing the distribution over text up to a certain maximum generation budget. We generate watermarked text by mapping a sequence of random numbers, which we compute using a randomized watermark key, to a sample from the language model. To detect watermarked text, any party who knows the key can align the text to the random number sequence. We instantiate our watermark methodology with two sampling schemes: inverse transform sampling and exponential minimum sampling. We apply these watermarks to three language models (OPT-1.3B, LLaMA-7B and Alpaca-7B) to experimentally validate their statistical power and robustness to various paraphrasing attacks. Notably, for both the OPT-1.3B and LLaMA-7B models, we find we can reliably detect watermarked text (p ≤ 0.01) from 35 tokens even after corrupting between 40-50% of the tokens via random edits (i.e., substitutions, insertions or deletions). For the Alpaca-7B model, we conduct a case study on the feasibility of watermarking responses to typical user instructions. Due to the lower entropy of the responses, detection is more difficult: around 25% of the responses, whose median length is around 100 tokens, are detectable with p ≤ 0.01, and the watermark is also less robust to certain automated paraphrasing attacks we implement.
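One decoding step of the exponential-minimum style of watermark sampling can be sketched as follows. This is a generic illustration of the scheme named in the abstract, not the authors' implementation; the keyed seeding recipe and vocabulary are hypothetical.

```python
import random

def watermark_sample(probs, key, step):
    """Sample one token using keyed per-token uniforms.

    A keyed RNG yields one uniform r_i per vocabulary token; the emitted
    token is argmax_i r_i ** (1 / p_i). Averaged over keys this
    reproduces sampling from `probs`, while a party holding `key` can
    re-derive the r_i and test their alignment with observed text.
    The integer seeding scheme below is an illustrative assumption.
    """
    rng = random.Random(key * 1_000_003 + step)  # per-step keyed randomness
    best, best_score = 0, float("-inf")
    for i, p in enumerate(probs):
        r = rng.random()
        if p > 0:
            score = r ** (1.0 / p)  # equivalent to argmin -log(r_i) / p_i
            if score > best_score:
                best, best_score = i, score
    return best

# Given the same key and step, the sampled token is reproducible,
# which is what makes keyed detection possible.
token = watermark_sample([0.7, 0.2, 0.1], key=42, step=0)
```

A detector with the key would regenerate the same uniforms for each position and test whether the observed tokens are improbably well aligned with them, which is robust to a fraction of token edits because alignment is scored over the whole sequence.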
Rising Tides: Analytic Modeling of Tidal Effects in Binary Neutron Star Mergers ; The gravitational waves produced by binary neutron star mergers offer a unique window into matter behavior under extreme conditions. In this context, we model analytically the effect of matter on the gravitational waves from binary neutron star mergers. We start with a binary black hole system, leveraging the post-Newtonian formalism for the inspiral and the Backwards-one-Body model for the merger. We combine the two methods to generate a baseline waveform and we validate our results against numerical relativity simulations. Next, we integrate tidal effects in phase and amplitude to account for matter and spacetime interaction, by using the NRTidal model, and test its accuracy against numerical relativity predictions, for two equations of state, finding a mismatch around the merger. Subsequently, we lift the restriction on the coefficients to be independent of the tidal deformability, and recalibrate them using the numerical relativity predictions. We obtain better fits for phase and amplitude around the merger, and are able to extend the phase modeling beyond the merger. We implement our method in a new open-source Python code, steered by a Jupyter Notebook. Our research offers new perspectives on analytically modeling the effect of tides on the gravitational waves from binary neutron star mergers.
Interpreting Observed Interactions between Near-Inertial Waves and Mesoscale Eddies ; The evolution of wind-generated near-inertial waves (NIWs) is known to be influenced by the mesoscale eddy field, yet it remains a challenge to disentangle the effects of this interaction in observations. NIWs are often modeled using a slab mixed-layer model with no horizontal structure. Here, the theoretical model of Young and Ben Jelloul, which describes the evolution of NIWs in the presence of a slowly-evolving mesoscale eddy field, is compared to observations from a mooring array in the Northeast Atlantic Ocean. The model captures the evolution of both the observed NIW amplitude and phase much more accurately than the slab mixed-layer model, and it allows attributing the evolution to specific physical processes. The model reveals that differences in NIW amplitude across the mooring array are caused by refraction of NIWs into anticyclones. Advection and wave dispersion also make non-negligible contributions to the observed wave evolution. Stimulated generation, a process by which mesoscale kinetic energy acts as a source of NIW potential energy, is estimated to be 20 μW in the region of the mooring array. This is two orders of magnitude smaller than the global average input to mesoscale kinetic energy and likely not an important contribution to the mesoscale kinetic energy budget in this region.
Experimentally realized physical-model-based wave control in metasurface-programmable complex media ; The reconfigurability of radio environments with programmable metasurfaces is considered a key feature of next-generation wireless networks. Identifying suitable metasurface configurations for desired wireless functionalities requires a precise setting-specific understanding of the intricate impact of the metasurface configuration on the wireless channels. Yet, to date, the relevant short- and long-range correlations between the meta-atoms due to proximity and reverberation are largely ignored rather than precisely captured. Here, we experimentally demonstrate that a compact model derived from first physical principles can precisely predict how wireless channels in complex scattering environments depend on the programmable-metasurface configuration. The model is calibrated using a very small random subset of all possible metasurface configurations and without knowing the setup's geometry. Our approach achieves two orders of magnitude higher precision than a deep-learning-based digital-twin benchmark while involving a hundred times fewer parameters. Strikingly, when only phaseless calibration data is available, our model can nonetheless retrieve the precise phase relations of the scattering matrix as well as their dependencies on the metasurface configuration. Thereby, we achieve coherent wave control (focusing or enhancing absorption) and phase-shift-keying backscatter communications without ever having measured phase information. Finally, our model is also capable of retrieving the essential properties of scattering coefficients for which no calibration data was ever provided. These unique generalization capabilities of our pure-physics model significantly alleviate the measurement complexity. Our approach is also directly relevant to dynamic metasurface antennas, microwave-based signal processors as well as emerging in situ reconfigurable nanophotonic, optical and room-acoustical systems.
StableVQA: A Deep No-Reference Quality Assessment Model for Video Stability ; Video shakiness is an unpleasant distortion of User Generated Content (UGC) videos, which is usually caused by the unstable hold of cameras. In recent years, many video stabilization algorithms have been proposed, yet no specific and accurate metric enables comprehensively evaluating the stability of videos. Indeed, most existing quality assessment models evaluate video quality as a whole without specifically taking the subjective experience of video stability into consideration. Therefore, these models cannot measure the video stability explicitly and precisely when severe shakes are present. In addition, there is no large-scale video database in public that includes various degrees of shaky videos with the corresponding subjective scores available, which hinders the development of Video Quality Assessment for Stability (VQA-S). To this end, we build a new database named StableDB that contains 1,952 diversely-shaky UGC videos, where each video has a Mean Opinion Score (MOS) on the degree of video stability rated by 34 subjects. Moreover, we elaborately design a novel VQA-S model named StableVQA, which consists of three feature extractors to acquire the optical flow, semantic, and blur features respectively, and a regression layer to predict the final stability score. Extensive experiments demonstrate that StableVQA achieves a higher correlation with subjective opinions than the existing VQA-S models and generic VQA models. The database and codes are available at https://github.com/QMME/StableVQA.
A Neural Network Based Choice Model for Assortment Optimization ; Discrete-choice models are used in economics, marketing and revenue management to predict customer purchase probabilities, say as a function of prices and other features of the offered assortment. While they have been shown to be expressive, capturing customer heterogeneity and behaviour, they are also hard to estimate, often based on many unobservables like utilities; and moreover, they still fail to capture many salient features of customer behaviour. A natural question then, given their success in other contexts, is whether neural networks can eliminate the necessity of carefully building a context-dependent customer behaviour model and hand-coding and tuning the estimation. It is unclear, however, how one would incorporate assortment effects into such a neural network, and also how one would optimize the assortment with such a black-box generative model of choice probabilities. In this paper we investigate first whether a single neural network architecture can predict purchase probabilities for datasets from various contexts and generated under various models and assumptions. Next, we develop an assortment optimization formulation that is solvable by off-the-shelf integer programming solvers. We compare against a variety of benchmark discrete-choice models on simulated as well as real-world datasets, developing training tricks along the way to make the neural network prediction and subsequent optimization robust and comparable in performance to the alternates.
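One common way to incorporate assortment effects into a neural choice model is a masked softmax: a network maps product features to scalar utilities, and choice probabilities are normalized only over the products actually offered. The sketch below uses hard-coded utilities in place of a trained network; both the numbers and the function name are illustrative assumptions, not the paper's architecture.

```python
import math

def choice_probabilities(utilities, offered):
    """Masked-softmax purchase probabilities over an assortment.

    `utilities` would come from a neural network mapping product
    features to scalars (hard-coded here for illustration); only
    products with offered[i] == True receive probability mass.
    """
    exps = [math.exp(u) if o else 0.0 for u, o in zip(utilities, offered)]
    z = sum(exps)
    return [e / z for e in exps]

# Product 1 is withheld from the assortment, so its probability is zero
# and the remaining mass renormalizes over products 0 and 2.
probs = choice_probabilities([1.0, 0.5, 2.0], [True, False, True])
```

Assortment optimization then searches over the boolean `offered` vector, e.g. maximizing expected revenue subject to a cardinality constraint, which is what the paper formulates for integer programming solvers.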
Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods ; Stochastic gradient descent (SGD) and adaptive gradient methods, such as Adam and RMSProp, have been widely used in training deep neural networks. We empirically show that while the difference between the standard generalization performance of models trained using these methods is small, those trained using SGD exhibit far greater robustness under input perturbations. Notably, our investigation demonstrates the presence of irrelevant frequencies in natural datasets, where alterations do not affect models' generalization performance. However, models trained with adaptive methods show sensitivity to these changes, suggesting that their use of irrelevant frequencies can lead to solutions sensitive to perturbations. To better understand this difference, we study the learning dynamics of gradient descent (GD) and sign gradient descent (signGD) on a synthetic dataset that mirrors natural signals. With a three-dimensional input space, the models optimized with GD and signGD have standard risks close to zero but vary in their adversarial risks. Our result shows that linear models' robustness to ℓ2-norm bounded changes is inversely proportional to the model parameters' weight norm: a smaller weight norm implies better robustness. In the context of deep learning, our experiments show that SGD-trained neural networks have smaller Lipschitz constants, explaining the better robustness to input perturbations than those trained with adaptive gradient methods.
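The inverse relationship between weight norm and robustness stated for linear models can be checked directly: for f(x) = <w, x> + b, the smallest ℓ2 perturbation that can flip sign(f(x)) has size |f(x)| / ||w||_2. The classifiers and inputs below are illustrative, not the paper's synthetic dataset.

```python
import math

def robust_margin(w, b, x):
    """Smallest l2-norm perturbation delta with sign(f(x + delta)) flipped,
    for the linear classifier f(x) = <w, x> + b: equal to |f(x)| / ||w||_2.
    Robustness is therefore inversely proportional to the weight norm.
    """
    fx = sum(wi * xi for wi, xi in zip(w, x)) + b
    w_norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(fx) / w_norm

# Two classifiers that produce the same output f(x) = 1 on this input,
# but with different weight norms:
x = [1.0, 1.0]
m1 = robust_margin([0.5, 0.5], 0.0, x)  # ||w|| ~ 0.707
m2 = robust_margin([1.0, 0.0], 0.0, x)  # ||w|| = 1.0
# The smaller-norm classifier tolerates a larger perturbation (m1 > m2).
```

This is the linear-model version of the Lipschitz argument in the abstract: ||w||_2 is exactly the Lipschitz constant of f, so a smaller norm bounds how much any bounded input change can move the output.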
Towards Open-Set Test-Time Adaptation: Utilizing the Wisdom of Crowds in Entropy Minimization ; Test-time adaptation (TTA) methods, which generally rely on the model's predictions (e.g., entropy minimization) to adapt the source pre-trained model to the unlabeled target domain, suffer from noisy signals originating from (1) incorrect or (2) open-set predictions. Long-term stable adaptation is hampered by such noisy signals, so training models without such error accumulation is crucial for practical TTA. To address these issues, including open-set TTA, we propose a simple yet effective sample selection method inspired by the following crucial empirical finding. While entropy minimization compels the model to increase the probability of its predicted label (i.e., the confidence value), we found that noisy samples rather show decreased confidence values. To be more specific, entropy minimization attempts to raise the confidence value of each individual sample's prediction, but individual confidence values may rise or fall due to the influence of signals from numerous other predictions (i.e., the wisdom of crowds). Due to this fact, noisy signals misaligned with such 'wisdom of crowds' (generally found in the correct signals) fail to raise the individual confidence values of wrong samples, despite attempts to increase them. Based on these findings, we filter out the samples whose confidence values are lower in the adapted model than in the original model, as they are likely to be noisy. Our method is widely applicable to existing TTA methods and improves their long-term adaptation performance in both image classification (e.g., 49.4% reduced error rates with TENT) and semantic segmentation (e.g., 11.7% gain in mIoU with TENT).
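The selection rule itself is simple to state in code. The sketch below (variable names and the softmax-confidence bookkeeping are our assumptions, not the authors' implementation) keeps a sample only if the adapted model's confidence in its own predicted label has not fallen below the original model's confidence in that same label:

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def select_reliable(logits_orig, logits_adapted):
    """Keep samples whose confidence in the adapted model's predicted label
    did not drop below the original model's confidence in that same label;
    the rest are treated as noisy and filtered out of entropy minimization."""
    p_o, p_a = softmax(logits_orig), softmax(logits_adapted)
    pred = p_a.argmax(axis=1)                  # adapted model's predictions
    idx = np.arange(len(pred))
    return p_a[idx, pred] >= p_o[idx, pred]    # False => likely noisy
```

The returned boolean mask would then gate the per-sample entropy loss in an existing TTA method such as TENT.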
Charged strange star coupled to anisotropic dark energy in Tolman–Kuchowicz spacetime ; The concept of dark energy can be used as a possible option to prevent the gravitational collapse of compact objects into singularities. It affects the universe on the largest scale, as it is responsible for our universe's accelerated expansion. As a consequence, it seems possible that dark energy will interact with any compact astrophysical stellar object (Phys. Rev. D 103, 084042, 2021). In this work, our prime focus is to develop a simplified model of a charged strange star coupled to anisotropic dark energy in Tolman–Kuchowicz spacetime (Tolman, Phys. Rev. 55, 364, 1939; Kuchowicz, Acta Phys. Pol. 33, 541, 1968) within the context of general relativity. To develop our model, we consider a particular strange star object, Her X-1, with observed mass 0.85 ± 0.15 M⊙ and radius 8.1 ± 0.41 km. In this context, we start from an equation of state (EoS) to model the dark energy, in which the dark energy density is proportional to the isotropic perfect-fluid matter-energy density. The unknown constants present in the metric have been calculated by using the Darmois–Israel condition. We perform an in-depth analysis of the stability and force equilibrium of our proposed stellar configuration, as well as multiple physical attributes of the model such as the metric functions, pressure, density, mass-radius relation, and dark energy parameters, by varying the dark energy coupling parameter alpha. After a thorough theoretical analysis, we find that our proposed model is free from any singularity and satisfies all stability criteria for a stable and physically realistic stellar model.
PUMGPT: A Large Vision-Language Model for Product Understanding ; Recent developments in multimodal large language models have demonstrated their strong ability to solve vision-language tasks. In this paper, we focus on the product understanding task, which plays an essential role in enhancing the online shopping experience. This task includes a variety of sub-tasks, which require models to respond to diverse queries based on multimodal product information. Traditional methods design distinct model architectures for each sub-task. In contrast, we present PUMGPT, a large vision-language model that aims to unify all product understanding tasks under a single model structure. To bridge the gap between vision and text representations, we propose Layer-wise Adapters (LA), an approach that provides enhanced alignment with fewer visual tokens and enables parameter-efficient fine-tuning. Moreover, the inherent parameter-efficient fine-tuning ability allows PUMGPT to be readily adapted to new product understanding tasks and emerging products. We design instruction templates to generate diverse product instruction datasets. Simultaneously, we utilize open-domain datasets during training to improve the performance of PUMGPT and its generalization ability. Through extensive evaluations, PUMGPT demonstrates superior performance across multiple product understanding tasks, including product captioning, category question-answering, attribute extraction, attribute question-answering, and even free-form question-answering about products.
A Semi-automatic Oriental Ink Painting Framework for Robotic Drawing from 3D Models ; Creating visually pleasing stylized ink paintings from 3D models is a challenge in robotic manipulation. We propose a semi-automatic framework that can extract expressive strokes from 3D models and draw them in oriental ink painting styles by using a robotic arm. The framework consists of a simulation stage and a robotic drawing stage. In the simulation stage, geometrical contours are automatically extracted from a given viewpoint and a neural network is employed to create simplified contours. Then, expressive digital strokes are generated after interactive editing according to the user's aesthetic understanding. In the robotic drawing stage, an optimization method is presented for drawing strokes that are smooth and physically consistent with the digital strokes, and two oriental ink painting styles, termed Noutan (shade) and Kasure (scratchiness), are applied to the strokes by robotic control of a brush's translation, dipping and scraping. Unlike existing methods that concentrate on generating paintings from 2D images, our framework has the advantage of rendering stylized ink paintings from 3D models by using a consumer-grade robotic arm. We evaluate the proposed framework by taking three standard models and a user-defined model as examples. The results show that our framework is able to draw visually pleasing oriental ink paintings with expressive strokes.
Approaching human 3D shape perception with neurally mappable models ; Humans effortlessly infer the 3D shape of objects. What computations underlie this ability? Although various computational models have been proposed, none of them capture the human ability to match object shape across viewpoints. Here, we ask whether and how this gap might be closed. We begin with a relatively novel class of computational models, 3D neural fields, which encapsulate the basic principles of classic analysis-by-synthesis in a deep neural network (DNN). First, we find that a 3D Light Field Network (3D-LFN) supports 3D matching judgments well aligned to humans for within-category comparisons, adversarially defined comparisons that accentuate the 3D failure cases of standard DNN models, and adversarially defined comparisons for algorithmically generated shapes with no category structure. We then investigate the source of the 3D-LFN's ability to achieve human-aligned performance through a series of computational experiments. Exposure to multiple viewpoints of objects during training and a multi-view learning objective are the primary factors behind model-human alignment; even conventional DNN architectures come much closer to human behavior when trained with multi-view objectives. Finally, we find that while models trained with multi-view learning objectives are able to partially generalize to new object categories, they fall short of human alignment. This work provides a foundation for understanding human shape inferences within neurally mappable computational architectures.
Score diffusion models without early stopping: finite Fisher information is all you need ; Diffusion models are a new class of generative models that revolve around the estimation of the score function associated with a stochastic differential equation. Once acquired, the approximated score function is harnessed to simulate the corresponding time-reversal process, ultimately enabling the generation of approximate data samples. Despite their evident practical significance, a notable challenge persists in the form of a lack of comprehensive quantitative results, especially in scenarios involving non-regular scores and estimators. In almost all reported bounds in Kullback–Leibler (KL) divergence, it is assumed that either the score function or its approximation is Lipschitz uniformly in time. However, this condition is very restrictive in practice, or appears difficult to establish. To circumvent this issue, previous works mainly focused on establishing convergence bounds in KL for an early-stopped version of the diffusion model and a smoothed version of the data distribution, or on assuming that the data distribution is supported on a compact manifold. These explorations have led to interesting bounds in either Wasserstein or Fortet–Mourier metrics. However, the question remains whether such early-stopping procedures or compactness conditions are really necessary, and in particular whether there exists a natural and mild condition ensuring explicit and sharp convergence bounds in KL. In this article, we tackle the aforementioned limitations by focusing on score diffusion models with fixed step size stemming from the Ornstein–Uhlenbeck semigroup and its kinetic counterpart. Our study provides a rigorous analysis, yielding simple, improved and sharp convergence bounds in KL applicable to any data distribution with finite Fisher information with respect to the standard Gaussian distribution.
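For orientation, the objects involved can be written out explicitly, using a common normalization of the Ornstein–Uhlenbeck forward process; the paper's constants and conventions may differ:

```latex
% Forward Ornstein--Uhlenbeck noising of the data distribution p_0:
\mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t,
\qquad X_0 \sim p_0 .
% Time reversal, simulated with a learned score s_\theta \approx \nabla \log p_t:
\mathrm{d}Y_t = \bigl(Y_t + 2\,\nabla \log p_{T-t}(Y_t)\bigr)\,\mathrm{d}t
  + \sqrt{2}\,\mathrm{d}\bar{B}_t .
% The key condition: finite relative Fisher information of the data
% with respect to the standard Gaussian \gamma,
\mathcal{I}(p_0 \,\|\, \gamma)
  = \int \Bigl\| \nabla \log \tfrac{p_0}{\gamma}(x) \Bigr\|^2 p_0(x)\,\mathrm{d}x
  < \infty .
```

The article's point is that this last condition replaces both uniform-in-time Lipschitz score assumptions and early stopping.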
How to Evaluate the Generalization of Detection: A Benchmark for Comprehensive Open-Vocabulary Detection ; Object detection (OD) in computer vision has made significant progress in recent years, transitioning from closed-set labels to open-vocabulary detection (OVD) based on large-scale vision-language pre-training (VLP). However, current evaluation methods and datasets are limited to testing generalization over object types and referral expressions, and do not provide a systematic, fine-grained, and accurate benchmark of OVD models' abilities. In this paper, we propose a new benchmark named OVDEval, which includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric called Non-Maximum Suppression Average Precision (NMS-AP) to address this issue. Extensive experimental results show that existing top OVD models all fail on the new tasks except for simple object types, demonstrating the value of the proposed dataset in pinpointing the weaknesses of current OVD models and guiding future research. Furthermore, the proposed NMS-AP metric is verified by experiments to provide a much more truthful evaluation of OVD models, whereas traditional AP metrics yield deceptive results. Data is available at https://github.com/om-ai-lab/OVDEval
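The intuition behind NMS-AP can be sketched as follows: before computing AP, overlapping predictions are suppressed regardless of their label, so a model that hedges by attaching many fine-grained labels to one object keeps only its top-scoring guess. The box format, threshold, and class-agnostic choice below are our assumptions, not the benchmark's exact protocol.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def class_agnostic_nms(boxes, scores, thr=0.5):
    """Suppress overlapping boxes regardless of label, keeping the highest
    score per location: the pre-processing step behind the NMS-AP idea."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thr for j in keep):
            keep.append(i)
    return keep
```

AP would then be computed only over the surviving predictions, so duplicate hedged labels on the same object no longer inflate the score.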
Region-Disentangled Diffusion Model for High-Fidelity PPG-to-ECG Translation ; The high prevalence of cardiovascular diseases (CVDs) calls for accessible and cost-effective continuous cardiac monitoring tools. Despite Electrocardiography (ECG) being the gold standard, continuous monitoring remains a challenge, leading to the exploration of Photoplethysmography (PPG), a promising but more basic alternative available in consumer wearables. This notion has recently spurred interest in translating PPG to ECG signals. In this work, we introduce the Region-Disentangled Diffusion Model (RDDM), a novel diffusion model designed to capture the complex temporal dynamics of ECG. Traditional diffusion models like Denoising Diffusion Probabilistic Models (DDPM) face challenges in capturing such nuances due to the indiscriminate noise addition process across the entire signal. Our proposed RDDM overcomes such limitations by incorporating a novel forward process that selectively adds noise to specific regions of interest (ROI), such as the QRS complex in ECG signals, and a reverse process that disentangles the denoising of ROI and non-ROI regions. Quantitative experiments demonstrate that RDDM can generate high-fidelity ECG from PPG in as few as 10 diffusion steps, making it highly effective and computationally efficient. Additionally, to rigorously validate the usefulness of the generated ECG signals, we introduce CardioBench, a comprehensive evaluation benchmark for a variety of cardiac-related tasks including heart rate and blood pressure estimation, stress classification, and the detection of atrial fibrillation and diabetes. Our thorough experiments show that RDDM achieves state-of-the-art performance on CardioBench. To the best of our knowledge, RDDM is the first diffusion model for cross-modal signal-to-signal translation in the biosignal domain.
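A minimal caricature of the selective forward process, assuming a DDPM-style noise schedule and a binary ROI mask; the actual RDDM also defines a matching disentangled reverse process, which is omitted here.

```python
import numpy as np

def roi_forward_noise(x0, mask, t, betas, rng):
    """One DDPM-style forward sample where noise is injected only inside the
    region of interest (mask == 1), a sketch of RDDM's selective forward
    process. `x0` is a clean 1-D signal; `mask` marks e.g. the QRS complex."""
    alphas_bar = np.cumprod(1.0 - betas)          # cumulative signal retention
    a = alphas_bar[t]
    eps = rng.standard_normal(x0.shape)
    noisy = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps
    return np.where(mask == 1, noisy, x0)          # non-ROI samples stay clean
```

Because the non-ROI region is untouched, a reverse model can be trained to denoise only the clinically critical segment while conditioning on the clean remainder.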
The DeepZen Speech Synthesis System for Blizzard Challenge 2023 ; This paper describes the DeepZen text-to-speech (TTS) system for Blizzard Challenge 2023. The goal of this challenge is to synthesise natural and high-quality speech in French, from a large mono-speaker dataset (hub task) and from a smaller dataset by speaker adaptation (spoke task). We participated in both tasks with the same model architecture. Our approach has been to use an autoregressive model, which retains an advantage for generating natural-sounding speech, while improving prosodic control in several ways. Similarly to non-attentive Tacotron, the model uses a duration predictor and Gaussian upsampling at inference, but with a simpler unsupervised training. We also model the speaking style at both sentence and word levels by extracting global and local style tokens from the reference speech. At inference, the global and local style tokens are predicted from a BERT model run on text. This BERT model is also used to predict specific pronunciation features like schwa elision and optional liaisons. Finally, a modified version of HiFi-GAN, trained on a large public dataset and fine-tuned on the target voices, is used to generate the speech waveform. Our team is identified as O in the Blizzard evaluation, and MUSHRA test results show that our system performs second ex aequo in both the hub task (median score of 0.75) and the spoke task (median score of 0.68), over 18 and 14 participants respectively.
Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ; Large language models exhibit enhanced zero-shot performance on various tasks when fine-tuned with instruction-following data. Multimodal instruction-following models extend these capabilities by integrating both text and images. However, existing models such as MiniGPT-4 face challenges in maintaining dialogue coherence in scenarios involving multiple images. A primary reason is the lack of a specialized dataset for this critical application. To bridge these gaps, we present SparklesChat, a multimodal instruction-following model for open-ended dialogues across multiple images. To support the training, we introduce SparklesDialogue, the first machine-generated dialogue dataset tailored for word-level interleaved multi-image and text interactions. Furthermore, we construct SparklesEval, a GPT-assisted benchmark for quantitatively assessing a model's conversational competence across multiple images and dialogue turns. Our experiments validate the effectiveness of SparklesChat in understanding and reasoning across multiple images and dialogue turns. Specifically, SparklesChat outperformed MiniGPT-4 on established vision-and-language benchmarks, including the BISON binary image selection task and the NLVR2 visual reasoning task. Moreover, SparklesChat scored 8.56 out of 10 on SparklesEval, substantially exceeding MiniGPT-4's score of 3.91 and nearing GPT-4's score of 9.26. Qualitative evaluations further demonstrate SparklesChat's generality in handling real-world applications. All resources are available at https://github.com/HYPJUDY/Sparkles.
Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation ; Medical image segmentation methods often rely on fully supervised approaches to achieve excellent performance, which is contingent upon having an extensive set of labeled images for training. However, annotating medical images is both expensive and time-consuming. Semi-supervised learning offers a solution by leveraging numerous unlabeled images alongside a limited set of annotated ones. In this paper, we introduce a semi-supervised medical image segmentation method based on the mean-teacher model, referred to as Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation (DCPA). This method combines consistency regularization, pseudo-labels, and data augmentation to enhance the efficacy of semi-supervised segmentation. Firstly, the proposed model comprises both student and teacher models with a shared encoder and two distinct decoders employing different up-sampling strategies. Minimizing the output discrepancy between decoders enforces the generation of consistent representations, serving as regularization during student model training. Secondly, we introduce mixup operations to blend unlabeled data with labeled data, creating mixed data and thereby achieving data augmentation. Lastly, pseudo-labels are generated by the teacher model and utilized as labels for the mixed data to compute the unsupervised loss. We compare the segmentation results of the DCPA model with six state-of-the-art semi-supervised methods on three publicly available medical datasets. Beyond the classical 10% and 20% semi-supervised settings, we investigate performance with less supervision (5% labeled data). Experimental outcomes demonstrate that our approach consistently outperforms existing semi-supervised medical image segmentation methods across the three semi-supervised settings.
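The mixup-plus-pseudo-label step can be sketched as follows. The convex combination of labels and the soft-pseudo-label teacher are standard mixup conventions, assumed here rather than taken from the paper's code.

```python
import numpy as np

def mixup_pseudo(x_l, y_l, x_u, teacher_predict, alpha=0.5, rng=None):
    """Blend labeled and unlabeled batches (mixup) and label the mixture with
    a convex combination of true labels and the teacher's soft pseudo-labels.
    A sketch of DCPA's augmentation step; names are illustrative."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient from Beta(a, a)
    y_u = teacher_predict(x_u)            # pseudo-labels from the teacher model
    x_mix = lam * x_l + (1.0 - lam) * x_u
    y_mix = lam * y_l + (1.0 - lam) * y_u
    return x_mix, y_mix
```

The resulting (x_mix, y_mix) pairs would feed the unsupervised loss of the student model, while the teacher is updated as an exponential moving average, as usual in mean-teacher training.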
Optimal Scaling transformations to model nonlinear relations in GLMs with ordered and unordered predictors ; In Generalized Linear Models (GLMs) it is assumed that there is a linear effect of the predictor variables on the outcome. However, this assumption is often too strict, because in many applications predictors have a nonlinear relation with the outcome. Optimal Scaling (OS) transformations combined with GLMs can deal with this type of relation. Transformations of the predictors have been integrated in GLMs before, e.g. in Generalized Additive Models. However, the OS methodology has several benefits. For example, the levels of categorical predictors are quantified directly, such that they can be included in the model without defining dummy variables. This approach enhances the interpretation and visualization of the effect of different levels on the outcome. Furthermore, monotonicity restrictions can be applied to the OS transformations such that the original ordering of the category values is preserved. This improves the interpretation of the effect and may prevent overfitting. The scaling level can be chosen for each individual predictor, so that models can include mixed scaling levels. In this way, a suitable transformation can be found for each predictor in the model. The implementation of OS in logistic regression is demonstrated using three datasets that contain a binary outcome variable and a set of categorical and/or continuous predictor variables.
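When a monotonicity restriction is imposed on an ordered predictor, the quantification step reduces to a least-squares monotone regression, classically solved by the pool-adjacent-violators algorithm. The sketch below shows that building block in isolation; the full OS procedure alternates such quantification updates with GLM estimation, which is not shown.

```python
def pava(y):
    """Pool-adjacent-violators algorithm: fit the best nondecreasing sequence
    to y in least squares. This is the monotone quantification step used when
    Optimal Scaling preserves the ordering of an ordinal predictor's levels."""
    blocks = [[v, 1] for v in y]          # each block holds [mean, count]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:          # monotonicity violated
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i] = [(n1 * m1 + n2 * m2) / (n1 + n2), n1 + n2]
            del blocks[i + 1]
            if i > 0:                                # re-check previous block
                i -= 1
        else:
            i += 1
    fitted = []
    for m, n in blocks:
        fitted.extend([m] * n)
    return fitted
```

Violating adjacent levels are pooled to their weighted mean, so the returned quantifications are nondecreasing while staying as close as possible to the unconstrained ones.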
Large Process Models: Business Process Management in the Age of Generative AI ; The continued success of Large Language Models (LLMs) and other generative artificial intelligence approaches highlights the advantages that large information corpora can have over rigidly defined symbolic models, but also serves as a proof point of the challenges that purely statistics-based approaches face in terms of safety and trustworthiness. As a framework for contextualizing the potential, as well as the limitations, of LLMs and other foundation-model-based technologies, we propose the concept of a Large Process Model (LPM) that combines the correlation power of LLMs with the analytical precision and reliability of knowledge-based systems and automated reasoning approaches. LPMs are envisioned to directly utilize the wealth of process management experience that experts have accumulated, as well as process performance data of organizations with diverse characteristics, e.g., regarding size, region, or industry. In this vision, the proposed LPM would allow organizations to receive context-specific, tailored process and other business models, analytical deep-dives, and improvement recommendations. As such, it would substantially decrease the time and effort required for business transformation, while also allowing for deeper, more impactful, and more actionable insights than previously possible. We argue that implementing an LPM is feasible, but also highlight limitations and research challenges that need to be solved to implement particular aspects of the LPM vision.
Cone beam neutron interferometry: from modeling to applications ; Phase-grating moiré interferometers (PGMIs) have emerged as promising candidates for the next generation of neutron interferometry, enabling the use of a polychromatic beam and manifesting interference patterns that can be directly imaged by existing neutron cameras. However, the modeling of the various PGMI configurations has been limited to cumbersome numerical calculations and backward-propagation models which often do not enable one to explore the setup parameters. Here we generalize the Fresnel scaling theorem to introduce a k-space model for PGMI setups illuminated by a cone beam, thus enabling an intuitive forward-propagation model for a wide range of parameters. The interference manifested by a PGMI is shown to be a special case of the Talbot effect, and the optimal fringe visibility is shown to occur at the moiré location of the Talbot distances. We derive analytical expressions for the contrast and the propagating intensity profiles in various conditions, and analyze the behaviour of the dark-field imaging signal when considering sample characterization. The model's predictions are compared to experimental measurements and good agreement is found between them. Lastly, we propose and experimentally verify a method to recover contrast at typically inaccessible PGMI autocorrelation lengths. The presented work provides a toolbox for analyzing and understanding existing PGMI setups and their future applications, for example extensions to two-dimensional PGMIs and characterization of samples with non-trivial structures.
Persisting quantum effects in the anisotropic Rabi model at thermal equilibrium ; Quantum correlations and nonclassical states are at the heart of emerging quantum technologies. Efforts to produce long-lived states of such quantum resources are a subject of tireless pursuit. Among the several platforms useful for quantum technology, the mature quantum system of light-matter interactions offers unprecedented advantages due to current on-chip nanofabrication, efficient quantum control of its constituents, and its wide range of operational regimes. Recently, a continuous transition between the Jaynes–Cummings model and the Rabi model has been proposed by exploiting anisotropies in their light-matter interactions, known as the anisotropic quantum Rabi model. In this work, we study the long-lived quantum correlations and nonclassical states generated in the anisotropic Rabi model and how these persist even at thermal equilibrium. To achieve this, we thoroughly analyze several quantumness quantifiers, where the long-lived quantum state is obtained from a dressed master equation that is valid for all coupling regimes and whose steady state is ensured to be the canonical Gibbs state. Furthermore, we demonstrate a stark distinction between virtual excitations produced beyond the strong coupling regime and the quantumness quantifiers once the light-matter interaction has been switched off. This raises the key question of the nature of the equilibrium quantum features generated in the anisotropic quantum Rabi model and paves the way for future experimental investigations, without the need for challenging ground-state cooling.
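For reference, the anisotropic quantum Rabi Hamiltonian referred to here is commonly written as follows (conventions and symbols vary across papers; this particular parametrization is an assumption):

```latex
% Anisotropic quantum Rabi model (\hbar = 1): a cavity mode a coupled to a
% two-level system \sigma_z with separately weighted rotating and
% counter-rotating terms,
H = \omega\, a^{\dagger}a + \frac{\omega_{0}}{2}\,\sigma_{z}
  + g\left[\left(a^{\dagger}\sigma_{-} + a\,\sigma_{+}\right)
  + \lambda\left(a^{\dagger}\sigma_{+} + a\,\sigma_{-}\right)\right].
% \lambda = 0 recovers the Jaynes--Cummings model (rotating terms only);
% \lambda = 1 recovers the isotropic quantum Rabi model.
```

The anisotropy parameter λ is what realizes the continuous transition between the two limiting models mentioned in the abstract.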