We answer a 15-year-old open question about the exact upper bound for bivariate copulas with a given diagonal section by giving an explicit formula for this bound. As an application, we determine the maximal asymmetry of bivariate copulas with a given diagonal section and construct a copula that attains it.
Exact upper bound for copulas with a given diagonal section
Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding $10^{10}$ on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for $^{6}$Li in model spaces up to $N_\mathrm{max} = 22$ and to reveal the $^{4}$He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. We show that this strategy enables information to be gained even from data that are not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
Large-scale exact diagonalizations reveal low-momentum scales of nuclei
Reduced-order models that accurately abstract high-fidelity models and enable faster simulation are vital for real-time, model-based diagnosis applications. In this paper, we outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models to generate reduced-order models from high-fidelity models. We use such models for real-time diagnosis applications. Specifically, we have developed machine learning inspired representations to generate reduced-order component models that preserve, in part, the physical interpretation of the original high-fidelity component models. To ensure the accuracy, scalability and numerical stability of the learning algorithms when training the reduced-order models, we use optimization platforms featuring automatic differentiation. Training data is generated by simulating the high-fidelity model. We showcase our approach in the context of fault diagnosis of a rail switch system. We present three new model abstractions whose complexities, in both the number of equations and simulation time, are two orders of magnitude smaller than the complexity of the high-fidelity model. The numerical experiments and results demonstrate the efficacy of the proposed hybrid modeling approach.
Hybrid modeling: Applications in real-time diagnosis
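The workflow described above, training a reduced-order model on data simulated from a high-fidelity model, can be sketched in a few lines. This is a toy illustration only: a hypothetical nonlinear oscillator stands in for the high-fidelity model, and plain least squares stands in for the autodiff-based training the abstract describes.

```python
import numpy as np

# Hypothetical "high-fidelity" model: a damped nonlinear oscillator,
# simulated with forward Euler to generate training data (the abstract's
# training data likewise comes from simulating the high-fidelity model).
def high_fidelity_step(x, v, dt=0.01):
    a = -x - 0.1 * v - 0.05 * x**3   # nonlinear restoring force
    return x + dt * v, v + dt * a

# Generate a trajectory of training data.
xs, vs = [1.0], [0.0]
for _ in range(2000):
    x, v = high_fidelity_step(xs[-1], vs[-1])
    xs.append(x)
    vs.append(v)

# Reduced-order surrogate: a linear one-step map [x, v] -> [x, v],
# fit by least squares on the simulated data.
S = np.column_stack([xs[:-1], vs[:-1]])     # states at time t
T = np.column_stack([xs[1:], vs[1:]])       # states at time t + dt
A, *_ = np.linalg.lstsq(S, T, rcond=None)   # 2x2 linear surrogate

# One-step prediction error of the reduced model on the training trajectory.
err = np.abs(S @ A - T).max()
```

The linear surrogate is far cheaper to simulate than the nonlinear model while remaining accurate for small amplitudes, which mirrors the two-orders-of-magnitude complexity reduction the abstract reports in spirit, not in scale.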
The inferred values of the cosmological baryon and dark matter densities are strikingly similar, but in most theories of the early universe there is no true explanation of this fact; in particular, the baryon asymmetry, and thus density, depends upon {\it a priori} unknown, and possibly small, CP-violating phases which are independent of all parameters determining the dark matter density. We consider models of dark matter possessing a particle-antiparticle asymmetry where this asymmetry both determines the baryon asymmetry and strongly affects the dark matter density, thus naturally linking $\Omega_{\rm{b}}$ and $\Omega_{\rm{dm}}$. We show that sneutrinos can play the role of such dark matter in a previously studied variant of the MSSM in which the light neutrino masses result from higher-dimensional supersymmetry-breaking terms.
Asymmetric Sneutrino Dark Matter and the Omega(b)/Omega(DM) Puzzle
We show that for a relation $f\subseteq \{0,1\}^n\times \mathcal{O}$ and a function $g:\{0,1\}^{m}\times \{0,1\}^{m} \rightarrow \{0,1\}$ (with $m= O(\log n)$), $$\mathrm{R}_{1/3}(f\circ g^n) = \Omega\left(\mathrm{R}_{1/3}(f) \cdot \left(\log\frac{1}{\mathrm{disc}(M_g)} - O(\log n)\right)\right),$$ where $f\circ g^n$ represents the composition of $f$ and $g^n$, $M_g$ is the sign matrix for $g$, $\mathrm{disc}(M_g)$ is the discrepancy of $M_g$ under the uniform distribution and $\mathrm{R}_{1/3}(f)$ ($\mathrm{R}_{1/3}(f\circ g^n)$) denotes the randomized query complexity of $f$ (randomized communication complexity of $f\circ g^n$) with worst case error $\frac{1}{3}$. In particular, this implies that for a relation $f\subseteq \{0,1\}^n\times \mathcal{O}$, $$\mathrm{R}_{1/3}(f\circ \mathrm{IP}_m^n) = \Omega\left(\mathrm{R}_{1/3}(f) \cdot m\right),$$ where $\mathrm{IP}_m:\{0,1\}^m\times \{0,1\}^m\rightarrow \{0,1\}$ is the Inner Product (modulo $2$) function and $m= O(\log(n))$.
Lifting randomized query complexity to randomized communication complexity
Intrinsic alignments of galaxies are recognised as one of the most important systematics in weak lensing surveys on small angular scales. In this paper we investigate ellipticity correlation functions that are measured separately on elliptical and spiral galaxies, for which we assume the generic alignment mechanisms based on tidal shearing and tidal torquing, respectively. Including morphological information allows us to find linear combinations of measured ellipticity correlation functions which suppress the gravitational lensing signal completely or which show a strongly boosted gravitational lensing signal relative to intrinsic alignments. Specifically, we find that $(i)$ intrinsic alignment spectra can be measured in a model-independent way at a significance of $\Sigma\simeq 60$ with a wide-angle tomographic survey such as Euclid's, $(ii)$ intrinsic alignment model parameters can be determined at percent-level precision, $(iii)$ this measurement is not impeded by misclassifying galaxies and assuming a wrong alignment model, $(iv)$ parameter estimation from a cleaned weak lensing spectrum is possible with almost no bias and $(v)$ the misclassification would not strongly impact parameter estimation from the boosted weak lensing spectrum.
Statistical separation of weak gravitational lensing and intrinsic ellipticities based on galaxy colour information
Experiments have recently shown the feasibility of utilising bacteria as micro-scale robotic devices, with special attention paid to the development of bacteria-driven micro-swimmers taking advantage of built-in actuation and sensing mechanisms of cells. Here we propose a stochastic fluid dynamic model to describe analytically and computationally the dynamics of microscopic particles driven by the motion of surface-attached bacteria undergoing run-and-tumble motion. We compute analytical expressions for the rotational diffusion coefficient, the swimming speed and the effective diffusion coefficient. At short times, the mean squared displacement (MSD) is proportional to the square of the swimming speed, which is independent of the particle size (for fixed density of attached bacteria) and scales linearly with the number of attached bacteria; in contrast, at long times the MSD scales quadratically with the size of the swimmer and is independent of the number of bacteria. We then extend our result to the situation where the surface-attached bacteria undergo chemotaxis within the linear response regime. We demonstrate that bacteria-driven particles are capable of performing artificial chemotaxis, with a chemotactic drift velocity linear in the chemical concentration gradient and independent of the size of the particle. Our results are validated against numerical simulations in the Brownian dynamics limit and will be relevant to the optimal design of micro-swimmers for biomedical applications.
A stochastic model for bacteria-driven micro-swimmers
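The short-time ballistic scaling of the MSD stated above can be checked with a toy run-and-tumble simulation. This is a hedged sketch with hypothetical parameters, not the paper's model: over times much shorter than the mean run time, the MSD should approach $(vt)^2$.

```python
import math
import random

random.seed(0)

v = 1.0             # swimming speed (arbitrary units)
tumble_rate = 1.0   # Poisson tumbling rate
dt = 0.01
n_steps = 10        # total time t = 0.1 << 1 / tumble_rate (short-time regime)
n_traj = 2000

msd = 0.0
for _ in range(n_traj):
    x = y = 0.0
    theta = random.uniform(0.0, 2.0 * math.pi)
    for _ in range(n_steps):
        # Tumble event: pick a completely new random direction.
        if random.random() < tumble_rate * dt:
            theta = random.uniform(0.0, 2.0 * math.pi)
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
    msd += x * x + y * y
msd /= n_traj

t = n_steps * dt
# In the ballistic regime, msd should be close to (v * t) ** 2.
```

At long times the same simulation crosses over to diffusive scaling, MSD $\propto t$, consistent with the effective diffusion coefficient discussed in the abstract.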
Unbalanced mobility and injection of charge carriers in metal-halide perovskite light-emitting devices pose severe limitations to the efficiency and response time of the electroluminescence. Modulation of gate bias in methylammonium lead iodide light-emitting transistors has proven effective to increase the brightness of light emission, up to MHz frequencies. In this work, we developed a new approach to improve charge carrier injection and enhance electroluminescence of perovskite light-emitting transistors by independent control of drain-source and gate-source bias voltages to compensate for space-charge effects. Optimization of bias pulse synchronization induces a fourfold enhancement of the emission intensity. Interestingly, the optimal phase delay between biasing pulses depends on modulation frequency due to the capacitive nature of the devices, which is well captured by numerical simulations of an equivalent electrical circuit. These results provide new insights into the electroluminescence dynamics of AC-driven perovskite light-emitting transistors and demonstrate an effective strategy to optimize device performance through independent control of amplitude, frequency, and phase of the biasing pulses.
Asynchronous Charge Carrier Injection in Perovskite Light-Emitting Transistors
It is obvious that we still do not have any unified framework covering the zoo of interpretations of Quantum Mechanics, nor a satisfactory understanding of the main ingredients of phenomena like entanglement. The starting point is the idea to describe properly the key ingredient of the area, namely point/particle-like objects (physical quantum points/particles or, at least, structureless but quantum objects), and to replace point (wave) functions by sheaves, the sheaf wave functions (Quantum Sheaves). In such an approach, Quantum States are sections of coherent sheaves or contravariant functors from the kinematical category describing space-time to another one, the Quantum Dynamical Category, properly describing the complex dynamics of Quantum Patterns. The objects of this category are filtrations on the functional realization of the Hilbert space of Quantum States. In this Part 2, the sequel of Part 1, we present a family of methods which can describe important details of complex behaviour in quantum ensembles: the creation of nontrivial patterns, localized, chaotic, entangled or decoherent, from the fundamental localized (nonlinear) eigenmodes (in contrast with orthodox gaussian-like ones) in various collective models arising from the quantum hierarchies described by Wigner-like equations.
Quantum points/patterns, Part 2. From quantum points to quantum patterns via multiresolution
We simulate the hadroproduction of a t anti-t pair in association with a hard photon at the LHC using the PowHel package. These events are almost fully inclusive with respect to the photon, allowing for any physically relevant isolation of the photon. We use the generated events, stored according to the Les Houches event format, to make predictions for differential distributions formally at next-to-leading order (NLO) accuracy, and we compare these to existing predictions accurate at NLO using the smooth isolation prescription of Frixione. We also make predictions for distributions after full parton shower and hadronization using the standard experimental cone isolation of the photon.
Hadroproduction of t anti-t pair in association with an isolated photon at NLO accuracy matched with parton shower
We propose a novel framework for 3D-aware object manipulation, called Auto-Encoding Neural Radiance Fields (AE-NeRF). Our model, which is formulated in an auto-encoder architecture, extracts disentangled 3D attributes such as 3D shape, appearance, and camera pose from an image, and a high-quality image is rendered from the attributes through disentangled generative Neural Radiance Fields (NeRF). To improve the disentanglement ability, we present two losses, global-local attribute consistency loss defined between input and output, and swapped-attribute classification loss. Since training such auto-encoding networks from scratch without ground-truth shape and appearance information is non-trivial, we present a stage-wise training scheme, which dramatically helps to boost the performance. We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies.
AE-NeRF: Auto-Encoding Neural Radiance Fields for 3D-Aware Object Manipulation
High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexibility, general penalties, easy interpretation of results, and fast computation in high-dimensional settings. We also outline extensions of our methods leading to novel methods for Non-negative PLS and Generalized PLS, an adaptation of PLS for structured data. We demonstrate the utility of our methods through simulations and a case study on proton Nuclear Magnetic Resonance (NMR) spectroscopy data.
Regularized Partial Least Squares with an Application to NMR Spectroscopy
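The penalized-loadings idea can be sketched for a single component: soft-thresholding the covariance vector $X^\top y$ yields a sparse PLS direction. The numpy illustration below is a minimal sketch on synthetic data, not the paper's SIMPLS relaxation, and the penalty level `lam` is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(z, lam):
    """Elementwise soft-thresholding: the proximal operator of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Synthetic high-dimensional data: only the first 5 of 50 variables matter.
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0
y = X @ beta + 0.1 * rng.standard_normal(n)

# The first PLS direction maximizes cov(Xw, y); an L1-penalized relaxation
# soft-thresholds the sample covariance vector X^T y before normalizing.
c = X.T @ y / n                    # sample covariances with the response
w = soft_threshold(c, lam=1.0)     # sparse loading vector
w /= np.linalg.norm(w)
t_scores = X @ w                   # first latent component
```

The thresholding zeroes out most noise variables while keeping the informative ones, so the latent score remains highly correlated with the response, which is the interpretability advantage the abstract highlights.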
We review the background, theory and general equations for the analysis of equilibrium protein unfolding experiments, focusing on denaturant and heat-induced unfolding. The primary focus is on the thermodynamics of reversible folding/unfolding transitions and the experimental methods that are available for extracting thermodynamic parameters. We highlight the importance of modelling both how the folding equilibrium depends on a perturbing variable such as temperature or denaturant concentration, and the importance of modelling the baselines in the experimental observables.
Linking thermodynamics and measurements of protein stability
We report observation of strong and anisotropic third harmonic generation (THG) in monolayer and multilayer ReS$_2$. The third-order nonlinear optical susceptibility of monolayer ReS$_2$, $\left| \chi^{(3)} \right|$ is on the order of $10^{-18} $ m$^2$/V$^2$, which is about one order of magnitude higher than reported results for hexagonal-lattice transition metal dichalcogenides such as MoS$_2$. A similar magnitude for the third-order nonlinear optical susceptibility was also obtained for a multilayer sample. The intensity of the THG field was found to be dependent on the direction of the incident light polarization for both monolayer and multilayer samples. A point group symmetry analysis shows that such anisotropy is not expected from a perfect $1T$ lattice, and must arise from the distortions in the ReS$_2$ lattice. Our results show that THG measurements can be used to characterize lattice distortions of two-dimensional materials, and that lattice distortions are important for the nonlinear optical properties of such materials.
Strong and Anisotropic Third Harmonic Generation in Monolayer and Multilayer ReS$_2$
In this paper we present strategies for mapping the dialog act annotations of the LEGO corpus into the communicative functions of the ISO 24617-2 standard. Using these strategies, we obtained an additional 347 dialogs annotated according to the standard. This is particularly important given the reduced amount of existing data in those conditions due to the recency of the standard. Furthermore, these are dialogs from a widely explored corpus for dialog related tasks. However, its dialog annotations have been neglected due to their high domain-dependency, which renders them of little use outside the context of the corpus. Thus, through our mapping process, we both obtain more data annotated according to a recent standard and provide useful dialog act annotations for a widely explored corpus in the context of dialog research.
Mapping the Dialog Act Annotations of the LEGO Corpus into the Communicative Functions of ISO 24617-2
AI Hya has been known as an eclipsing binary with a monoperiodic $\delta$ Sct pulsator. We present the results from its {\it TESS} photometry observed during Sector 7. Including our five minimum epochs, the eclipse timing diagram displays the apsidal motion with a rate of $\dot{\omega}$ = 0.075$\pm$0.031 deg year$^{-1}$, which corresponds to an apsidal period of U = 4800$\pm$2000 years. The binary star model shows that the smaller, less massive primary component is 427 K hotter than the pulsating secondary, and our distance of 612$\pm$36 pc is in good agreement with the $Gaia$ distance of 644$\pm$26 pc. We subtracted the binary effects from the observed {\it TESS} data and applied a multifrequency analysis to these residuals. The result reveals that AI Hya is multiperiodic in its pulsation. Of 14 signals detected, four ($f_1$, $f_2$, $f_3$, $f_6$) may be considered independent pulsation frequencies. The period ratios of $P_{\rm pul}/P_{\rm orb}$ = 0.012$-$0.021 and the pulsation constants of $Q$ = 0.30$-$0.52 days correspond to $\delta$ Sct pulsations in binaries. We found that the secondary component of AI Hya pulsates in both radial fundamental $F$ modes ($f_2$ and $f_3$) and non-radial $g_1$ modes with a low degree of $\ell$ = 2 ($f_1$ and $f_6$).
TESS Photometry of the Eclipsing $\delta$ Scuti Star AI Hydrae
In this review, we present econometric and statistical methods for analyzing randomized experiments. For basic experiments we stress randomization-based inference as opposed to sampling-based inference. In randomization-based inference, uncertainty in estimates arises naturally from the random assignment of the treatments, rather than from hypothesized sampling from a large population. We show how this perspective relates to regression analyses for randomized experiments. We discuss the analyses of stratified, paired, and clustered randomized experiments, and we stress the general efficiency gains from stratification. We also discuss complications in randomized experiments such as non-compliance. In the presence of non-compliance we contrast intention-to-treat analyses with instrumental variables analyses allowing for general treatment effect heterogeneity. We consider in detail estimation and inference for heterogeneous treatment effects in settings with (possibly many) covariates. These methods allow researchers to explore heterogeneity by identifying subpopulations with different treatment effects while maintaining the ability to construct valid confidence intervals. We also discuss optimal assignment to treatment based on covariates in such settings. Finally, we discuss estimation and inference in experiments in settings with interactions between units, both in general network settings and in settings where the population is partitioned into groups with all interactions contained within these groups.
The Econometrics of Randomized Experiments
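The randomization-based inference the review stresses can be made concrete with a tiny worked example: under the sharp null of no treatment effect, the outcomes are fixed and only the assignment is random, so enumerating every possible assignment yields the exact null distribution of the difference in means. A minimal Python sketch with hypothetical outcome data:

```python
from itertools import combinations

# Outcomes from a tiny hypothetical completely randomized experiment:
# 8 units, 4 of which were treated.
outcomes = [5.1, 4.9, 6.2, 5.8, 3.0, 3.4, 2.9, 3.6]
treated = {0, 1, 2, 3}  # indices of the units actually treated

def diff_in_means(treat_set):
    t = [outcomes[i] for i in treat_set]
    c = [outcomes[i] for i in range(len(outcomes)) if i not in treat_set]
    return sum(t) / len(t) - sum(c) / len(c)

observed = diff_in_means(treated)

# Exact randomization distribution: all 8-choose-4 = 70 possible assignments.
null_stats = [diff_in_means(set(a)) for a in combinations(range(8), 4)]

# Two-sided exact p-value (the observed assignment is always included).
p_value = sum(abs(s) >= abs(observed) - 1e-12 for s in null_stats) / len(null_stats)
```

Here uncertainty arises purely from the random assignment, with no appeal to sampling from a larger population, which is exactly the contrast with sampling-based inference drawn in the abstract.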
Increases in computational power over the past decades have greatly enhanced the ability to simulate chemical reactions and to understand ever more complex transformations. Tensor contractions are the fundamental computational building block of these simulations. These simulations have often been tied to one platform and restricted in generality by the interface provided to the user. The expanding prevalence of accelerators and researcher demands necessitate a more general approach that is neither tied to specific hardware nor requires contorting algorithms to specific hardware platforms. In this paper we present COMET, a domain-specific programming language and compiler infrastructure for tensor contractions targeting heterogeneous accelerators. We present a system of progressive lowering through multiple layers of abstraction and optimization that achieves up to 1.98X speedup for 30 tensor contractions commonly used in computational chemistry and beyond.
COMET: A Domain-Specific Compilation of High-Performance Computational Chemistry
In this thesis we investigate aspects of two problems. In the first part of this thesis, we concentrate on renormalization group methods in the Hamiltonian framework. We show that the well-known coupled-cluster many-body theory techniques can be incorporated in the Wilsonian renormalization group to provide a very powerful framework for the construction of effective Hamiltonian field theories. Even though the formulation is intrinsically non-perturbative, we show that a loop expansion can be implemented. As illustrative examples, we apply our formulation to the $\Phi^{4}$ theory and an extended Lee model. The many-body problem in an extended Lee model is also studied. We show that a combination of the coupled-cluster theory and the Feshbach projection techniques leads to a renormalized generalized Brueckner theory. The second part of the thesis is rather phenomenologically oriented. In this part, we employ an effective field-theoretical model of the kind constructed by means of the techniques of the first part of the thesis, a quark-confining non-local NJL model, and study baryons and diquarks in this model. After truncation of the two-body channels to the scalar and axial-vector diquarks, a relativistic Faddeev equation for nucleon bound states is solved in the covariant diquark-quark picture. We study the possible implications of quark confinement for the description of the diquarks and the nucleon. We also examine alternative field theoretical approaches for describing baryons.
Topics in quantum field theory: Renormalization groups in Hamiltonian framework and baryon structure in a non-local QCD model
To measure spin-dependent parton distribution functions in the production of W bosons at the Relativistic Heavy Ion Collider, an accurate model for distributions of charged leptons from the W boson decay is needed. We present single-spin lepton-level cross sections of order $\alpha_S$ for this process, as well as resummed cross sections, which include multiple parton radiation effects. We also present a program RhicBos for the numerical analysis of single-spin and double-spin cross sections in the Drell-Yan process, W and Z boson production.
Resummation for single-spin asymmetries in W boson production
Extreme events provide relevant insights into the dynamics of climate and their understanding is key for mitigating the impact of climate variability and climate change. By applying large deviation theory to a state-of-the-art Earth system model, we define the climatology of persistent heatwaves and cold spells in key target geographical regions by estimating the rate functions for the surface temperature, and we assess the impact of increasing CO$_2$ concentration on such persistent anomalies. Hence, we can better quantify the increasing hazard due to heatwaves in a warmer climate. We show that two 2010 high impact events - summer Russian heatwave and winter Dzud in Mongolia - are associated with atmospheric patterns that are exceptional compared to the typical ones, but typical compared to the climatology of extremes. Their dynamics is encoded in the natural variability of the climate. Finally, we propose and test an approximate formula for the return times of large and persistent temperature fluctuations from easily accessible statistical properties.
Fingerprinting Heatwaves and Cold Spells and Assessing Their Response to Climate Change using Large Deviation Theory
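The rate functions at the heart of large deviation theory can be illustrated with a toy computation: for i.i.d. fair-coin variables (a stand-in for the surface temperature anomalies of the abstract), the rate function of the sample mean is the Legendre-Fenchel transform of the cumulant generating function, for which a closed form is available to check against. A minimal sketch in pure Python:

```python
import math

# Scaled cumulant generating function lambda(k) = log E[exp(k X)]
# for X ~ Bernoulli(1/2), a toy stand-in for a climate observable.
def scgf(k):
    return math.log((1.0 + math.exp(k)) / 2.0)

# Rate function via the Legendre-Fenchel transform:
# I(a) = sup_k [k a - lambda(k)], evaluated on a grid of k values.
def rate_function(a, k_grid=None):
    if k_grid is None:
        k_grid = [i * 0.001 for i in range(-10000, 10001)]
    return max(k * a - scgf(k) for k in k_grid)

# Closed form for comparison: I(a) = a ln(2a) + (1 - a) ln(2(1 - a)).
a = 0.7
exact = a * math.log(2 * a) + (1 - a) * math.log(2 * (1 - a))
approx = rate_function(a)
```

The rate function vanishes at the typical value $a = 1/2$ and grows away from it; in the abstract's setting, its value at an anomaly level directly controls how fast the probability of a persistent anomaly decays with its duration, hence the return-time formula.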
We consider the discrete-time counterpart of the thermodynamic uncertainty relation (conjectured in P. Pietzonka et al., arXiv:1702.07699 (2017)) with a finite time interval. We show that this relation does not hold by constructing a concrete counterexample. Our finding suggests that any proof of the thermodynamic uncertainty relation with a finite time interval, if the relation is true, must rely strongly on the fact that time is continuous.
Finite-time thermodynamic uncertainty relation does not hold for discrete-time Markov processes
Privacy preserving networks can be modelled as decentralized networks (e.g., sensors, connected objects, smartphones), where communication between nodes of the network is not controlled by an all-knowing, central node. For this type of networks, the main issue is to gather/learn global information on the network (e.g., by optimizing a global cost function) while keeping the (sensitive) information at each node. In this work, we focus on text information that agents do not want to share (e.g., text messages, emails, confidential reports). We use recent advances on decentralized optimization and topic models to infer topics from a graph with limited communication. We propose a method to adapt latent Dirichlet allocation (LDA) model to decentralized optimization and show on synthetic data that we still recover similar parameters and similar performance at each node than with stochastic methods accessing to the whole information in the graph.
Decentralized Topic Modelling with Latent Dirichlet Allocation
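The communication pattern underlying such decentralized schemes can be illustrated with a gossip-averaging sketch, a hypothetical toy rather than the paper's algorithm: each node holds a private statistic and repeatedly averages with its ring neighbours, so every node converges to the global mean with no central, all-knowing coordinator.

```python
# Decentralized averaging by synchronous gossip on a ring of n nodes.
# Each node mixes its local value with its two neighbours' values using
# uniform weights (1/3 self, 1/3 each neighbour), a doubly stochastic rule
# that preserves the global average at every step.
n = 8
local = [float(i) for i in range(n)]   # each node's private statistic
target = sum(local) / n                # the global quantity to be learned

for _ in range(200):
    local = [(local[(i - 1) % n] + local[i] + local[(i + 1) % n]) / 3.0
             for i in range(n)]
```

In the decentralized LDA setting, the values exchanged would be sufficient statistics of the topic model rather than raw text, which is how the sensitive documents stay at each node.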
The Sylvester equation $AX-XB=C$ is considered in the setting of quaternion matrices. Conditions that are necessary and sufficient for the existence of a unique solution are well known. We study the complementary case, where the equation either has infinitely many solutions or does not have solutions at all. Special attention is given to the case where $A$ and $B$ are, respectively, lower and upper triangular two-diagonal matrices (in particular, when $A$ and $B$ are Jordan blocks).
On the Sylvester matrix equation over quaternions
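In the real-matrix analogue of the equation (the quaternion setting of the paper needs more care), a unique solution exists exactly when the spectra of $A$ and $B$ are disjoint, and it can be computed with the vec trick: $\mathrm{vec}(AX - XB) = (I \otimes A - B^\top \otimes I)\,\mathrm{vec}(X)$ with column-major vec. A numpy sketch, assuming real matrices rather than quaternions:

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve AX - XB = C for real matrices via the Kronecker/vec trick."""
    n, m = A.shape[0], B.shape[0]
    # vec(AX) = (I_m kron A) vec(X), vec(XB) = (B^T kron I_n) vec(X),
    # using column-major (Fortran-order) vectorization.
    K = np.kron(np.eye(m), A) - np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape(n, m, order="F")

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))   # generic matrices: spectra disjoint
B = rng.standard_normal((2, 2))   # almost surely, so the solution is unique
C = rng.standard_normal((3, 2))

X = solve_sylvester(A, B, C)
residual = np.abs(A @ X - X @ B - C).max()
```

When $A$ and $B$ share eigenvalues, the matrix $K$ becomes singular and the system has either no solution or infinitely many, which is precisely the complementary case the paper studies.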
We introduce a new multiparty cryptographic protocol, which we call `entanglement sharing schemes', wherein a dealer retains half of a maximally-entangled bipartite state and encodes the other half into a multipartite state that is distributed among multiple players. In a close analogue to quantum secret sharing, some subsets of players can recover maximal entanglement with the dealer whereas other subsets can recover no entanglement (though they may retain classical correlations with the dealer). We find a lower bound on the share size for such schemes and construct two non-trivial examples based on Shor's $[[9,1,3]]$ code and the $[[4,2,2]]$ stabilizer code; we further demonstrate how other examples may be obtained from quantum error correcting codes through classical encryption. Finally, we demonstrate that entanglement sharing schemes can be applied to characterize leaked information in quantum ramp secret sharing.
Entanglement Sharing Protocol via Quantum Error Correcting Codes
To date, self-driving experimental wheelchair technologies have been either inexpensive or robust, but not both. Yet, in order to achieve real-world acceptance, both qualities are fundamentally essential. We present a unique approach to achieve inexpensive and robust autonomous and semi-autonomous assistive navigation for existing fielded wheelchairs, of which there are approximately 5 million units in Canada and the United States alone. Our prototype wheelchair platform is capable of localization and mapping, as well as robust obstacle avoidance, using only a commodity RGB-D sensor and wheel odometry. As a specific example of the navigation capabilities, we focus on the single most common navigation problem: the traversal of narrow doorways in arbitrary environments. The software we have developed is generalizable to corridor following, desk docking, and other navigation tasks that are either extremely difficult or impossible for people with upper-body mobility impairments.
Cheap or Robust? The Practical Realization of Self-Driving Wheelchair Technology
We present the results of an investigation of the dredge-up and mixing during the merger of two white dwarfs with different chemical compositions by conducting hydrodynamic simulations of binary mergers for three representative mass ratios. In all the simulations, the total mass of the two white dwarfs is $\lesssim1.0~{\rm M_\odot}$. Mergers involving a CO and a He white dwarf have been suggested as a possible formation channel for R Coronae Borealis type stars, and we are interested in testing if such mergers lead to conditions and outcomes in agreement with observations. Even if the conditions during the merger and subsequent nucleosynthesis favor the production of $^{18}{\mathrm O}$, the merger must avoid dredging up large amounts of $^{16}{\mathrm O}$, or else it will be difficult to produce sufficient $^{18}{\mathrm O}$ to explain the oxygen ratio observed to be of order unity. We performed a total of 9 simulations using two different grid-based hydrodynamics codes with fixed and adaptive meshes, and one smoothed particle hydrodynamics (SPH) code. We find that in most of the simulations, $>10^{-2}~{\rm M_\odot}$ of $^{16}{\mathrm O}$ is indeed dredged up during the merger. However, in SPH simulations where the accretor is a hybrid He/CO white dwarf with a $\sim 0.1~{\rm M_\odot}$ layer of helium on top, we find that no $^{16}{\mathrm O}$ is being dredged up, while in the $q=0.8$ simulation $<10^{-4}~{\rm M_\odot}$ of $^{16}{\mathrm O}$ has been brought up, making a WD binary consisting of a hybrid CO/He WD and a companion He WD an excellent candidate for the progenitor of RCB stars.
The role of dredge-up in double white dwarf mergers
Previously, we have identified the cytoplasmic zinc metalloprotease insulin-degrading enzyme (IDE) in human tissues by an immunohistochemical method involving no antigen retrieval (AR) by pressure cooking to avoid artifacts by endogenous biotin exposure and a detection kit based on the labeled streptavidin biotin (LSAB) method. Thereby, we also employed 3% hydrogen peroxide (H2O2) for the inhibition of endogenous peroxidase activity and incubated the tissue sections with the biotinylated secondary antibody at room temperature (RT). We now add the immunohistochemical details that had led us to this optimized procedure as they also bear a more general relevance when demonstrating intracellular tissue antigens. Our most important result is that endogenous peroxidase inhibition by 0.3% H2O2 coincided with an apparently positive IDE staining in an investigated breast cancer specimen whereas combining a block by 3% H2O2 with an incubation of the biotinylated secondary antibody at RT, yet not at 37 degrees Celsius, revealed this specimen as almost entirely IDE-negative. Our present data caution against three different immunohistochemical pitfalls that might cause falsely positive results and artifacts when using an LSAB- and peroxidase-based detection method: pressure cooking for AR, insufficient quenching of endogenous peroxidases and heating of tissue sections while incubating with biotinylated secondary antibodies.
Immunohistochemical pitfalls in the demonstration of insulin-degrading enzyme in normal and neoplastic human tissues
In this paper we present the results of a search for members of the globular cluster Palomar 5 and its associated tidal tails. The analysis has been performed using intermediate and low resolution spectroscopy with the AAOmega spectrograph on the Anglo-Australian Telescope. Based on kinematics, line strength and photometric information, we identify 39 new red giant branch stars along $\sim$20$^{\circ}$ of the tails, a larger angular extent than has been previously studied. We also recover eight previously known tidal tail members. Within the cluster, we find seven new red giant and one blue horizontal branch members and confirm a further twelve known red giant members. In total, we provide velocity data for 67 stars in the cluster and the tidal tails. Using a maximum likelihood technique, we derive a radial velocity for Pal 5 of $-57.4 \pm 0.3$ km s$^{-1}$ and a velocity dispersion of $1.2\pm0.3$ km s$^{-1}$. We confirm and extend the linear velocity gradient along the tails of $1.0 \pm 0.1$ km s$^{-1}$ deg$^{-1}$, with an associated intrinsic velocity dispersion of $2.1\pm0.4$ km s$^{-1}$. Neither the velocity gradient nor the dispersion change in any significant way with angular distance from the cluster, although there is some indication that the gradient may be smaller at greater angular distances in the trailing tail. Our results verify the tails as kinematically cold structures and will allow further constraints to be placed on the orbit of Pal 5, ultimately permitting a greater understanding of the shape and extent of the Galaxy's dark matter halo.
Palomar 5 and its Tidal Tails: A Search for New Members in the Tidal Stream
Learning transferable representations of knowledge graphs (KGs) is challenging due to the heterogeneous, multi-relational nature of graph structures. Inspired by the success of Transformer-based pretrained language models in learning transferable representations for texts, we introduce a novel inductive KG representation model (iHT) for KG completion by large-scale pre-training. iHT consists of an entity encoder (e.g., BERT) and a neighbor-aware relational scoring function, both parameterized by Transformers. We first pre-train iHT on a large KG dataset, Wikidata5M. Our approach achieves new state-of-the-art results on matched evaluations, with a relative improvement of more than 25% in mean reciprocal rank over previous SOTA models. When further fine-tuned on smaller KGs with entity and relational shifts, pre-trained iHT representations are shown to be transferable, significantly improving the performance on FB15K-237 and WN18RR.
Pre-training Transformers for Knowledge Graph Completion
In this paper, we first prove a folklore conjecture on a greatest lower bound of the Calabi energy in all K\"ahler manifold. Similar result in algebriac setting was obtained by S. K. Donaldson. Secondly, we give an upper/lower bound estimate of the K energy in terms of the geodesic distance and the Calabi energy. This is used to prove a theorem on convergence of K\"ahler metrics in holomorphic coordinates, with uniform bound on the Ricci curvature and the diameter. Thirdly, we set up a framework for the existence of geodesic rays when an asymptotic direction is given. I
Space of K\"ahler metrics III--On the lower bound of the Calabi energy and geodesic distance
A cross-layer design along with an optimal resource allocation framework is formulated for wireless fading networks, where the nodes are allowed to perform network coding. The aim is to jointly optimize end-to-end transport layer rates, network code design variables, broadcast link flows, link capacities, average power consumption, and short-term power allocation policies. As in the routing paradigm where nodes simply forward packets, the cross-layer optimization problem with network coding is non-convex in general. It is proved however, that with network coding, dual decomposition for multicast is optimal so long as the fading at each wireless link is a continuous random variable. This lends itself to provably convergent subgradient algorithms, which not only admit a layered-architecture interpretation but also optimally integrate network coding in the protocol stack. The dual algorithm is also paired with a scheme that yields near-optimal network design variables, namely multicast end-to-end rates, network code design quantities, flows over the broadcast links, link capacities, and average power consumption. Finally, an asynchronous subgradient method is developed, whereby the dual updates at the physical layer can be affordably performed with a certain delay with respect to the resource allocation tasks in upper layers. This attractive feature is motivated by the complexity of the physical layer subproblem, and is an adaptation of the subgradient method suitable for network control.
Cross-Layer Designs in Coded Wireless Fading Networks with Multicast
We report the results of a multi-band observing campaign on the famous blazar 3C 279 conducted during a phase of increased activity from 2013 December to 2014 April, including first observations of it with NuSTAR. The $\gamma$-ray emission of the source measured by Fermi-LAT showed multiple distinct flares reaching the highest flux level measured in this object since the beginning of the Fermi mission, with $F(E > 100\,{\rm MeV})$ of $10^{-5}$ photons cm$^{-2}$ s$^{-1}$, and with a flux doubling time scale as short as 2 hours. The $\gamma$-ray spectrum during one of the flares was very hard, with an index of $\Gamma_\gamma = 1.7 \pm 0.1$, which is rarely seen in flat spectrum radio quasars. The lack of concurrent optical variability implies a very high Compton dominance parameter $L_\gamma/L_{\rm syn} > 300$. Two 1-day NuSTAR observations with accompanying Swift pointings were separated by 2 weeks, probing different levels of source activity. While the 0.5$-$70 keV X-ray spectrum obtained during the first pointing, and fitted jointly with Swift-XRT is well-described by a simple power law, the second joint observation showed an unusual spectral structure: the spectrum softens by $\Delta\Gamma_{\rm X} \simeq 0.4$ at $\sim$4 keV. Modeling the broad-band SED during this flare with the standard synchrotron plus inverse Compton model requires: (1) the location of the $\gamma$-ray emitting region is comparable with the broad line region radius, (2) a very hard electron energy distribution index $p \simeq 1$, (3) total jet power significantly exceeding the accretion disk luminosity $L_{\rm j}/L_{\rm d} \gtrsim 10$, and (4) extremely low jet magnetization with $L_{\rm B}/L_{\rm j} \lesssim 10^{-4}$. We also find that single-zone models that match the observed $\gamma$-ray and optical spectra cannot satisfactorily explain the production of X-ray emission.
Rapid Variability of Blazar 3C 279 during Flaring States in 2013-2014 with Joint Fermi-LAT, NuSTAR, Swift, and Ground-Based Multi-wavelength Observations
Orthogonality relations for conical or Mehler functions of imaginary order are derived and expressed in terms of the Dirac delta function. This work extends recently derived orthogonality relations of associated Legendre functions.
Orthogonality relations for conical functions of imaginary order
Kondo insulators have recently aroused great interest because they are promising materials for hosting a topological insulator state caused by strong electron interactions. Moreover, recent observations of quantum oscillations in the insulating state of Kondo insulators have come as a great surprise. Here, to investigate the surface electronic state of the prototype Kondo insulator YbB$_{12}$, we measured transport properties of single crystals and microstructures. In all samples, the temperature dependence of the electrical resistivity is insulating at high temperatures and the resistivity exhibits a plateau at low temperatures. The magnitude of the plateau value decreases with decreasing sample thickness, which is quantitatively consistent with surface electronic conduction in bulk-insulating YbB$_{12}$. Moreover, the magnetoresistance of the microstructures exhibits a weak-antilocalization effect at low field. These results are consistent with the presence of a topologically protected surface state, suggesting that YbB$_{12}$ is a candidate material for a topological Kondo insulator. High-field resistivity measurements up to $\mu_0H$ = 50 T on the microstructures provide supporting evidence that the quantum oscillations of the resistivity in YbB$_{12}$ occur in the insulating bulk.
Topological surface conduction in Kondo insulator YbB$_{12}$
PDS 70 is a $\sim$5 Myr old star with a gas and dust disc in which several proto-planets have been discovered. We present the first UV detection of the system along with X-ray observations taken with the \textit{Neil Gehrels Swift Observatory} satellite. PDS 70 has an X-ray flux of 3.4$\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ in the 0.3-10.0 keV range, and a UV flux (U band) of 3.5$\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$. At the distance of 113.4 pc determined from Gaia DR2, this gives luminosities of 5.2$\times 10^{29}$ erg s$^{-1}$ and 5.4$\times 10^{29}$ erg s$^{-1}$, respectively. The X-ray luminosity is consistent with coronal emission from a rapidly rotating star close to the log $\frac{L_{\mathrm{X}}}{L_{\mathrm{bol}}} \sim -3$ saturation limit. We find the UV luminosity is much lower than would be expected if the star were still accreting disc material, and suggest that the observed UV emission is coronal in origin.
A Swift view of X-ray and UV radiation in the planet-forming T-Tauri system PDS 70
We derive the equations of motion for a system undergoing boost-invariant longitudinal and azimuthally-symmetric transverse "Gubser flow" using leading-order anisotropic hydrodynamics. This is accomplished by assuming that the one-particle distribution function is ellipsoidally-symmetric in the momenta conjugate to the de Sitter coordinates used to parameterize the Gubser flow. We then demonstrate that the SO(3)_q symmetry in de Sitter space further constrains the anisotropy tensor to be of spheroidal form. The resulting system of two coupled ordinary differential equations for the de Sitter-space momentum scale and anisotropy parameter are solved numerically and compared to a recently obtained exact solution of the relaxation-time-approximation Boltzmann equation subject to the same flow. We show that anisotropic hydrodynamics describes the spatio-temporal evolution of the system better than all currently known dissipative hydrodynamics approaches. In addition, we prove that anisotropic hydrodynamics gives the exact solution of the relaxation-time approximation Boltzmann equation in the ideal, eta/s -> 0, and free-streaming, eta/s -> infinity, limits.
Anisotropic hydrodynamics for conformal Gubser flow
We study global fluctuations of the guanine and cytosine base content (GC%) in mouse genomic DNA using spectral analyses. Power spectra S(f) of GC% fluctuations in all nineteen autosomal and two sex chromosomes are observed to have the universal functional form S(f) \sim 1/f^alpha (alpha \approx 1) over several orders of magnitude in the frequency range 10^-7< f < 10^-5 cycle/base, corresponding to long-ranging GC% correlations at distances between 100 kb and 10 Mb. S(f) for higher frequencies (f > 10^-5 cycle/base) shows a flattened power-law function with alpha < 1 across all twenty-one chromosomes. The substitution of about 38% interspersed repeats does not affect the functional form of S(f), indicating that these are not predominantly responsible for the long-ranged multi-scale GC% fluctuations in mammalian genomes. Several biological implications of the large-scale GC% fluctuation are discussed, including neutral evolutionary history by DNA duplication, chromosomal bands, spatial distribution of transcription units (genes), replication timing, and recombination hot spots.
Spectral Analysis of Guanine and Cytosine Fluctuations of Mouse Genomic DNA
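The segment-averaged spectral estimate described in this abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: it maps a DNA string to a binary GC indicator, averages periodograms over fixed-length segments, and estimates the exponent alpha from a log-log fit; the function names and the segment length are our own choices.

```python
import numpy as np

def gc_power_spectrum(seq, segment=2**14):
    """Averaged periodogram of the binary GC indicator of a DNA string."""
    # Map the sequence to a binary GC indicator: 1 for G/C, 0 for A/T.
    x = np.fromiter((1.0 if b in "GCgc" else 0.0 for b in seq), dtype=float)
    x -= x.mean()
    n_seg = len(x) // segment
    spectra = []
    for i in range(n_seg):
        w = x[i * segment:(i + 1) * segment]
        spectra.append(np.abs(np.fft.rfft(w)) ** 2 / segment)
    f = np.fft.rfftfreq(segment, d=1.0)  # frequency in cycles per base
    # Drop the zero-frequency bin before taking logs downstream.
    return f[1:], np.mean(spectra, axis=0)[1:]

def spectral_exponent(f, S):
    """Least-squares slope of log S vs log f; alpha near 1 indicates 1/f behaviour."""
    slope, _ = np.polyfit(np.log(f), np.log(S), 1)
    return -slope
```

For an uncorrelated (shuffled) sequence this estimator should return alpha near 0, which is one standard sanity check before interpreting alpha of order 1 as long-range correlation.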
A new theoretical approach for the calculation of optical properties of complex solutions is proposed. It is based on a dielectric matrix containing small metallic inclusions (less than 3 nm) of spherical shape. We take into account the mutual interactions between the inclusions and quantum finite-size effects. On the basis of the effective-medium model, TDLDA and Kohn-Sham theories, analytical expressions for the effective dielectric permittivity of the solution are obtained.
Size-dependent effects in solutions of small metal nanoparticles
Goerss, Henn, Mahowald and Rezk construct a complex of permutation modules for the Morava stabilizer group G_2 at the prime 3. We describe how this can be done using techniques from homological algebra.
On the construction of permutation complexes for profinite groups
We consider the problem of identifying whether findings replicate from one study of high dimension to another, when the primary study guides the selection of hypotheses to be examined in the follow-up study as well as when there is no division of roles into the primary and the follow-up study. We show that existing meta-analysis methods are not appropriate for this problem, and suggest novel methods instead. We prove that our multiple testing procedures control for appropriate error-rates. The suggested FWER controlling procedure is valid for arbitrary dependence among the test statistics within each study. A more powerful procedure is suggested for FDR control. We prove that this procedure controls the FDR if the test statistics are independent within the primary study, and independent or have dependence of type PRDS in the follow-up study. For arbitrary dependence within the primary study, and either arbitrary dependence or dependence of type PRDS in the follow-up study, simple conservative modifications of the procedure control the FDR. We demonstrate the usefulness of these procedures via simulations and real data examples.
Discovering findings that replicate from a primary study of high dimension to a follow-up study
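The FWER- and FDR-controlling procedures proposed in this paper are specific constructions that are not reproduced here. Purely as a point of comparison, the sketch below implements the standard Benjamini-Hochberg step-up procedure together with a naive replicability heuristic that applies it to the per-hypothesis maximum of the two studies' p-values; this baseline does not carry the paper's error-rate guarantees, and all names are our own.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Standard BH step-up procedure; returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    # Step-up: reject the k smallest p-values, where k is the largest
    # index whose sorted p-value falls below its BH threshold.
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def naive_replicability(p_primary, p_followup, q=0.05):
    """Toy heuristic (NOT the paper's procedure): BH on the per-hypothesis
    maximum of the two studies' p-values."""
    p_max = np.maximum(p_primary, p_followup)
    return benjamini_hochberg(p_max, q)
```

A finding is flagged only if it is small in both studies, which is the intuition behind replicability analysis, even though this particular combination rule is only a baseline.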
A concept of the Moufang-Mal'tsev pair is elaborated. This concept is based on the generalized Maurer-Cartan equations of a local analytic Moufang loop. Triality can be seen as a fundamental property of such pairs. Based on triality, the Yamagutian is constructed and its properties are studied.
Moufang symmetry II. Moufang-Mal'tsev pairs and triality
Observations of the Spitzer extragalactic First Look Survey field taken at 610 MHz with the Giant Metrewave Radio Telescope are presented. Seven individual pointings were observed, covering an area of 4 square degrees with a resolution of 5.8'' x 4.7'', PA 60 deg. The r.m.s. noise at the centre of the pointings is between 27 and 30 microJy before correction for the GMRT primary beam. The techniques used for data reduction and production of a mosaicked image of the region are described, and the final mosaic, along with a catalogue of 3944 sources detected above 5 sigma, are presented. The survey complements existing radio and infrared data available for this region.
Deep 610-MHz GMRT observations of the Spitzer extragalactic First Look Survey field - I. Observations, data analysis and source catalogue
The connection between an intrinsic breach of symmetry of equilibrium motion and a violation of the second law is accentuated. An intrinsic breach of only the clockwise/counter-clockwise symmetry of a circular equilibrium motion can be logically consistent with equilibrium conditions, whereas a breach of right-left symmetry would always be an actual violation of the second law. The reader's attention is drawn to experimental evidence of an intrinsic breach of the clockwise/counter-clockwise symmetry of a circular equilibrium motion, well known as the persistent current. The persistent current is observed in mesoscopic normal-metal, semiconductor and superconductor loops, and the clockwise/counter-clockwise symmetry is broken because of the discrete spectrum of the permitted states of quantum charged particles in a closed loop. The quantum oscillations of the dc voltage observed on a segment of an asymmetric superconducting loop are experimental evidence of an intrinsic breach of the right-left symmetry and an actual violation of the second law.
Quantum limits to the second law and breach of symmetry
A detector material or configuration that can provide an unambiguous indication of neutron capture can substantially reduce random coincidence backgrounds in antineutrino detection and capture-gated neutron spectrometry applications. Here we investigate the performance of such a material, a composite of plastic scintillator and $^6$Li$_6^{nat}$Gd$(^{10}$BO$_{3})_{3}$:Ce (LGB) crystal shards of ~1 mm dimension and comprising 1% of the detector by mass. While it is found that the optical propagation properties of this material as currently fabricated are only marginally acceptable for antineutrino detection, its neutron capture identification ability is encouraging.
Investigation of Large LGB Detectors for Antineutrino Detection
Dynamic optimization problems have gained significant attention in evolutionary computation, as evolutionary algorithms (EAs) can easily adapt to changing environments. We show that EAs can solve the graph coloring problem for bipartite graphs more efficiently by using dynamic optimization. In our approach the graph instance is given incrementally, such that the EA can reoptimize its coloring when a new edge introduces a conflict. We show that, when edges are inserted in a way that preserves graph connectivity, Randomized Local Search (RLS) efficiently finds a proper 2-coloring for all bipartite graphs. This includes graphs for which RLS and other EAs need exponential expected time in a static optimization scenario. We investigate different ways of building up the graph by popular graph traversals such as breadth-first search and depth-first search and analyse the resulting runtime behavior. We further show that offspring populations (e.g., the (1+$\lambda$) RLS) lead to an exponential speedup in $\lambda$. Finally, an island model using 3 islands succeeds in an optimal time of $\Theta(m)$ on every $m$-edge bipartite graph, outperforming offspring populations. This is the first example where an island model guarantees a speedup that is not bounded in the number of islands.
More Effective Randomized Search Heuristics for Graph Coloring Through Dynamic Optimization
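The dynamic reoptimization scenario above can be sketched as follows. This is a minimal illustration under our own simplifying assumptions: a plain RLS step that flips a uniformly random vertex and accepts non-worsening moves, plus a fixed step budget per edge insertion (the paper's results concern expected runtimes, not a fixed budget).

```python
import random

def n_conflicts(edges, color):
    """Number of monochromatic (conflicting) edges under the current coloring."""
    return sum(1 for u, v in edges if color[u] == color[v])

def rls_step(vertices, edges, color, rng):
    """One RLS step: flip a uniformly random vertex, accept the flip if the
    number of conflicts does not increase."""
    v = rng.choice(vertices)
    before = n_conflicts(edges, color)
    color[v] ^= 1
    if n_conflicts(edges, color) > before:
        color[v] ^= 1  # revert the worsening flip

def dynamic_two_coloring(vertices, edge_stream, rng=None, budget=20000):
    """Insert edges one at a time; after each insertion run RLS until the
    current partial graph is properly 2-colored (or the step budget runs out)."""
    rng = rng or random.Random(0)
    color = {v: 0 for v in vertices}
    edges = []
    for e in edge_stream:
        edges.append(e)
        steps = 0
        while n_conflicts(edges, color) > 0 and steps < budget:
            rls_step(vertices, edges, color, rng)
            steps += 1
    return color, n_conflicts(edges, color) == 0
```

Feeding edges in an order that keeps the graph connected (e.g., along a BFS or DFS traversal) is exactly the regime in which the paper shows RLS succeeds efficiently.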
We investigate cosmic string networks in the Abelian Higgs model using data from a campaign of large-scale numerical simulations on lattices of up to $4096^3$ grid points. We observe scaling or self-similarity of the networks over a wide range of scales, and estimate the asymptotic values of the mean string separation in horizon length units $\dot{\xi}$ and of the mean square string velocity $\bar v^2$ in the continuum and large time limits. The scaling occurs because the strings lose energy into classical radiation of the scalar and gauge fields of the Abelian Higgs model. We quantify the energy loss with a dimensionless radiative efficiency parameter, and show that it does not vary significantly with lattice spacing or string separation. This implies that the radiative energy loss underlying the scaling behaviour is not a lattice artefact, and justifies the extrapolation of measured network properties to large times for computations of cosmological perturbations. We also show that the core growth method, which increases the defect core width with time to extend the dynamic range of simulations, does not introduce significant systematic error. We compare $\dot{\xi}$ and $\bar v^2$ to values measured in simulations using the Nambu-Goto approximation, finding that the latter underestimate the mean string separation by about 25%, and overestimate $\bar v^2$ by about 10%. The scaling of the string separation implies that string loops decay by the emission of massive radiation within a Hubble time in field theory simulations, in contrast to the Nambu-Goto scenario which neglects this energy loss mechanism. String loops surviving for only one Hubble time emit much less gravitational radiation than in the Nambu-Goto scenario, and are consequently subject to much weaker gravitational wave constraints on their tension.
Scaling from gauge and scalar radiation in Abelian Higgs string networks
We report on strong cooling and orientational control of all translational and angular degrees of freedom of a nanoparticle levitated in an optical trap in high vacuum. The motional cooling and control of all six degrees of freedom of a nanoparticle levitated by an optical tweezer is accomplished using coherent elliptic scattering within a high finesse optical cavity. Translational temperatures in the 100 $\mu$K range were reached while temperatures as low as 5 mK were attained in the librational degrees of freedom. This work represents an important milestone in controlling all observable degrees of freedom of a levitated particle and opens up future applications in quantum science and the study of single isolated nanoparticles.
Simultaneous cooling of all six degrees of freedom of an optically levitated nanoparticle by elliptic coherent scattering
When does Internet traffic cross international borders? This question has major geopolitical, legal and social implications and is surprisingly difficult to answer. A critical stumbling block is a dearth of tools that accurately map routers traversed by Internet traffic to the countries in which they are located. This paper presents Passport: a new approach for efficient, accurate country-level router geolocation and a system that implements it. Passport provides location predictions with limited active measurements, using machine learning to combine information from IP geolocation databases, router hostnames, whois records, and ping measurements. We show that Passport substantially outperforms existing techniques, and identify cases where paths traverse countries with implications for security, privacy, and performance.
Passport: Enabling Accurate Country-Level Router Geolocation using Inaccurate Sources
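The abstract does not specify Passport's learned model, so the toy sketch below only illustrates two ingredients such geolocation systems commonly combine: a speed-of-light feasibility filter on ping RTTs, and a simple vote over per-source country scores. All function names, the score format, and the fiber propagation constant are our own assumptions.

```python
def feasible(distance_km, rtt_ms, fiber_kms_per_ms=200.0):
    """A router cannot be farther from the prober than light in fiber
    (roughly 200 km per ms, one way) allows for the observed RTT."""
    return distance_km <= (rtt_ms / 2.0) * fiber_kms_per_ms

def geolocate(candidates, source_scores, rtt_ms, distances_km):
    """Toy country-level geolocation: drop candidate countries that violate
    the RTT constraint, then pick the one with the highest summed score
    across sources (geolocation DBs, hostnames, whois, ...)."""
    ok = [c for c in candidates if feasible(distances_km[c], rtt_ms)]
    if not ok:
        return None
    return max(ok, key=lambda c: sum(s.get(c, 0.0) for s in source_scores))
```

The point of the filter is that a short RTT can definitively rule out a distant country even when every passive source votes for it, which is one way active measurements correct inaccurate databases.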
We show that a well-known result on solutions of the Maurer-Cartan equation extends to arbitrary (inhomogeneous) odd forms: any such form with values in a Lie superalgebra satisfying $d\omega+\omega^2=0$ is gauge-equivalent to a constant, $$\omega=gCg^{-1}-dg\,g^{-1}.$$ This follows from a non-Abelian version of a chain homotopy formula making use of multiplicative integrals. An application to Lie algebroids and their non-linear analogs is given. Constructions presented here generalize to an abstract setting of differential Lie superalgebras, where we arrive at the statement that odd elements (not necessarily satisfying the Maurer-Cartan equation) are homotopic, in a certain particular sense, if and only if they are gauge-equivalent.
On a non-Abelian Poincar\'e lemma
This paper deals with the partial solution of the energy eigenvalue problem for generalized symmetric quartic oscillators. Algebraization of the problem is achieved by expressing the Schroedinger operator in terms of the generators of a nilpotent group, which we call the quartic group. Energy eigenvalues are then seen to depend on the values of the two Casimir operators of the group. This dependence exhibits a scaling law which follows from the scaling properties of the group generators. Demanding that the potential give rise to polynomial solutions in a particular Lie algebra element puts constraints on the four potential parameters, leaving only two of them free. For potentials satisfying such constraints, at least one of the energy eigenvalues and the corresponding eigenfunctions can be obtained in closed analytic form by pure algebraic means. With our approach we extend the class of quasi-exactly solvable quartic oscillators which have been obtained in the literature by means of the more common sl(2,R) algebraization. Finally we show how solutions of the generalized quartic oscillator problem give rise to solutions for a charged particle moving in particular non-constant electromagnetic fields.
Polynomial Solutions of Generalized Quartic Anharmonic Oscillators
One of the great challenges of QCD is trying to understand the origin of the nucleon spin. Several decades of experimental measurements have shown that our current understanding is incomplete if only the quark and gluon spin contribution is considered. Over the last few years it has become increasingly clear that the contribution from the orbital angular momentum of the quarks and gluons has to be included as well. For instance, the sea quark orbital contribution remains largely unexplored. Measurements accessing the sea quark Sivers distribution will provide a probe of the sea quark orbital contribution. The upcoming E1039 experiment at Fermilab will access this distribution via the Drell-Yan process using a 120 GeV unpolarized proton beam directed on a polarized proton target. At E1039 kinematics the $u$-$\bar{u}$ annihilation process dominates the Drell-Yan cross section ($x_{Target}$ = 0.1 $\sim$ 0.35). If the $\bar{u}$ quark carries zero net angular momentum, then the measured Drell-Yan single-spin asymmetry should be zero, and vice versa. This experiment is a continuation of the currently running SeaQuest experiment.
A Future Polarized Drell-Yan Experiment at Fermilab
Let $\Pi_n^d$ denote the space of all spherical polynomials of degree at most $n$ on the unit sphere $\mathbb{S}^d$ of $\mathbb{R}^{d+1}$, and let $d(x, y)$ denote the usual geodesic distance $\arccos x\cdot y$ between $x, y\in \mathbb{S}^d$. Given a spherical cap $$ B(e,\alpha)=\{x\in\mathbb{S}^d: d(x, e) \leq \alpha\}, \quad e\in\mathbb{S}^d,\ \text{$\alpha\in (0,\pi)$ bounded away from $\pi$},$$ we define the metric $$\rho(x,y):=\frac 1{\alpha} \sqrt{(d(x, y))^2+\alpha\bigl(\sqrt{\alpha-d(x, e)}-\sqrt{\alpha-d(y,e)}\bigr)^2}, $$ where $x, y\in B(e,\alpha)$. It is shown that given any $\beta\ge 1$, $1\leq p<\infty$ and any finite subset $\Lambda$ of $B(e,\alpha)$ satisfying the condition $$\min_{\substack{\xi,\eta \in \Lambda\\ \xi\neq \eta}} \rho (\xi,\eta) \ge \frac \delta n, \quad \delta\in (0,1],$$ there exists a positive constant $C$, independent of $\alpha$, $n$, $\Lambda$ and $\delta$, such that, for any $f\in\Pi_{n}^d$, \begin{equation*} \sum_{\omega\in \Lambda} \Bigl(\max_{x,y\in B_\rho (\omega, \beta\delta/n)}|f(x)-f(y)|^p\Bigr) |B_\rho(\omega, \delta/n)| \le (C \delta)^p \int_{B(e,\alpha)} |f(x)|^p d\sigma(x),\end{equation*} where $d\sigma(x)$ denotes the usual Lebesgue measure on $\mathbb{S}^d$, $$B_\rho(x, r)=\bigl\{y\in B(e,\alpha): \rho(y,x)\leq r\bigr\}, \quad r>0,$$ and $$\Bigl|B_\rho\Bigl(x, \frac\delta n\Bigr)\Bigr|=\int_{B_{\rho}(x, \delta/n)} d\sigma(y) \sim \alpha ^{d}\Bigl[ \Bigl(\frac{\delta}n\Bigr)^{d+1}+ \Bigl(\frac\delta n\Bigr)^{d} \sqrt{1-\frac{d(x, e)}\alpha}\Bigr].$$ As a consequence, we establish positive cubature formulas and Marcinkiewicz-Zygmund inequalities on the spherical cap $B(e,\alpha)$.
Positive Cubature formulas and Marcinkiewicz-Zygmund inequalities on spherical caps
In this paper, we propose a supervised dictionary learning algorithm that aims to preserve the local geometry in both dimensions of the data. A graph-based regularization explicitly takes into account the local manifold structure of the data points. A second graph regularization gives similar treatment to the feature domain and helps in learning a more robust dictionary. Both graphs can be constructed from the training data or learned and adapted along the dictionary learning process. The combination of these two terms promotes the discriminative power of the learned sparse representations and leads to improved classification accuracy. The proposed method was evaluated on several different datasets, representing both single-label and multi-label classification problems, and demonstrated better performance compared with other dictionary based approaches.
Structure-Aware Classification using Supervised Dictionary Learning
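The two graph-regularization terms described above can be written down explicitly. The sketch below mirrors the structure of the objective in the abstract (reconstruction error plus smoothness of the sparse codes over a sample graph and of the dictionary atoms over a feature graph), but it is a hedged illustration: the paper's exact objective and update rules are not reproduced, and the names and weights are our own.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized Laplacian L = diag(W 1) - W of a symmetric affinity matrix."""
    return np.diag(W.sum(axis=1)) - W

def dual_graph_dl_objective(X, D, A, L_samples, L_features, alpha=0.1, beta=0.1):
    """Illustrative dual graph-regularized dictionary learning objective:
    ||X - D A||_F^2 + alpha * tr(A L_s A^T) + beta * tr(D^T L_f D).
    X: features x samples, D: dictionary, A: sparse codes."""
    recon = np.linalg.norm(X - D @ A, "fro") ** 2
    code_smooth = np.trace(A @ L_samples @ A.T)   # codes vary smoothly over samples
    dict_smooth = np.trace(D.T @ L_features @ D)  # atoms vary smoothly over features
    return recon + alpha * code_smooth + beta * dict_smooth
```

Since both Laplacians are positive semidefinite, each trace term is nonnegative and penalizes representations that differ sharply across strongly connected graph neighbors.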
A probabilistic representation for a class of weighted $p$-radial distributions, based on mixtures of a weighted cone probability measure and a weighted uniform distribution on the Euclidean $\ell_p^n$-ball, is derived. Large deviation principles for the empirical measure of the coordinates of random vectors on the $\ell_p^n$-ball with distribution from this weighted measure class are discussed. The class of $p$-radial distributions is extended to $p$-balls in classical matrix spaces, both for self-adjoint and non-self-adjoint matrices. The eigenvalue distribution of a self-adjoint random matrix, chosen in the matrix $p$-ball according to such a distribution, is determined. Similarly, the singular value distribution is identified in the non-self-adjoint case. Again, large deviation principles for the empirical spectral measures for the eigenvalues and the singular values are presented as an application.
Weighted $p$-radial Distributions on Euclidean and Matrix $p$-balls with Applications to Large Deviations
Despite the fact that the Schwarzschild and Kerr solutions for the Einstein equations, when written in standard Schwarzschild and Boyer-Lindquist coordinates, present coordinate singularities, all numerical studies of accretion flows onto collapsed objects have been widely using them over the years. This approach introduces conceptual and practical complications in places where a smooth solution should be guaranteed, i.e., at the gravitational radius. In the present paper, we propose an alternative way of solving the general relativistic hydrodynamic equations in background (fixed) black hole spacetimes. We identify classes of coordinates in which the (possibly rotating) black hole metric is free of coordinate singularities at the horizon, independent of time, and admits a spacelike decomposition. In the spherically symmetric, non-rotating case, we re-derive exact solutions for dust and perfect fluid accretion in Eddington-Finkelstein coordinates, and compare with numerical hydrodynamic integrations. We perform representative axisymmetric computations. These demonstrations suggest that the use of those coordinate systems carries significant improvements over the standard approach, especially for higher dimensional studies.
Relativistic Hydrodynamics around Black Holes and Horizon Adapted Coordinate Systems
We derive inflation from M-theory on S^1/Z_2 via the non-perturbative dynamics of N M5-branes. The open membrane instanton interactions between the M5-branes give rise to exponential potentials which are too steep for inflation individually but lead to inflation when combined together. The resulting type of inflation, known as assisted inflation, facilitates considerably the requirement of having all moduli, except the inflaton, stabilized at the beginning of inflation. During inflation the distances between the M5-branes, which correspond to the inflatons, grow until they reach the size of the S^1/Z_2 orbifold. At this stage the M5-branes will reheat the universe by dissolving into the boundaries through small instanton transitions. Further flux and non-perturbative contributions become important at this late stage, bringing inflation to an end and stabilizing the moduli. We find that with moderate values for N, one obtains both a sufficient amount of e-foldings and the right size for the spectral index.
M-Theory Inflation from Multi M5-Brane Dynamics
We address the problem of ambiguity of a function determined by an asymptotic perturbation expansion. Using a modified form of the Watson lemma recently proved elsewhere, we discuss a large class of functions determined by the same asymptotic power expansion and represented by various forms of integrals of the Laplace-Borel type along a general contour in the Borel complex plane. Some remarks on possible applications in QCD are made.
Asymptotic power series of field correlators
We present a unified view of orientational ordering in phases I, II, and III of solid hydrogen. Phases II and III are both orientationally ordered, but the ordering objects in phase II are the angular momenta of rotating molecules, while in phase III they are the molecules themselves. This concept provides a quantitative explanation of the vibron softening, the libron and roton spectra, and the increase of the IR vibron oscillator strength in phase III. The temperature dependence of the effective charge parallels the frequency shifts of the IR and Raman vibrons. All three quantities are linear in the order parameter.
Quantum and Classical Orientational Ordering in Solid Hydrogen
The isospin-breaking and radiative decay widths of the positive-parity charm-strange mesons, $D^{*}_{s0}$ and $D_{s1}$, and their predicted bottom-strange counterparts, $B^{*}_{s0}$ and $B_{s1}$, as hadronic molecules are revisited. This is necessary, since the $B^{*}_{s0}$ and $B_{s1}$ masses used in Eur. Phys. J. A 50 (2014) 149 were too small, in conflict with heavy quark flavour symmetry. Furthermore, not all isospin-breaking contributions were considered. We here present a method to restore heavy quark flavour symmetry, correcting the masses of $B^{*}_{s0}$ and $B_{s1}$, and include the complete isospin-breaking contributions up to next-to-leading order. With this we provide updated hadronic decay widths for all of $D^{*}_{s0}$, $D_{s1}$, $B^{*}_{s0}$ and $B_{s1}$. Results for the partial widths of the radiative decays of $D_{s0}^*(2317)$ and $D_{s1}(2460)$ are also updated in light of the much more precisely measured $D^{*+}$ width. We find that $B_s\pi^0$ and $B_s\gamma$ are the preferred channels for searching for $B_{s0}^*$ and $B_{s1}$, respectively.
Update on strong and radiative decays of the $D_{s0}^{*}(2317)$ and $D_{s1}(2460)$ and their bottom cousins
In this paper we present a unified treatment for the ordinary differential equations under the Osgood and Sobolev type conditions, following Crippa and de Lellis's direct method. More precisely, we prove the existence, uniqueness and regularity of the DiPerna-Lions flow generated by a vector field which is "almost everywhere Osgood continuous".
A unified treatment of ODEs under Osgood and Sobolev type conditions
Schr\"odinger operators often display singularities at the origin, the Coulomb problem in atomic physics or the various matter coupling terms in the Friedmann-Robertson-Walker problem being prominent examples. For various applications, such as stationary perturbation theory or the Rayleigh-Ritz method, it would be desirable to have at one's disposal an explicit basis spanning a dense and invariant domain for such types of Schr\"odinger operators. Here we make the observation that not only can such a basis indeed be provided, but that in addition the relevant matrix elements and inner products can be computed analytically in closed form, thus providing the required data, e.g., for an analytical Gram-Schmidt orthonormalisation.
Properties of a smooth, dense, invariant domain for singular potential Schroedinger operators
The extraction of the strange quark parton distribution function (PDF) poses a long-standing puzzle. Measurements from neutrino-nucleus deep inelastic scattering (DIS) experiments suggest the strange quark is suppressed compared to the light sea quarks, while recent studies of W/Z boson production at the LHC imply a larger strange component at small x values. As the parton flavor determination in the proton depends on nuclear corrections, e.g. from heavy-target DIS, LHC heavy ion measurements can provide a distinct perspective to help clarify this situation. In this investigation we extend the nCTEQ15 nPDFs to study the impact of the LHC proton-lead W/Z production data on both the flavor differentiation and nuclear corrections. This complementary data set provides new insights on both the LHC W/Z proton analyses and the neutrino-nucleus DIS data. We identify these new nPDFs as nCTEQ15WZ. Our calculations are performed using a new implementation of the nCTEQ code (nCTEQ++) based on C++ which enables us to easily interface to external programs such as HOPPET, APPLgrid and MCFM. Our results indicate that, as suggested by the proton data, the small x nuclear strange sea appears larger than previously expected, even when the normalization of the W/Z data is accommodated in the fit. Extending the nCTEQ15 analysis to include LHC W/Z data represents an important step as we advance toward the next generation of nPDFs.
Impact of LHC vector boson production in heavy ion collisions on strange PDFs
For three decades binary decision diagrams, a data structure efficiently representing Boolean functions, have been widely used in many distinct contexts like model verification, machine learning, cryptography and also resolution of combinatorial problems. The most famous variant, called reduced ordered binary decision diagram (ROBDD for short), can be viewed as the result of a compaction procedure on the full decision tree. A useful property is that once an order over the Boolean variables is fixed, each Boolean function is represented by exactly one ROBDD. In this paper we aim at computing the exact distribution of the Boolean functions in $k$ variables according to the ROBDD size, where the ROBDD size is equal to the number of decision nodes of the underlying directed acyclic graph (DAG for short) structure. Recall that the number of Boolean functions with $k$ variables is equal to $2^{2^k}$, which grows doubly exponentially with respect to the number of variables. The maximal size of a ROBDD with $k$ variables is $M_k \approx 2^k / k$. Apart from the natural combinatorial explosion observed, another difficulty for computing the distribution according to size is to take into account dependencies within the DAG structure of ROBDDs. In this paper, we develop the first polynomial algorithm to derive the distribution of Boolean functions over $k$ variables with respect to ROBDD size denoted by $n$. The algorithm computes the (enumerative) generating function of ROBDDs with $k$ variables up to size $n$. It performs $O(k n^4)$ arithmetical operations on integers and necessitates storing $O((k+n) n^2)$ integers with bit length $O(n\log n)$. Our new approach relies on a decomposition of ROBDDs layer by layer and on an inclusion-exclusion argument.
An iterative approach for counting reduced ordered binary decision diagrams
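The reduction rules and size statistics discussed in the abstract above can be checked by brute force at a toy scale: for $k=2$ one can enumerate all $2^{2^k}=16$ Boolean functions, build each ROBDD with the standard elimination and sharing rules, and tally decision-node counts. This is a naive sketch for intuition, not the paper's polynomial counting algorithm.

```python
from collections import Counter

def robdd_size(tt):
    """Decision-node count of the ROBDD of a Boolean function given as a
    truth table (sequence of 2**k bits; the first variable is the high bit)."""
    unique = {}  # unique table: (level, low-child, high-child) -> node

    def build(sub):
        if all(b == 0 for b in sub):
            return 'T0'
        if all(b == 1 for b in sub):
            return 'T1'
        half = len(sub) // 2
        lo, hi = build(sub[:half]), build(sub[half:])
        if lo == hi:                      # elimination rule: skip redundant test
            return lo
        key = (len(sub), lo, hi)          # sharing rule via the unique table
        return unique.setdefault(key, key)

    build(tuple(tt))
    return len(unique)

k = 2
dist = Counter(
    robdd_size([(f >> i) & 1 for i in range(2 ** k)])
    for f in range(2 ** 2 ** k)
)
print(dict(sorted(dist.items())))   # {0: 2, 1: 4, 2: 8, 3: 2}
```

The maximal size $3$ matches $M_2$: the two constants have size 0, the four literals size 1, and only XOR/XNOR need the full three nodes.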
The coherent dynamics of bubble clusters in liquid are of fundamental and industrial importance yet remain elusive due to the complex interactions of disordered bubble oscillations. Here we introduce and demonstrate unsupervised learning of the coherent physics by combining theory and principal component analysis. From data, the method extracts and quantifies coherent dynamical features based on their energy. We analyze simulation data sets of disordered clusters under harmonic excitation. Results suggest that the coherence is lowered by polydispersity and nonlinearity, but in cavitating regimes the underlying correlations can be isolated in a single coherent mode characterized by mean-field interactions, regardless of the degree of disorder. Our study provides a valuable tool and guidance for future studies on cavitation and nucleation in theory, simulation, and experiments.
Regressing bubble cluster dynamics as a disordered many-body system
In this paper, we prove global second derivative estimates for solutions of the Dirichlet problem for the Monge-Amp\`ere equation when the inhomogeneous term is only assumed to be H\"older continuous. As a consequence of our approach, we also establish the existence and uniqueness of globally smooth solutions to the second boundary value problem for the affine maximal surface equation and affine mean curvature equation.
Boundary regularity for the Monge-Amp\`ere and affine maximal surface equations
For the Vlasov-Poisson equation with random uncertain initial data, we prove that the Landau damping solution given by the deterministic counterpart (Caglioti and Maffei, {\it J. Stat. Phys.}, 92:301-323, 1998) depends smoothly on the random variable if the time asymptotic profile does, under the smoothness and smallness assumptions similar to the deterministic case. The main idea is to generalize the deterministic contraction argument to more complicated function spaces to estimate derivatives in space, velocity and random variables. This result suggests that the random space regularity can persist in long-time even in time-reversible nonlinear kinetic equations.
A study of Landau damping with random initial inputs
After the considerable upheaval caused by COVID-19 and the first telework measures, many management issues became apparent and some questions quickly arose, especially about the efficiency of employees' work and the conditions for successful telework adoption. This study focuses on the following questions: what is the new interest in telework for employees, and what are the potential reasons for it? How able do employees now feel to do their work remotely, and why? To answer these questions, we conducted a survey over several weeks, involving employees from different industries (N=170), in order to collect their experience, skills and motivations for teleworking. The results show that adoption of telework is real for the respondents. Those who have experienced it show strong motivation and a real capacity to use it, with almost no technological barriers. The theoretical model that we used, based on Technology Acceptance Models (TAM), highlighted an important new factor of telework adoption: time saved by not commuting. The study points out that adoption of telework could be sustainable for both public and private organizations, and this requires critical examination and discussion.
What motivates people to telework? Exploratory study in a post-confinement context
We study extended infection fronts advancing over a spatially uniform susceptible population by numerically solving a diffusive Kermack-McKendrick SIR model with a dichotomous, spatially random transmission rate in two dimensions. We find non-trivial dynamic critical behavior in the mean velocity, in the shape, and in the rough geometry of the displacement field of the infective front as the disorder approaches a threshold value for spatial spreading of the infection.
Rough infection fronts in a random medium
This survey investigates the truths and deficiencies of the prevalent philosophy about Uncertainty Relations (UR) and Quantum Measurements (QMS). The respective philosophy, known to be eclipsed by unfinished controversies, is revealed to be grounded on six basic precepts. But one finds that all the respective precepts are discredited by insurmountable deficiencies. So, in regard to UR, the alluded philosophy reveals itself to be an unjustified mythology. Then UR appear either as short-lived historical conventions or as simple and limited mathematical formulas, without any essential significance for physics. Such a finding reinforces Dirac's prediction that UR "in their present form will not survive in the physics of the future". The noted facets of UR motivate reconsideration of the associated debates on QMS. Mainly, one reveals that, properly speaking, UR have no essential connection with genuine descriptions of QMS. For such descriptions it is necessary that, mathematically, the quantum observables be considered as random variables. Measuring scenarios with a single sampling, such as wave function collapse or Schr\"odinger's cat thought experiment, are revealed to be useless exercises. We propose to describe QMS as transmission processes for stochastic data. Note that the above-announced revaluation of UR and QMS philosophy does not disturb in any way the practical framework of usual quantum mechanics.
A survey on uncertainty relations and quantum measurements
Real-world time-series datasets often violate the assumptions of standard supervised learning for forecasting -- their distributions evolve over time, rendering the conventional training and model selection procedures suboptimal. In this paper, we propose a novel method, Self-Adaptive Forecasting (SAF), to modify the training of time-series forecasting models to improve their performance on forecasting tasks with such non-stationary time-series data. SAF integrates a self-adaptation stage prior to forecasting based on `backcasting', i.e. predicting masked inputs backward in time. This is a form of test-time training that creates a self-supervised learning problem on test samples before performing the prediction task. In this way, our method enables efficient adaptation of encoded representations to evolving distributions, leading to superior generalization. SAF can be integrated with any canonical encoder-decoder based time-series architecture such as recurrent neural networks or attention-based architectures. On synthetic and real-world datasets in domains where time-series data are known to be notoriously non-stationary, such as healthcare and finance, we demonstrate a significant benefit of SAF in improving forecasting accuracy.
Self-Adaptive Forecasting for Improved Deep Learning on Non-Stationary Time-Series
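The backcasting idea in the abstract above can be caricatured on a toy drifting AR(1) series: before each test-time forecast, take a few gradient steps on a self-supervised loss computed over the time-reversed recent window, then predict. This is a minimal sketch of the self-adaptation stage only, not the authors' encoder-decoder implementation; the data, window length, and learning rate are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up AR(1) series whose coefficient drifts over time (non-stationary).
n = 600
phi = np.linspace(0.3, 0.9, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + 0.1 * rng.standard_normal()

# Fit one frozen AR(1) coefficient on the first half by least squares.
train = x[:300]
w0 = np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1])

def adapt(w, window, lr=2.0, steps=10):
    """Self-adaptation: gradient steps on a backcasting-style loss over the
    time-reversed recent window, performed before each forecast."""
    rev = window[::-1]
    for _ in range(steps):
        grad = 2.0 * np.mean((w * rev[:-1] - rev[1:]) * rev[:-1])
        w = w - lr * grad
    return w

err_static = err_saf = 0.0
for t in range(320, n):
    window = x[t - 20:t]
    err_static += (w0 * x[t - 1] - x[t]) ** 2              # frozen model
    err_saf += (adapt(w0, window) * x[t - 1] - x[t]) ** 2  # adapted model
print(err_saf < err_static)  # adaptation should help under drift
```

Because the test region has drifted away from the training distribution, the adapted coefficient tracks the local dynamics and accumulates a smaller squared forecast error than the frozen one.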
We study the radio--FIR correlation between the nonthermal (synchrotron) radio continuum emission at \lambda 90 cm (333 MHz) and the far infrared emission due to cool (~20 K) dust at \lambda 70\mu m in spatially resolved normal galaxies at scales of ~1 kpc. The slope of the radio--FIR correlation significantly differs between the arm and interarm regions. However, this change is not evident at a lower wavelength of \lambda 20 cm (1.4 GHz). We find the slope of the correlation in the arm to be 0.8 \pm 0.12 and we use this to determine the coupling between equipartition magnetic field (B_{eq}) and gas density (\rho_{gas}) as B_{eq} \propto \rho_{gas}^{0.51 \pm 0.12}. This is close to what is predicted by MHD simulations of turbulent ISM, provided the same region produces both the radio and far infrared emission. We argue that at 1 kpc scales this condition is satisfied for radio emission at 1.4 GHz and may not be satisfied at 333 MHz. Change of slope observed in the interarm region could be caused by propagation of low energy (~1.5 GeV) and long lived (~ 10^8 yr) cosmic ray electrons at 333 MHz.
Low frequency radio-FIR correlation in normal galaxies at ~1 kpc scales
We give a new method for solving a problem originally solved about 20 years ago by Sinnott and Kubert, namely that of computing the cohomology of the universal ordinary distribution with respect to the action of the two-element group generated by complex conjugation. We develop the method in sufficient generality so as to be able to calculate analogous cohomology groups in the function field setting which have not previously been calculated. In particular, we are able to confirm a conjecture of L.~S.~Yin conditional on which Yin was able to obtain results on unit indices generalizing those of Sinnott in the classical cyclotomic case and Galovich-Rosen in the Carlitz cyclotomic case. The Farrell-Tate cohomology theory for groups of finite virtual cohomological dimension plays a key role in our proof of Yin's conjecture. The methods developed in the paper have recently been used by P.~Das to illuminate the structure of the Galois group of the algebraic extension of the rational number field generated by the roots of unity and the algebraic $\Gamma$-monomials. This paper has appeared as Contemp. Math. 224 (1999) 1-27.
A double complex for computing the sign-cohomology of the universal ordinary distribution
We design and analyze multigrid methods for the saddle point problems resulting from Raviart-Thomas-N\'ed\'elec mixed finite element methods (of order at least 1) for the Darcy system in porous media flow. Uniform convergence of the $W$-cycle algorithm in a nonstandard energy norm is established. Extensions to general second order elliptic problems are also addressed.
Multigrid Methods for Saddle Point Problems: Darcy Systems
We answer a question of Brass about vertex degrees in unit distance graphs of finitely generated additive subgroups of $\mathbb{R}^2$.
Unit distance graphs and algebraic integers
Correlation measurements imply that anisotropic flow in nuclear collisions includes a novel triangular component along with the more familiar elliptic-flow contribution. Triangular flow has been attributed to event-wise fluctuations in the initial shape of the collision volume. We ask two questions: 1) How do these shape fluctuations impact other event-by-event observables? 2) Can we disentangle fundamental information on the early time fluctuations from the complex flow that results? We study correlation and fluctuation observables in a framework in which flux tubes in an early Glasma stage later produce hydrodynamic flow. Calculated multiplicity and transverse momentum fluctuations are in excellent agreement with data from 62.4 GeV Au+Au up to 2.76 TeV Pb+Pb.
Fluctuation Probes of Early-Time Correlations in Nuclear Collisions
This paper describes the generation of initial conditions for numerical simulations in cosmology with multiple levels of resolution, or multiscale simulations. We present the theory of adaptive mesh refinement of Gaussian random fields followed by the implementation and testing of a computer code package performing this refinement called GRAFIC2. This package is available to the computational cosmology community at http://arcturus.mit.edu/grafic/ or by email from the author.
Multiscale Gaussian Random Fields for Cosmological Simulations
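The base step that mesh-refinement codes of this kind build on — drawing a periodic Gaussian random field with a prescribed power spectrum by filtering white noise in Fourier space — can be sketched in a few lines. This is a generic 2-D illustration, not GRAFIC2 itself; the power-law spectral index is an arbitrary choice.

```python
import numpy as np

def gaussian_random_field(n, index=-2.0, seed=0):
    """Periodic 2-D Gaussian random field with power spectrum P(k) ~ k**index,
    obtained by filtering white noise in Fourier space."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                     # avoid division by zero at k = 0
    amp = k ** (index / 2.0)          # sqrt of the power spectrum
    amp[0, 0] = 0.0                   # kill the DC mode: zero-mean field
    # Real white noise has a Hermitian spectrum, so the filtered field
    # comes back exactly real after the inverse transform.
    noise = rng.standard_normal((n, n))
    return np.fft.ifft2(np.fft.fft2(noise) * amp).real

f = gaussian_random_field(128)
print(f.shape, abs(f.mean()) < 1e-10)
```

Multiscale initial conditions then amount to redoing this draw on nested subgrids while keeping the large-scale modes consistent, which is the refinement problem the paper addresses.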
In this paper an equation of state of neutron star matter which includes strange baryons is obtained in the framework of the Zimanyi and Moszkowski (ZM) model. We concentrate on the effects of the isospin dependence of the equation of state, constructing, for appropriate choices of parameters, a hyperon star model. Numerous neutron star models show that the appearance of hyperons is connected with the increasing density in neutron star interiors. Various studies have indicated that the inclusion of the delta meson mainly affects the symmetry energy and, through this, the chemical composition of a neutron star. As the effective nucleon mass contributes to the hadron chemical potentials, it alters the chemical composition of the star. As a result, the obtained stellar model not only excludes a large population of hadrons but also does not significantly reduce the lepton content of the star interior.
The extended, relativistic hyperon star model
There are at least two ways to deduce Einstein's field equations from the principle of maximum force $c^4/4G$ or from the equivalent principle of maximum power $c^5/4G$. Tests in gravitational wave astronomy, cosmology, and numerical gravitation confirm the two principles. Apparent paradoxes about the limits can all be resolved. Several related bounds arise. The limits illuminate the beauty, consistency and simplicity of general relativity from an unusual perspective.
From maximum force to the field equations of general relativity -- and implications
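For reference, the two conjectured limits quoted above are straightforward to evaluate numerically with standard SI values of the constants:

```python
# Numerical values of the conjectured limits (CODATA-style SI constants).
c = 299_792_458.0          # speed of light, m/s (exact)
G = 6.674_30e-11           # Newton's constant, m^3 kg^-1 s^-2

F_max = c ** 4 / (4 * G)   # maximum force, N
P_max = c ** 5 / (4 * G)   # maximum power, W  (= F_max * c)
print(f"{F_max:.3e} N, {P_max:.3e} W")
```

This gives a maximum force of about $3.0\times 10^{43}$ N and a maximum power of about $9.1\times 10^{51}$ W.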
Data augmentation has emerged as a powerful technique for improving the performance of deep neural networks and led to state-of-the-art results in computer vision. However, state-of-the-art data augmentation strongly distorts training images, leading to a disparity between examples seen during training and inference. In this work, we explore a recently proposed training paradigm in order to correct for this disparity: using an auxiliary BatchNorm for the potentially out-of-distribution, strongly augmented images. Our experiments then focus on how to define the BatchNorm parameters that are used at evaluation. To eliminate the train-test disparity, we experiment with using the batch statistics defined by clean training images only, yet surprisingly find that this does not yield improvements in model performance. Instead, we investigate using BatchNorm parameters defined by weak augmentations and find that this method significantly improves the performance of common image classification benchmarks such as CIFAR-10, CIFAR-100, and ImageNet. We then explore a fundamental trade-off between accuracy and robustness coming from using different BatchNorm parameters, providing greater insight into the benefits of data augmentation on model performance.
Does Data Augmentation Benefit from Split BatchNorms
We model the population characteristics of the sample of millisecond pulsars within a distance of 1.5 kpc. We find that for a braking index n=3, the birth magnetic field distribution of the neutron stars as they switch on as radio MSPs can be represented by a Gaussian with mean $\log B(G)= 8.1$ and $\sigma_{\log B}=0.4$, and their birth spin period by a Gaussian with mean $P_0=4$ ms and $\sigma_{P_0}=1.3$ ms. Our study, which takes into consideration acceleration effects on the observed spin-down rate, shows that most MSPs are born with periods that are close to the currently observed values and with average characteristic ages typically larger by a factor 1.5 compared to the true age. The Galactic birth rate of the MSPs is deduced to be $\gtrsim 3.2 \times 10^{-6}$ yr$^{-1}$, near the upper end of previous estimates and larger than the semi-empirical birth rate $\sim 10^{-7}$ yr$^{-1}$ of the LMXBs. The mean birth spin period deduced by us for the radio MSPs is a factor 2 higher than the mean spin period observed for the accretion and nuclear powered X-ray pulsars, although this discrepancy can be resolved if we use a braking index $n=5$, the value appropriate to spin down caused by angular momentum losses by gravitational radiation or magnetic multipolar radiation. We discuss the arguments for and against the hypothesis that accretion induced collapse may constitute the main route to the formation of the MSPs, pointing out that in the AIC scenario the low magnetic fields of the MSPs may simply reflect the field distribution in isolated magnetic white dwarfs, which has recently been shown to be bi-modal with a dominant component that is likely to peak at fields below $10^3$ G, which would scale to neutron star fields below $10^9$ G.
The birth properties of Galactic millisecond radio pulsars
The representation space of pretrained Language Models (LMs) encodes rich information about words and their relationships (e.g., similarity, hypernymy, polysemy) as well as abstract semantic notions (e.g., intensity). In this paper, we demonstrate that lexical stylistic notions such as complexity, formality, and figurativeness, can also be identified in this space. We show that it is possible to derive a vector representation for each of these stylistic notions from only a small number of seed pairs. Using these vectors, we can characterize new texts in terms of these dimensions by performing simple calculations in the corresponding embedding space. We conduct experiments on five datasets and find that static embeddings encode these features more accurately at the level of words and phrases, whereas contextualized LMs perform better on sentences. The lower performance of contextualized representations at the word level is partially attributable to the anisotropy of their vector space, which can be corrected to some extent using techniques like standardization.
Representation Of Lexical Stylistic Features In Language Models' Embedding Space
Let $A,B,C,D$ be rational numbers such that $ABC \neq 0$, and let $n_1>n_2>n_3>0$ be positive integers. We solve the equation $$ Ax^{n_1}+Bx^{n_2}+Cx^{n_3}+D = f(g(x)),$$ in $f,g \in \mathbb{Q}[x]$. In the sequel we use the Bilu-Tichy method to prove finiteness of the integral solutions of the equations $$ Ax^{n_1}+Bx^{n_2}+Cx^{n_3}+D = Ey^{m_1}+Fy^{m_2}+Gy^{m_3}+H, $$ where $A,B,C,D,E,F,G,H$ are rational numbers with $ABCEFG \neq 0$, $n_1>n_2>n_3>0$, $m_1>m_2>m_3>0$, $\gcd(n_1,n_2,n_3) = \gcd(m_1,m_2,m_3)=1$ and $n_1,m_1 \geq 9$; and of the equation $$ A_1x^{n_1}+A_2x^{n_2}+\ldots+A_l x^{n_l} + A_{l+1} = Ey^{m_1}+Fy^{m_2}+Gy^{m_3}, $$ where $l \geq 4$ is a fixed integer, $A_1,\ldots,A_{l+1},E,F,G$ are non-zero rational numbers, except for possibly $A_{l+1}$, $n_1>n_2>\ldots > n_l>0$, $m_1>m_2>m_3>0$ are positive integers such that $\gcd(n_1,n_2, \ldots, n_l) = \gcd(m_1,m_2,m_3)=1$, and $n_1 \geq 4$, $m_1 \geq 2l(l-1)$.
On decompositions of quadrinomials and related Diophantine equations
There is tremendous global enthusiasm for research, development, and deployment of autonomous vehicles (AVs), e.g., self-driving taxis and trucks from Waymo and Baidu. The current practice for testing AVs uses virtual tests, where AVs are tested in software simulations, since they offer a more efficient and safer alternative compared to field operational tests. Specifically, search-based approaches are used to find particularly critical situations. These approaches provide an opportunity to automatically generate tests; however, systematically creating valid and effective tests for AV software remains a major challenge. To address this challenge, we introduce scenoRITA, a test generation approach for AVs that uses evolutionary algorithms with (1) a novel gene representation that allows obstacles to be fully mutable, hence, resulting in more reported violations, (2) 5 test oracles to determine both safety and motion sickness-inducing violations, and (3) a novel technique to identify and eliminate duplicate tests. Our extensive evaluation shows that scenoRITA can produce effective driving scenarios that expose an ego car to safety critical situations. scenoRITA generated tests that resulted in a total of 1,026 unique violations, increasing the number of reported violations by 23.47% and 24.21% compared to random test generation and state-of-the-art partially-mutable test generation, respectively.
scenoRITA: Generating Less-Redundant, Safety-Critical and Motion Sickness-Inducing Scenarios for Autonomous Vehicles
The scattering phase shift of an electron transferred through a quantum dot is studied within a model Hamiltonian, accounting for both the electron--electron interaction in the dot and a finite temperature. It is shown that, unlike in an independent electron picture, this phase may exhibit a phase lapse of $ \pi $ {\em between } consecutive resonances under generic circumstances.
Electron Scattering Through a Quantum Dot: A Phase Lapse Mechanism
We present a new class of component-wise numerical schemes that are in the family of relaxation formulations, originally introduced by [S. Jin and Z. P. Xin, Comm. Pure Appl. Math., 48(1995), pp. 235-277]. The relaxation framework enables the construction of schemes that are free of nonlinear Riemann solvers and are independent of the underlying eigenstructure of the problem. The constant relaxation schemes proposed by Jin & Xin can however introduce strong numerical diffusion, especially when the maximum characteristic speeds are high compared to the average speeds in the domain. We propose a general class of variable relaxation formulations for multidimensional systems of conservation laws which utilizes estimates of local maximum and minimum speeds to arrive at more accurate relaxation schemes, irrespective of the contrast in maximum and average characteristic speeds. First and second order variable relaxation methods are presented for general nonlinear systems in one and two spatial dimensions, along with monotonicity and TVD (Total Variation Diminishing) properties for the 1D schemes. The effectiveness of the schemes is demonstrated on a test suite that includes Burgers' equation, the weakly hyperbolic Engquist-Runborg problem, as well as the weakly hyperbolic gas injection displacements that are governed by strong nonlinear coupling thus making them highly sensitive to numerical diffusion. In the latter examples the second order Jin-Xin scheme fails to capture the fronts reasonably, whereas both the first and second order variable relaxed schemes produce the displacement profiles sharply.
Variable relaxed schemes for multidimensional hyperbolic conservation laws
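The benefit of local over global speed estimates can be illustrated in first-order form: a relaxation scheme satisfying the sub-characteristic condition behaves, in the stiff limit, like a Lax-Friedrichs-type flux whose diffusion is set by the relaxation speed. The sketch below compares interface-local speeds (a Rusanov-type flux) against a single global speed on a smooth Burgers profile; it is a minimal caricature of the diffusion effect, not the paper's first- or second-order variable relaxation schemes.

```python
import numpy as np

def burgers_step(u, dt, dx, local=True):
    """One conservative step for u_t + (u^2/2)_x = 0 on a periodic grid,
    with a Lax-Friedrichs-type flux: local=True uses interface-local
    speeds (Rusanov, less diffusive), local=False one global speed."""
    f = 0.5 * u ** 2
    up = np.roll(u, -1)                         # right neighbour (periodic)
    fp = np.roll(f, -1)
    if local:
        a = np.maximum(np.abs(u), np.abs(up))   # local speed estimate
    else:
        a = np.full_like(u, np.abs(u).max())    # single global speed
    flux = 0.5 * (f + fp) - 0.5 * a * (up - u)  # flux at interface i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

n = 400
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u_loc = np.sin(2 * np.pi * x) + 1.5             # smooth initial profile
u_glo = u_loc.copy()
dt = 0.4 * dx / np.abs(u_loc).max()             # CFL-limited time step
for _ in range(200):
    u_loc = burgers_step(u_loc, dt, dx, local=True)
    u_glo = burgers_step(u_glo, dt, dx, local=False)

def tv(u):
    """Discrete total variation."""
    return np.abs(np.diff(u)).sum()

print(tv(u_loc) >= tv(u_glo))  # True: global speeds smear the profile more
```

Both variants are TVD, but the global-speed scheme applies the maximal diffusion everywhere, so it loses more variation; the local-speed scheme only pays that price where the characteristic speed is actually large.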
With the discovery of the Sagittarius dwarf spheroidal (Ibata et al. 1994), a galaxy caught in the process of merging with the Milky Way, the hunt for other such accretion events has become a very active field of astrophysical research. The identification of a stellar ring-like structure in Monoceros, spanning more than 100 degrees (Newberg et al. 2002), and the detection of an overdensity of stars in the direction of the constellation of Canis Major (CMa, Martin et al. 2004), apparently associated to the ring, has led to the widespread belief that a second galaxy being cannibalised by the Milky Way had been found. In this scenario, the overdensity would be the remaining core of the disrupted galaxy and the ring would be the tidal debris left behind. However, unlike the Sagittarius dwarf, which is well below the Galactic plane and whose orbit, and thus tidal tail, is nearly perpendicular to the plane of the Milky Way, the putative CMa galaxy and ring are nearly co-planar with the Galactic disk. This severely complicates the interpretation of observations. In this letter, we show that our new description of the Milky Way leads to a completely different picture. We argue that the Norma-Cygnus spiral arm defines a distant stellar ring crossing Monoceros and the overdensity is simply a projection effect of looking along the nearby local arm. Our perspective sheds new light on a very poorly known region, the third Galactic quadrant (3GQ), where CMa is located.
Spiral structure of the Third Galactic Quadrant and the solution to the Canis Major debate
In the past decade, complex networks of light emitters are proposed as novel platforms for photonic circuits and lab-on-chip active devices. Lasing networks made by connected multiple gain components and graphs of nanoscale random lasers (RLs) obtained from complex meshes of polymeric nanofibers are successful prototypes. However, in the reported research, mainly collective emission from a whole network of resonators is investigated, and only in a few cases, the emission from single points showing, although homogeneous and broad, spatial emission. In all cases, simultaneous activation of the miniaturized lasers is observed. Here, differently, we realize heterogeneous random lasers made of ribbon-like and highly porous fibers with evident RL action from separated micrometric domains that alternatively switch on and off by tuning the pumping light intensity. We visualize this novel effect by building for the first time replica symmetry breaking (RSB) maps of the emitting fibers with 2 {\mu}m spatial resolution. In addition, we calculate the spatial correlations of the laser regions showing clearly an average extension of 50 {\mu}m. The observed blinking effect is due to mode interaction along light guiding fibers and opens new avenues in the fabrication of flexible photonic networks with specific and adaptable activity.
Heterogeneous Random Laser with Switching Activity Visualized by Replica Symmetry Breaking Maps
With the discovery of now more than 500 exoplanets, we present a statistical analysis of the planetary orbital periods and their relationship to the rotation periods of their parent stars. We test whether the structure of planetary orbits, i.e. planetary angular momentum and orbital periods, is 'quantized' in integer or half-integer multiples with respect to the parent stars' rotation period. The Solar System is first shown to exhibit quantized planetary orbits that correlate with the Sun's rotation period. The analysis is then expanded over 443 exoplanets to statistically validate this quantization and its association with stellar rotation. The results imply that the exoplanetary orbital periods are highly correlated with the parent star's rotation periods and follow a discrete half-integer relationship with orbital ranks n=0.5, 1.0, 1.5, 2.0, 2.5, etc. The probability of obtaining these results by pure chance is p<0.024. We discuss various mechanisms that could justify this planetary quantization, such as the hybrid gravitational instability models of planet formation, along with possible physical mechanisms such as inner discs magnetospheric truncation, tidal dissipation, and resonance trapping. In conclusion, we statistically demonstrate that a quantized orbital structure should emerge naturally from the formation processes of planetary systems and that this orbital quantization is highly dependent on the parent stars' rotation periods.
Quantization of Planetary Systems and its Dependency on Stellar Rotation
DEtection TRansformer (DETR) started a trend that uses a group of learnable queries for unified visual perception. This work begins by applying this appealing paradigm to LiDAR-based point cloud segmentation and obtains a simple yet effective baseline. Although the naive adaptation obtains fair results, the instance segmentation performance is noticeably inferior to previous works. By diving into the details, we observe that instances in the sparse point clouds are relatively small to the whole scene and often have similar geometry but lack distinctive appearance for segmentation, which are rare in the image domain. Considering instances in 3D are more featured by their positional information, we emphasize their roles during the modeling and design a robust Mixed-parameterized Positional Embedding (MPE) to guide the segmentation process. It is embedded into backbone features and later guides the mask prediction and query update processes iteratively, leading to Position-Aware Segmentation (PA-Seg) and Masked Focal Attention (MFA). All these designs impel the queries to attend to specific regions and identify various instances. The method, named Position-guided Point cloud Panoptic segmentation transFormer (P3Former), outperforms previous state-of-the-art methods by 3.4% and 1.2% PQ on SemanticKITTI and nuScenes benchmark, respectively. The source code and models are available at https://github.com/SmartBot-PJLab/P3Former .
Position-Guided Point Cloud Panoptic Segmentation Transformer
The Set-Union Knapsack Problem (SUKP) and the Budgeted Maximum Coverage Problem (BMCP) are two closely related variants of the popular knapsack problem. Given a set of weighted elements and a set of items with nonnegative values, where each item covers several distinct elements, both problems aim to find a subset of items that maximizes an objective function while satisfying a knapsack capacity (budget) constraint. We propose an efficient and effective local search algorithm called E2LS for these two problems. To our knowledge, this is the first algorithm proposed for both of them. E2LS trades off the search region against search efficiency by applying a proposed novel operator, ADD$^*$, to traverse a refined search region. This trade-off mechanism allows E2LS to explore the solution space widely and quickly. The tabu search method is also applied in E2LS to help the algorithm escape from local optima. Extensive experiments on a total of 168 public instances of various scales demonstrate the excellent performance of the proposed algorithm on both the SUKP and the BMCP.
Efficient and Effective Local Search for the Set-Union Knapsack Problem and Budgeted Maximum Coverage Problem
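The abstract above defines the SUKP objective (maximize the total value of chosen items subject to the total weight of the union of covered elements staying within capacity) without giving algorithmic details of E2LS. As an illustrative aside, the problem structure can be made concrete with a simple greedy baseline; this sketch is not the E2LS algorithm, and the data layout (items as value/element-set pairs, a weight dictionary) is an assumption for illustration only.

```python
def greedy_sukp(items, weights, capacity):
    """Greedy baseline for the Set-Union Knapsack Problem.

    items:    list of (value, set_of_elements) pairs
    weights:  dict mapping each element to its nonnegative weight
    capacity: knapsack capacity constraining the weight of the union
              of all elements covered by the chosen items
    """
    covered = set()          # elements already paid for
    total_w, total_v = 0.0, 0.0
    chosen = []
    remaining = list(range(len(items)))
    while True:
        best, best_ratio = None, 0.0
        for i in remaining:
            value, elems = items[i]
            # marginal weight: only elements not yet covered cost anything
            mw = sum(weights[e] for e in elems - covered)
            if total_w + mw > capacity:
                continue
            ratio = value / mw if mw > 0 else float("inf")
            if best is None or ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:     # no feasible item left
            break
        value, elems = items[best]
        total_w += sum(weights[e] for e in elems - covered)
        covered |= elems
        total_v += value
        chosen.append(best)
        remaining.remove(best)
    return chosen, total_v, total_w
```

Note how the union structure makes the marginal weight of an item depend on what is already covered; this interaction between items is exactly what distinguishes the SUKP from the plain knapsack problem and what a local search like E2LS must exploit.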
A toy model of publication and citation processes is proposed. The model shows that the role of randomness in these processes is essential and cannot be ignored. Some other aspects of rating scientific publications are also discussed.
One look at the rating of scientific publications and corresponding toy-model
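The abstract above does not specify the form of the toy model, so the following is purely a hypothetical illustration of how randomness can enter such a citation model: each new paper cites earlier ones either uniformly at random or in proportion to their current citation counts (preferential attachment). All names and parameters here are invented for the sketch.

```python
import random

def simulate_citations(n_papers=1000, cites_per_paper=5, p_random=0.5, seed=1):
    """Hypothetical toy model of a citation process.

    Papers arrive one at a time; paper t makes min(cites_per_paper, t)
    citations to earlier papers. Each citation goes to a uniformly random
    earlier paper with probability p_random, otherwise to a paper chosen
    proportionally to its current citation count (rich-get-richer).
    Repeated citations of the same target are allowed for simplicity.
    """
    rng = random.Random(seed)
    citations = [0] * n_papers
    for t in range(1, n_papers):
        pool = list(range(t))
        for _ in range(min(cites_per_paper, t)):
            if rng.random() < p_random or sum(citations[:t]) == 0:
                target = rng.randrange(t)
            else:
                target = rng.choices(pool,
                                     weights=[citations[i] for i in pool])[0]
            citations[target] += 1
    return citations
```

Varying `p_random` between 0 and 1 interpolates between a heavily skewed, luck-amplifying citation distribution and a flat one, which is one simple way to probe how much of a paper's citation count is attributable to chance.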
The existing estimate of the upper critical dimension of the Abelian Sandpile Model is based on a qualitative consideration of avalanches as self-avoiding branching processes. We find an exact representation of an avalanche as a sequence of spanning sub-trees of two-component spanning trees. Using the equivalence between chemical paths on the spanning tree and loop-erased random walks, we reduce the problem to the determination of the fractal dimension of spanning sub-trees. The upper critical dimension $d_u=4$ then follows from Lawler's theorems for intersection probabilities of random walks and loop-erased random walks.
The Upper Critical Dimension of the Abelian Sandpile Model
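The argument above hinges on loop-erased random walks (LERW). As an illustrative aside (not part of the paper's construction), the standard chronological loop-erasure procedure on the hypercubic lattice can be sketched as follows; the self-avoiding path it produces is the object whose fractal dimension enters such arguments.

```python
import random

def loop_erased_random_walk(steps, dim=2, seed=0):
    """Run a simple random walk on Z^dim, erasing loops chronologically.

    Whenever the walk revisits a site already on the current path, the
    entire loop since the first visit is erased. The returned path is
    therefore self-avoiding.
    """
    rng = random.Random(seed)
    path = [(0,) * dim]
    index = {path[0]: 0}          # site -> position on current path
    for _ in range(steps):
        pos = path[-1]
        d = rng.randrange(dim)
        s = rng.choice((-1, 1))
        nxt = tuple(p + (s if i == d else 0) for i, p in enumerate(pos))
        if nxt in index:
            # loop closed: truncate back to the first visit of nxt
            k = index[nxt]
            for p in path[k + 1:]:
                del index[p]
            path = path[:k + 1]
        else:
            index[nxt] = len(path)
            path.append(nxt)
    return path
```

Passing `dim=4` lets one play with the walk at the claimed upper critical dimension, where random walks and loop-erased random walks generically cease to intersect.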
A self-similar spherical collapse model predicts a dark matter (DM) splashback and accretion shock in the outskirts of galaxy clusters, but it misses a key ingredient of structure formation: processes associated with mergers. To fill this gap, we perform simulations of merging self-similar clusters and investigate their DM and gas evolution in an idealized cosmological context. Our simulations show that the cluster rapidly contracts during the major merger and the splashback radius $r_{\rm sp}$ decreases, approaching the virial radius $r_{\rm vir}$. While $r_{\rm sp}$ correlates with a smooth mass accretion rate (MAR) parameter $\Gamma_{\rm s}$ in the self-similar model, our simulations show a similar trend with the total MAR $\Gamma_{\rm vir}$ (which includes both mergers and $\Gamma_{\rm s}$). The scatter of the $\Gamma_{\rm vir}-r_{\rm sp}/r_{\rm vir}$ relation indicates a generally low $\Gamma_{\rm s}\sim1$ in clusters in cosmological simulations. In contrast to the DM, the hot gaseous atmospheres significantly expand owing to the merger-accelerated (MA-) shocks formed when the runaway merger shocks overtake the outer accretion shock. After a major merger, the MA-shock radius is larger than $r_{\rm sp}$ by a factor of up to $\sim1.7$ for $\Gamma_{\rm s}\lesssim1$ and is $\sim r_{\rm sp}$ for $\Gamma_{\rm s}\gtrsim3$. This implies that (1) mergers could easily generate the MA-shock-splashback offset measured in cosmological simulations, and (2) the smooth MAR is small in regions away from filaments, where MA-shocks reside. We further discuss various shocks and contact discontinuities formed at different epochs of the merger, the ram pressure stripping in cluster outskirts, and the dependence of member galaxies' splashback feature on their orbital parameters.
Evolution of Splashback Boundaries and Gaseous Outskirts: Insights from Mergers of Self-similar Galaxy Clusters
We have successfully grown thin films of one of the higher manganese silicides, Mn4Si7, on silicon (100) substrates using ultra-high vacuum deposition with a base pressure of 1×10⁻⁹ torr. The thickness of the films was varied from 65 to 100 nm. These films exhibit a tetragonal crystal structure and display paramagnetic behavior, as predicted for the stoichiometric Mn4Si7 system. They have a resistivity of 3.321×10⁻⁵ Ω·m at room temperature and show a semi-metallic nature.
Ultra-high Vacuum Deposition of Higher Manganese Silicide Mn4Si7 Thin Films
$\lambda$-Scale is an enrichment of lambda calculus which is adapted to emergent algebras. It can therefore be used in metric spaces with dilations.
$\lambda$-Scale, a lambda calculus for spaces with dilations
Inter-oxygen repulsion opposes compression, minimizing the compressibility. Polarization enlarges the bandgap and the dielectric permittivity of water ice by raising the nonbonding states above the Fermi energy. This progress evidences the efficiency and essentiality of the coupled hydrogen-bonding and electronic dynamics in revealing the core physics and chemistry of water ice, which could extend to other molecular crystals such as energetic materials.
Water Ice Compression: Principles and Applications
This paper develops a detailed mathematical statistical theory of a new class of cross-validation techniques for local linear kernel hazards and their multiplicative bias corrections. The new class of cross-validation combines principles of local information with recent advances in indirect cross-validation. A few applications of cross-validating multiplicative kernel hazard estimation do exist in the literature. However, this paper introduces the detailed mathematical statistical theory and small-sample studies, and further upgrades them to our new class of best one-sided cross-validation. Best one-sided cross-validation turns out to perform excellently in practical illustrations, in small samples, and in its mathematical statistical theory.
Multiplicative local linear hazard estimation and best one-sided cross-validation
Insertion and deletion (insdel, for short) errors are synchronization errors in communication systems caused by the loss of positional information in the message. Reed-Solomon codes have gained much interest due to their encoding simplicity, well-structuredness, and list-decoding capability in the classical setting. This interest also translates to the insdel metric setting, as the Guruswami-Sudan decoding algorithm can be utilized to provide a deletion-correcting algorithm in the insdel metric. Nevertheless, there have been few studies on the insdel error-correcting capability of Reed-Solomon codes. Our main contributions in this paper are explicit constructions of two families of 2-dimensional Reed-Solomon codes with insdel error-correcting capabilities asymptotically reaching those provided by the Singleton bound. The first construction gives a family of Reed-Solomon codes with insdel error-correcting capability asymptotic to its length. The second construction provides a family of Reed-Solomon codes with an exact insdel error-correcting capability up to its length. Both our constructions improve upon the previously known construction of 2-dimensional Reed-Solomon codes, whose insdel error-correcting capability is only logarithmic in the code length.
Explicit Constructions of Two-Dimensional Reed-Solomon Codes in High Insertion and Deletion Noise Regime
Non-destructive and rapid evaluation of graphene directly on the growth substrate (Cu foils) by dark field (DF) optical microscopy is demonstrated. Without any additional treatment, graphene on Cu foils with various coverages can be quickly identified by DF imaging immediately after chemical vapor deposition growth with contrast comparable to scanning electron microscopy. The improved contrast of DF imaging compared to bright field optical imaging was found to be due to Rayleigh scattering of light by the copper steps beneath graphene. Indeed, graphene adlayers are readily distinguished, due to the different height of copper steps beneath graphene regions of different thickness.
Non-destructive and Rapid Evaluation of CVD Graphene by Dark Field Optical Microscopy