We comment on and discuss the findings and conclusions of a recent theoretical study of the diffraction of He atoms from a monolayer of Xe atoms adsorbed on the graphite (0001) surface [Khokonov et al., Surf. Sci. 496 (2002) L13]. By revisiting the problem we demonstrate that all the main conclusions of Khokonov et al. that pertain to the studied system are at variance with the available experimental and theoretical evidence and with the results of the multiple scattering calculations presented in this comment.
Diffraction of He atoms from Xe monolayer adsorbed on the graphite (0001) revisited: The importance of multiple scattering processes
Pairing between spinless fermions can generate Majorana fermion excitations that exhibit intriguing properties arising from non-local correlations. But simple models indicate that non-local correlation between Majorana fermions becomes unstable at non-zero temperatures. We address this issue by showing that anisotropic interactions between dipolar fermions in optical lattices can be used to significantly enhance thermal stability. We construct a model of oriented dipolar fermions in a square optical lattice. We find that domains established by strong interactions exhibit enhanced correlation between Majorana fermions over large distances and long times even at finite temperatures, suitable for stable redundancy encoding of quantum information. Our approach can be generalized to a variety of configurations and other systems, such as quantum wire arrays.
Enhancing the Thermal Stability of Majorana Fermions with Redundancy Using Dipoles in Optical Lattices
Chinese dynastic histories form a large continuous linguistic space of approximately 2000 years, from the 3rd century BCE to the 18th century CE. The histories are documented in Classical (Literary) Chinese in a corpus of over 20 million characters, suitable for the computational analysis of historical lexicon and semantic change. However, there is no freely available open-source corpus of these histories, making Classical Chinese low-resource. This project introduces a new open-source corpus of twenty-four dynastic histories covered by a Creative Commons license. An original list of Classical Chinese gender-specific terms was developed as a case study for analyzing the historical linguistic use of male and female terms. The study demonstrates considerable stability in the usage of these terms, with a dominance of male terms. Exploration of word meanings uses keyword analysis of focus corpora created for gender-specific terms. This method yields meaningful semantic representations that can be used for future studies of diachronic semantics.
Corpus of Chinese Dynastic Histories: Gender Analysis over Two Millennia
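As a minimal illustration of the frequency analysis described above, the following sketch counts occurrences of male and female terms in a Classical Chinese string. The term lists here are tiny placeholders, not the paper's curated list, and single-character terms are assumed for simplicity.

```python
from collections import Counter

# Placeholder term lists; the paper developed an original, much larger list
# of Classical Chinese gender-specific terms.
male_terms = {"男", "夫", "父"}
female_terms = {"女", "妻", "母"}

def gender_counts(text):
    """Count male and female term occurrences (single-character terms)."""
    counts = Counter(ch for ch in text if ch in male_terms | female_terms)
    return (sum(counts[t] for t in male_terms),
            sum(counts[t] for t in female_terms))

print(gender_counts("父母俱存妻女安居"))  # (1, 3)
```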
We investigate the zero-temperature phase diagram and spin-wave properties of a double exchange magnet with on-site Hubbard repulsion. It is shown that even within a simple Hartree--Fock approach this interaction (which is often omitted in theoretical treatments) leads to qualitatively important effects which are highly relevant in the context of experimental data for the colossal magnetoresistance compounds. These include the asymmetry of the doping dependence of the spin stiffness, and the zone-boundary ``softening'' of the spin wave dispersion. Effects of Hubbard repulsion on phase separation are analyzed as well. We also show that in the ferromagnetic phase, an unusual temperature-dependent effective electron-electron interaction arises at finite T. The mean-field scheme, however, does not yield the experimentally observed density of states depletion near the Fermi level. We speculate that a proper treatment of electron-electron interactions may be necessary for understanding both this important feature and, more generally, the physics of the colossal magnetoresistance phenomenon.
Effects of the On-Site Coulomb Repulsion in Double Exchange Magnets
We present a representative set of analytic stationary state solutions of the Nonlinear Schr\"odinger equation for a symmetric double square well potential for both attractive and repulsive nonlinearity. In addition to the usual symmetry preserving even and odd states, nonlinearity introduces quite exotic symmetry breaking solutions - among them are trains of solitons with different numbers and sizes of density lumps in the two wells. We use the symmetry breaking localized solutions to form macroscopic quantum superposition states and explore a simple model for the exponentially small tunneling splitting.
Bose-Einstein condensates in a one-dimensional double square well: Analytical solutions of the Nonlinear Schr\"odinger equation and tunneling splittings
The very recent Boomerang results give an estimate of unprecedented precision of the Cosmic Microwave Background anisotropies on sub--degree scales. A puzzling feature for theoretical cosmology is the low amplitude of the second acoustic peak. Through a qualitative discussion, we argue that a scarcely considered category of flat models, with a leptonic asymmetry, a high baryon density and a low cosmological constant seems to be in very good agreement with the data, while still being compatible with big bang nucleosynthesis and some other observational constraints. Although this is certainly not the only way to explain the data, we believe that these models deserve to be included in forthcoming likelihood analyses.
Remarks on the Boomerang results, the baryon density and the leptonic asymmetry
In a previous paper, Lo [1] used ergodic theory to derive a simple definite integral that provides an estimate of the view periods of ground stations to satellites. This assumes the satellites are in circular orbits with non-repeating ground tracks under linear $J_2$ perturbations. The novel feature is that this is done without propagating the trajectory. This accelerates telecommunications mission design and analysis by several orders of magnitude and greatly simplifies the process. In this paper, we extend the view period integral to elliptical orbits.
The Long-Term Forecast of Station View Periods for Elliptical Orbits
The massive upload of text on the internet creates a huge inverted index in information retrieval (IR) systems, which hurts their efficiency. The purpose of this research is to measure the effect of the Multi-Layer Similarity model of automatic text summarization on building an informative and condensed inverted index in IR systems. To achieve this purpose, we summarized a considerable number of documents using the Multi-Layer Similarity model, and we built the inverted index from the automatic summaries generated by this model. A series of experiments was conducted to test the performance in terms of efficiency and relevancy. The experiments include comparisons with three existing text summarization models: the Jaccard Coefficient model, the Vector Space model, and the Latent Semantic Analysis model. The experiments examined three groups of queries with manual and automatic relevancy assessment. The positive effect of the Multi-Layer Similarity model on the efficiency of the IR system was clear, without noticeable loss in relevancy. However, the evaluation showed that the traditional statistical models without semantic investigation failed to improve information retrieval efficiency. Compared with previous publications that addressed the use of summaries as an index source, the relevancy assessment of our work was higher, and the Multi-Layer Similarity retrieval constructed an inverted index that was 58% smaller than the inverted index of the main corpus.
The Effect of the Multi-Layer Text Summarization Model on the Efficiency and Relevancy of the Vector Space-based Information Retrieval
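To make the index-size effect concrete, here is a minimal sketch of building an inverted index from summaries rather than full texts. The whitespace tokenization and toy data are simplifying assumptions, not the paper's pipeline.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

full_texts = {1: "the cat sat on the mat", 2: "dogs chase cats in the park"}
summaries = {1: "cat sat mat", 2: "dogs chase cats"}

full_index = build_inverted_index(full_texts)
summary_index = build_inverted_index(summaries)

# Indexing summaries yields fewer distinct terms and postings; the paper
# reports an inverted index 58% smaller than the full-corpus index.
print(len(full_index), len(summary_index))
```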
The purpose of the present research is to investigate model mixed boundary value problems for the Helmholtz equation in a planar angular domain $\Omega_\alpha\subset\mathbb{R}^2$ of magnitude $\alpha$. The BVP is considered in a non-classical setting when a solution is sought in the Bessel potential spaces $\mathbb{H}^s_p(\Omega_\alpha)$, $s>1/p$, $1<p<\infty$. The problems are investigated using the potential method by reducing them to an equivalent boundary integral equation (BIE) in the Sobolev-Slobode\v{c}kii space on a semi-infinite axis, $\mathbb{W}^{s-1/p}_p(\mathbb{R}^+)$, which is of Mellin convolution type. By applying recent results on Mellin convolution equations in Bessel potential spaces obtained by V. Didenko \& R. Duduchava in \cite{DD16}, explicit conditions for the unique solvability of this BIE in the Sobolev-Slobode\v{c}kii space $\mathbb{W}^r_p(\mathbb{R}^+)$ and the Bessel potential space $\mathbb{H}^r_p(\mathbb{R}^+)$ for arbitrary $r$ are found and used to write explicit conditions for the Fredholm property and unique solvability of the initial model BVPs for the Helmholtz equation in the above-mentioned non-classical setting. The same problem was investigated in an earlier paper by the authors published in 2013, but fatal errors were made there; in the present paper we correct those results.
Mixed boundary value problems for the Helmholtz equation in a model 2D angular domain
We have studied the metal-insulator-like transition (MIT) in lithium and beryllium ring-shaped clusters through the ab initio Density Matrix Renormalization Group (DMRG) method. Performing accurate calculations for different interatomic distances and using Quantum Information Theory (QIT), we investigated the changes occurring in the wavefunction between a metallic-like state and an insulating state built from free atoms. We also discuss entanglement and relevant excitations among the molecular orbitals in the Li and Be rings and show that the transition bond length can be detected using orbital entropy functions. The effect of different orbital bases on the effectiveness of the DMRG procedure is also analyzed by comparing the convergence behavior.
Investigation of metal-insulator like transition through the ab initio density matrix renormalization group approach
This paper presents an analysis of competition between generators when incentive-based demand response (DR) is employed in an electricity market. Thermal and hydropower generation are considered in the model. A smooth inverse demand function is designed using a sigmoid and two linear functions for modeling consumer preferences under the incentive-based demand response program. Generators compete to sell energy bilaterally to consumers, and the system operator provides transmission and arbitrage services. The profit of each agent is posed as an optimization problem; the competition result is then found by simultaneously solving the Karush-Kuhn-Tucker conditions for all generators. A Nash-Cournot equilibrium is found both when the system operates normally and at peak demand times when DR is required. Under this model, results show that DR diminishes the energy consumption at peak periods, shifts the power requirement to off-peak times and improves the net consumer surplus due to the incentives received for participating in the DR program. However, the generators see their profit decrease due to the reduction of traded energy and market prices.
A novel incentive-based demand response model for Cournot competition in electricity markets
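The smooth inverse demand function can be illustrated with one plausible construction: two linear price segments blended by a sigmoid switch. This is an assumed form for illustration only; the paper's exact parameterization may differ.

```python
import numpy as np

def inverse_demand(q, q0=50.0, k=0.2):
    """Price as a function of quantity q: a shallow linear segment at low
    demand and a steeper one near peak demand, blended smoothly by a
    sigmoid centered at q0. All coefficients are illustrative."""
    s = 1.0 / (1.0 + np.exp(-k * (q - q0)))  # smooth 0 -> 1 switch
    low = 100.0 - 0.5 * q
    high = 180.0 - 2.0 * q
    return (1.0 - s) * low + s * high

print(inverse_demand(np.linspace(0.0, 80.0, 5)))
```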
Recently, transformer-based methods have achieved impressive results on Video Instance Segmentation (VIS). However, most of these top-performing methods run in an offline manner by processing the entire video clip at once to predict instance mask volumes. This makes them incapable of handling the long videos that appear in challenging new video instance segmentation datasets like UVO and OVIS. We propose a fully online transformer-based video instance segmentation model that performs comparably to top offline methods on the YouTube-VIS 2019 benchmark and considerably outperforms them on UVO and OVIS. This method, called Robust Online Video Segmentation (ROVIS), augments the Mask2Former image instance segmentation model with track queries, a lightweight mechanism for carrying track information from frame to frame, originally introduced by the TrackFormer method for multi-object tracking. We show that, when combined with a strong enough image segmentation architecture, track queries can exhibit impressive accuracy while not being constrained to short videos.
Robust Online Video Instance Segmentation with Track Queries
Collective ferromagnetic motion in a conducting medium is damped by the transfer of the magnetic moment and energy to the itinerant carriers. We present a calculation of the corresponding magnetization relaxation as a linear-response problem for the carrier dynamics in the effective exchange field of the ferromagnet. In electron systems with little intrinsic spin-orbit interaction, a uniform magnetization motion can be formally eliminated by going into the rotating frame of reference for the spin dynamics. The ferromagnetic damping in this case grows linearly with the spin-flip rate when the latter is smaller than the exchange field and is inversely proportional to the spin-flip rate in the opposite limit. These two regimes are analogous to the "spin-pumping" and the "breathing Fermi-surface" damping mechanisms, respectively. In diluted ferromagnetic semiconductors, the hole-mediated magnetization can be efficiently relaxed to the itinerant-carrier degrees of freedom due to the strong spin-orbit interaction in the valence bands.
Mean-field magnetization relaxation in conducting ferromagnets
The paper aims to study the relation between the distributions of the young stellar objects (YSOs) of different ages and the gas-dust constituents of the S254-S258 star-formation complex. This is necessary to study the time evolution of the YSO distribution with respect to the gas and dust constituents which are responsible for the birth of the young stars. For this purpose we use correlation analysis between different gas, dust and YSO tracers. We compared the large-scale CO, HCO$^+$, near-IR extinction, and far-IR {\it Herschel} maps with the density of YSOs of the different evolutionary classes. The direct correlation analysis between these maps was used together with a wavelet-based spatial correlation analysis. This analysis reveals a much tighter correlation of the gas-dust tracers with the distribution of Class I YSOs than with that of Class II YSOs. We argue that Class I YSOs which were initially born in the central bright cluster S255-IR (both N and S parts) had, during their evolution to the Class II stage ($\sim$2 Myr), enough time to travel through the whole S254-S258 star-formation region. Given that the region contains several isolated YSO clusters, an evolutionary link between these clusters and the bright central S255-IR (N and S) cluster can be considered. Despite the complexity of YSO cluster formation in the non-uniform medium, the clusters of Class II YSOs in the S254-S258 star-formation region can contain objects born in different locations of the complex.
The link between gas and stars in the S254-S258 star-forming region
The lowest stationary quantum state of neutrons in the Earth's gravitational field is identified in the measurement of neutron transmission between a horizontal mirror on the bottom and an absorber on top. Such an assembly is not transparent for neutrons if the absorber height is smaller than the "height" of the lowest quantum state.
Measurement of quantum states of neutrons in the Earth's gravitational field
We introduce a novel functional time series methodology for short-term load forecasting. The prediction is performed by means of a weighted average of past daily load segments, the shape of which is similar to the expected shape of the load segment to be predicted. The past load segments are identified from the available history of the observed load segments by means of their closeness to a so-called reference load segment, the latter being selected in a manner that captures the expected qualitative and quantitative characteristics of the load segment to be predicted. Weak consistency of the suggested functional similar shape predictor is established. As an illustration, we apply the suggested functional time series forecasting methodology to historical daily load data in Cyprus and compare its performance to that of a recently proposed alternative functional time series methodology for short-term load forecasting.
Short-Term Load Forecasting: The Similar Shape Functional Time Series Predictor
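A rough sketch of the similar-shape idea, under two simplifying assumptions that are not the paper's construction: the reference segment is taken to be the most recently observed day, and the weights are plain inverse distances.

```python
import numpy as np

def similar_shape_forecast(history, k=5, eps=1e-8):
    """history: (n_days, m) array of daily load segments. Find the k past
    days whose shape is closest to the reference (here: the last observed
    day) and forecast tomorrow as a weighted average of the days that
    followed those matches."""
    ref = history[-1]
    dists = np.linalg.norm(history[:-1] - ref, axis=1)  # days with a next day
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)
    weights /= weights.sum()
    return weights @ history[nearest + 1]  # average of the following days

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 48)
history = np.array([np.sin(t) + 0.1 * rng.standard_normal(48) for _ in range(60)])
print(similar_shape_forecast(history)[:4])
```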
Configurable systems are those that can be adapted from a set of options. They are prevalent and testing them is important and challenging. Existing approaches for testing configurable systems are either unsound (i.e., they can miss fault-revealing configurations) or do not scale. This paper proposes EvoSPLat, a regression testing technique for configurable systems. EvoSPLat builds on our previously-developed technique, SPLat, which explores all dynamically reachable configurations from a test. EvoSPLat is tuned for two scenarios of use in regression testing: Regression Configuration Selection (RCS) and Regression Test Selection (RTS). EvoSPLat for RCS prunes configurations (not tests) that are not impacted by changes whereas EvoSPLat for RTS prunes tests (not configurations) which are not impacted by changes. Handling both scenarios in the context of evolution is important. Experimental results show that EvoSPLat is promising. We observed a substantial reduction in time (22%) and in the number of configurations (45%) for configurable Java programs. In a case study on a large real-world configurable system (GCC), EvoSPLat reduced 35% of the running time. Comparing EvoSPLat with sampling techniques, 2-wise was the most efficient technique, but it missed two bugs whereas EvoSPLat detected all bugs four times faster than 6-wise, on average.
Time-Space Efficient Regression Testing for Configurable Systems
The X-ray emission of O-type stars was first discovered in the early days of the Einstein satellite. Since then many different surveys have confirmed that the ratio of X-ray to bolometric luminosity in O-type stars is roughly constant, but there is a paucity of studies that account for detailed information on spectral and wind properties of O stars. Recently a significant sample of O stars within our Galaxy was spectroscopically identified and presented in the Galactic O-Star Spectroscopic Survey (GOSS). At the same time, a large high-fidelity catalog of X-ray sources detected by the XMM-Newton X-ray telescope was released. Here we present the X-ray catalog of O stars with known spectral types and investigate the dependence of their X-ray properties on spectral type as well as stellar and wind parameters. We find that, among the GOSS sample, 127 O stars have a unique XMM-Newton source counterpart and a Gaia Data Release 2 (DR2) association. Terminal velocities are known for a subsample of 35 of these stars. We confirm that the X-ray luminosities of dwarf and giant O stars correlate with their bolometric luminosity. For the subsample of O stars with measured terminal velocities we find that the X-ray luminosities of dwarf and giant O stars also correlate with wind parameters. However, we find that these correlations break down for supergiant stars. Moreover, we show that supergiant stars are systematically harder in X-rays compared to giant and dwarf O-type stars. We find that the X-ray luminosity depends on spectral type, but seems to be independent of whether the stars are single or in a binary system. Finally, we show that the distribution of log(Lx/Lbol) in our sample stars is non-Gaussian, with the peak of the distribution at log(Lx/Lbol) around -6.6.
The X-ray catalog of spectroscopically identified Galactic O stars: Investigating the dependence of X-ray luminosity on stellar and wind parameters
It is shown by particle-in-cell simulation that intense circularly polarized (CP) laser light can be contained in the cavity of a solid-density circular Al-plasma shell for hundreds of light-wave periods before it is dissipated by laser-plasma interaction. A right-hand CP laser pulse can propagate almost without reflection into the cavity through a highly magnetized overdense H-plasma slab filling the entrance hole. The entrapped laser light is then multiply reflected at the inner surfaces of the slab and shell plasmas, gradually losing energy to the latter. Compared to that of the incident laser, the frequency is only slightly broadened and the wave vector slightly modified by the appearance of weak, nearly isotropic and homogeneous higher harmonics.
Containing intense laser light in circular cavity with magnetic trap door
The classical Clarke subdifferential alone is inadequate for understanding automatic differentiation in nonsmooth contexts. Instead, we can sometimes rely on enlarged generalized gradients called "conservative fields", defined through the natural path-wise chain rule: one application is the convergence analysis of gradient-based deep learning algorithms. In the semi-algebraic case, we show that all conservative fields are in fact just Clarke subdifferentials plus normals of manifolds in underlying Whitney stratifications.
The structure of conservative gradient fields
This paper considers a traditional problem of resource allocation, scheduling jobs on machines. One such recent application is cloud computing, where jobs arrive in an online fashion with capacity requirements and need to be immediately scheduled on physical machines in data centers. It is often observed that the requested capacities are not fully utilized, which offers an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can induce a significant cost reduction for the cloud provider, while inducing only a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by modeling the problem as bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk-pooling effect and can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement and provide a constant-factor guarantee relative to the optimum. Finally, we calibrate our model using realistic workload data, and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitment in cloud services, and suggest a cost reduction of 1.5% to 17% depending on the provider's risk tolerance.
Overcommitment in Cloud Services -- Bin packing with Chance Constraints
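To see how a chance constraint caps overcommitment, consider a first-fit sketch under a Gaussian approximation of each bin's total load. This is a simplified stand-in: the paper instead transforms each chance constraint into a submodular function.

```python
import math

Z = {0.95: 1.645, 0.99: 2.326}  # standard-normal quantiles (assumed risk levels)

def first_fit_chance(items, capacity, alpha=0.95):
    """items: (mean, variance) demand pairs. Place each item in the first
    bin where P(load <= capacity) >= alpha still holds, approximating the
    bin load as Gaussian with pooled mean and variance."""
    z = Z[alpha]
    bins = []  # each bin tracked as [sum of means, sum of variances]
    for mu, var in items:
        for b in bins:
            if b[0] + mu + z * math.sqrt(b[1] + var) <= capacity:
                b[0] += mu; b[1] += var
                break
        else:
            bins.append([mu, var])
    return bins

jobs = [(2.0, 0.5), (3.0, 1.0), (1.5, 0.2), (2.5, 0.8), (1.0, 0.1)]
# Independent variances pool, so the safety margin grows sublinearly with the
# number of items in a bin -- the risk-pooling effect that makes
# overcommitment attractive.
print(first_fit_chance(jobs, capacity=8.0, alpha=0.95))
```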
Let L=d^2/dx^2+u(x) be the one-dimensional Schrodinger operator and H(x,y,t) the corresponding heat kernel. We prove that the nth Hadamard coefficient H_n(x,y) vanishes if and only if there exists a differential operator M of order 2n-1 such that L^{2n-1}=M^2. Thus, the heat expansion is finite if and only if the potential u(x) is a rational solution of the KdV hierarchy decaying at infinity, as studied in [1,2]. Equivalently, one can characterize the corresponding operators L as the rank one bispectral family in [8].
Finite heat kernel expansions on the real line
Two decades after its discovery, cosmic acceleration remains the most profound mystery in cosmology and arguably in all of physics. Either the Universe is dominated by a form of dark energy with exotic physical properties not predicted by standard model physics, or General Relativity is not an adequate description of gravity over cosmic distances. WFIRST emerged as a top priority of Astro2010 in part because of its ability to address the mystery of cosmic acceleration through both high precision measurements of the cosmic expansion history and the growth of cosmic structures with multiple and redundant probes. We illustrate in this white paper how mission design changes since Astro2010 have made WFIRST an even more powerful dark energy facility and have improved the ability of WFIRST to respond to changes in the experimental landscape. WFIRST is the space-based dark energy probe the community needs in the mid-2020s.
WFIRST: The Essential Cosmology Space Observatory for the Coming Decade
We use the Radial Baryon Acoustic Oscillation (RBAO) measurements, distant type Ia supernovae (SNe Ia), the observational $H(z)$ data (OHD) and the Cosmic Microwave Background (CMB) shift parameter data to constrain cosmological parameters of the $\Lambda$CDM and XCDM cosmologies and to further examine the role of OHD and SNe Ia data in cosmological constraints. We marginalize the likelihood function over $h$ by integrating the probability density $P\propto e^{-\chi^{2}/2}$ to obtain the best fitting results and the confidence regions in the $\Omega_{m}-\Omega_{\Lambda}$ plane. With the combined analysis of both the $\Lambda$CDM and XCDM models, we find that the confidence regions at the 68.3%, 95.4% and 99.7% levels using OHD+RBAO+CMB data are in good agreement with those of SNe Ia+RBAO+CMB data, which is consistent with the result of Lin et al.'s work. As more OHD become available, it may become possible to constrain cosmological parameters using OHD instead of SNe Ia data.
Cosmological constraints from Radial Baryon Acoustic Oscillation measurements and Observational Hubble data
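The marginalization over h can be made concrete with a small numerical sketch, assuming a flat prior on h and a toy chi-square profile:

```python
import numpy as np

def marginalized_chi2(chi2_grid, h_grid):
    """Given chi^2(h) on a uniform grid of h values for one point of the
    (Omega_m, Omega_Lambda) plane, integrate P ~ exp(-chi^2/2) over h
    (flat prior) and return the effective marginalized chi^2."""
    like = np.exp(-0.5 * (chi2_grid - chi2_grid.min()))  # shift for stability
    integral = like.sum() * (h_grid[1] - h_grid[0])      # rectangle rule
    return chi2_grid.min() - 2.0 * np.log(integral)

h = np.linspace(0.5, 0.9, 401)
chi2 = 100.0 * (h - 0.7) ** 2 + 5.0  # toy chi^2 profile in h
print(marginalized_chi2(chi2, h))
```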
We present a static and axisymmetric traversable wormhole spacetime with vanishing Arnowitt-Deser-Misner (ADM) mass which is characterized by a length parameter $l$ and a deformation parameter $a$ and reduces to the massless Kerr vacuum wormhole as $l\to 0$. The spacetime is analytic everywhere and regularizes a ring-like conical singularity of the massless Kerr wormhole by virtue of a localized exotic matter which violates the standard energy conditions only near the wormhole throat. In the spherically symmetric case ($a=0$), the areal radius of the wormhole throat is exactly $l$ and all the standard energy conditions are respected outside the proper radial distance approximately $1.60l$ from the throat. While the curvature at the throat is beyond the Planck scale if $l$ is identical to the Planck length $l_{\rm p}$, our wormhole may be a semi-classical model for $l\simeq 10l_{\rm p}$. With $l=10l_{\rm p}$, the total amount of the negative energy supporting this wormhole is only $E\simeq -26.5m_{\rm p}c^2$, which is the rest mass energy of about $-5.77\times 10^{-4}{\rm g}$. It is shown that the geodesic behavior on the equatorial plane does not qualitatively change by the localization of an exotic matter.
Simple traversable wormholes violating energy conditions only near the Planck scale
Many chemotactic bacteria inhabit environments in which chemicals appear as localized pulses and evolve by processes such as diffusion and mixing. We show that, in such environments, physical limits on the accuracy of temporal gradient sensing govern when and where bacteria can accurately measure the cues they use to navigate. Chemical pulses are surrounded by a predictable dynamic region, outside which bacterial cells cannot resolve gradients above noise. The outer boundary of this region initially expands in proportion to $\sqrt{t}$, before rapidly contracting. Our analysis also reveals how chemokinesis - the increase in swimming speed many bacteria exhibit when absolute chemical concentration exceeds a threshold - may serve to enhance chemotactic accuracy and sensitivity when the chemical landscape is dynamic. More generally, our framework provides a rigorous method for partitioning bacteria into populations that are "near" and "far" from chemical hotspots in complex, rapidly evolving environments such as those that dominate aquatic ecosystems.
Physical Limits on Bacterial Navigation in Dynamic Environments
In this paper, we define a new total order on R^N and use this order, together with the backward stochastic viability property (BSVP for short), to study the properties of the generator of a backward stochastic differential equation (BSDE for short) when the price of a contingent claim can be represented by a BSDE in the no-arbitrage financial market. The main result is a necessary and sufficient condition for the comparison theorem of multidimensional BSDEs under this order.
A new comparison theorem of multidimensional BSDEs
Swimming of a sphere in a viscous incompressible fluid is studied on the basis of the Navier-Stokes equations for wave-type distortions of the spherical shape. At sizable values of the dimensionless scale number the mean swimming velocity is the result of a delicate balance between the net time-averaged flow generated directly by the surface distortions and the flow generated by the mean Reynolds force density. Depending on the stroke, this can lead to a surprising dependence of the mean swimming velocity on the kinematic viscosity of the fluid. The net flow pattern is calculated as a function of kinematic viscosity for axisymmetric strokes of the swimming sphere. The calculation covers the full range of scale number, from the friction-dominated Stokes regime in the limit of vanishing scale number to the inertia-dominated regime at large scale number. The model therefore provides paradigmatic insight into the fluid dynamics of swimming or flying of a wide range of organisms.
Effect of fluid inertia on swimming of a sphere in a viscous incompressible fluid
A first-order Liouville theorem is obtained for random ensembles of uniformly parabolic systems under the mere qualitative assumptions of stationarity and ergodicity. Furthermore, the paper establishes, almost surely, an intrinsic large-scale $C^{1,\alpha}$-regularity estimate for caloric functions.
A Liouville theorem for stationary and ergodic ensembles of parabolic systems
We use a toy model to discuss the problem of parameterizing the possible contribution of a light scalar meson, the sigma, to the final state interactions in the nonleptonic decays of heavy mesons.
Isobar rescattering model and light scalar mesons
HD 98088 is a synchronised, double-lined spectroscopic binary system with a magnetic Ap primary component and an Am secondary component. We study this rare system using high-resolution MuSiCoS spectropolarimetric data, to gain insight into the effect of binarity on the origin of stellar magnetism and the formation of chemical peculiarities in A-type stars. Using a new collection of 29 high-resolution Stokes VQU spectra we re-derive the orbital and stellar physical parameters and conduct the first disentangling of spectroscopic observations of the system to conduct spectral analysis of the individual stellar components. From this analysis we determine the projected rotational velocities of the stars and conduct a detailed chemical abundance analysis of each component using both the SYNTH3 and ZEEMAN spectrum synthesis codes. The surface abundances of the primary component are typical of a cool Ap star, while those of the secondary component are typical of an Am star. We present the first magnetic analysis of both components using modern data. Using Least-Squares Deconvolution, we extract the longitudinal magnetic field strength of the primary component, which is observed to vary between +1170 and -920 G with a period consistent with the orbital period. There is no field detected in the secondary component. The magnetic field in the primary is predominantly dipolar, with the positive pole oriented approximately towards the secondary.
Orbital parameters, chemical composition, and magnetic field of the Ap binary HD 98088
Let $\| \cdot \|$ be the euclidean norm on ${\bf R}^n$ and $\gamma_n$ the (standard) Gaussian measure on ${\bf R}^n$ with density $(2 \pi )^{-n/2} e^{- \| x\|^2 /2}$. Let $\vartheta$ ($ \simeq 1.3489795$) be defined by $\gamma_1 ([ - \vartheta /2, \vartheta /2]) = 1/2$ and let $L$ be a lattice in ${\bf R}^n$ generated by vectors of norm $\leq \vartheta$. Then, for any closed convex set $V$ in ${\bf R}^n$ with $\gamma_n (V) \geq \frac{1}{2}$ and for any $a \in {\bf R}^n$, $(a + L) \cap V \neq \emptyset$. The above statement can be viewed as a ``nonsymmetric'' version of Minkowski's theorem.
Lattice coverings and gaussian measures of n-dimensional convex bodies
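The constant theta can be recovered directly from its definition: gamma_1([-theta/2, theta/2]) = 2 Phi(theta/2) - 1 = 1/2 forces Phi(theta/2) = 3/4, hence theta = 2 Phi^{-1}(3/4). A one-line numerical check:

```python
from scipy.stats import norm

# gamma_1([-t/2, t/2]) = Phi(t/2) - Phi(-t/2) = 2*Phi(t/2) - 1 = 1/2
# implies Phi(t/2) = 3/4, so t = 2 * Phi^{-1}(0.75).
theta = 2 * norm.ppf(0.75)
print(theta)  # 1.3489795...
```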
The star-formation rates and the stellar masses of the host galaxies of AGNs at high redshifts are keys to understanding the evolution of the relation between the mass of the spheroidal component of a galaxy and the mass of its central black hole. We investigate the host galaxies of 31 AGNs with spectroscopic redshifts between 2 and 4 found in the deep Chandra surveys of the GOODS fields with the ACS images. The sample can be divided into 17 ``extended'' AGNs and 14 ``compact'' AGNs based on the concentration parameter, defined as the difference between the aperture magnitudes with 0.07" and 0.25" diameter. We derive upper limits on the UV luminosities of the host galaxies of the ``compact'' AGN sample, and upper and lower limits on the UV luminosities of the host galaxies of the ``extended'' AGN sample. These limits are consistent with the knee of the luminosity function of the Lyman Break Galaxies (LBGs) at z=3, suggesting moderate star-formation rates, less than 40 Msolar/yr, in the host galaxies of the AGNs at 2<z_sp<4 without correcting for dust extinction. By combining the limits on the UV luminosities with the observed K-band magnitudes for the ``extended'' AGNs, we derive upper and lower limits on the stellar masses of their host galaxies. The derived upper limits on the stellar masses range from 10^10 Msolar to 10^12 Msolar. The upper limits imply that the Mbulge-MBH relation of the high-redshift AGNs is different from that of the galaxies in the nearby universe, or that the average Eddington ratio of the high-redshift AGNs is higher than that of lower-luminosity, low-redshift AGNs.
Host Galaxies of the High-redshift AGNs in the GOODS Fields
Following a recent method introduced by C. Hainzl, J.P. Solovej and the second author of this article, we prove the existence of the thermodynamic limit for a system made of quantum electrons, and classical nuclei whose positions and charges are randomly perturbed in an ergodic fashion. All the particles interact through Coulomb forces.
Existence of the thermodynamic limit for disordered quantum Coulomb systems
We study heat transport and thermoelectric effects in two-dimensional superconductors in a magnetic field. These are modeled as granular Josephson-junction arrays, forming either regular or random lattices. We employ two different models for the dynamics, relaxational model-A dynamics or resistively and capacitively shunted Josephson junction (RCSJ) dynamics. We derive expressions for the heat current in these models, which are then used in numerical simulations to calculate the heat conductivity and the Nernst coefficient for different temperatures and magnetic fields. At low temperatures and zero magnetic field the heat conductivity in the RCSJ model is calculated analytically from a spin wave approximation, and is seen to have an anomalous logarithmic dependence on the system size, and also to diverge in the completely overdamped limit C -> 0. From our simulations we find at low magnetic fields that the Nernst signal displays a characteristic "tilted hill" profile similar to experiments and a non-monotonic temperature dependence of the heat conductivity. We also investigate the effects of granularity and randomness, which become important for higher magnetic fields. In this regime geometric frustration strongly influences the results in both regular and random systems and leads to highly non-trivial magnetic field dependencies of the studied transport coefficients.
Influence of vortices and phase fluctuations on thermoelectric transport properties of superconductors in a magnetic field
In this work, we have investigated a 2D model of band-to-band tunneling based on the 2-band model and implemented it using the 2D NEGF formalism. Being 2D in nature, this model better addresses the variation in the directionality of the tunneling process occurring in most practical TFET device structures. It also works as a compromise between semi-classical and multiband quantum simulation of TFETs. We present a rigorous step-by-step mathematical development of the numerical model. We also discuss how this model can be implemented in simulators and point out a few optimizations that can be made to reduce complexity and save time. Finally, we perform elaborate simulations for a practical TFET design and compare the results with commercially available TCAD simulations, to point out the limitations of the simplistic models that are frequently used and to show how our model overcomes these limitations.
An Improved Physics Based Numerical Model of Tunnel FET Using 2D NEGF Formalism
We aim to improve our picture of the low chromosphere in the quiet-Sun internetwork by investigating the intensity, horizontal velocity, size and lifetime variations of small bright points (BPs; diameter smaller than 0.3 arcsec) observed in the Ca II H 3968 {\AA} passband, along with their magnetic field parameters derived from photospheric magnetograms. Several high-quality time series of disc-centre, quiet-Sun observations from the Sunrise balloon-borne solar telescope, with a spatial resolution of around 100 km on the solar surface, have been analysed to study the dynamics of BPs observed in the Ca II H passband and their dependence on the photospheric vector magnetogram signal. Histograms of the horizontal velocity, diameter, intensity and lifetime of the isolated internetwork and magnetic Ca II H BPs were determined. Mean values were found to be 2.2 km/s, 0.2 arcsec (150 km), 1.48 times the average quiet-Sun Ca II H intensity, and 673 s, respectively. Interestingly, the brightness and the horizontal velocity of BPs are anti-correlated. Large excursions (pulses) in horizontal velocity, up to 15 km/s, are present in the trajectories of most BPs. These could excite kink waves travelling into the chromosphere and possibly the corona, which we estimate to carry an energy flux of 310 W/m^2, sufficient to heat the upper layers, although only marginally. The stable observing conditions of Sunrise and our technique for identifying and tracking BPs have allowed us to determine reliable parameters of these features in the internetwork. Thus we find, e.g., that they are considerably longer lived than previously thought. The large velocities are also reliable, and may excite kink waves. Although these waves are (marginally) energetic enough to heat the quiet corona, we expect a large additional contribution from larger magnetic elements populating the network and partly also the internetwork.
Structure and Dynamics of Isolated Internetwork Ca II H Bright Points Observed by Sunrise
The direct imaging from the ground of extrasolar planets has become a major astronomical and biological focus. This kind of imaging requires the simultaneous use of a dedicated high-performance Adaptive Optics [AO] system and a differential imaging camera in order to cancel out the flux coming from the star. In addition, the use of sophisticated post-processing techniques is mandatory to achieve the ultimate detection performance required. In the framework of the SPHERE project, we present here the development of a new technique, based on a Maximum A Posteriori [MAP] approach, able to estimate the parameters of a faint companion in the vicinity of a bright star, using the multi-wavelength images, the AO closed-loop data, as well as some knowledge of non-common-path and differential aberrations. Simulation results show a 10^-5 detectivity at 5 sigma for angular separations around 15 lambda/D with only two images.
Post processing of differential images for direct extrasolar planet detection from the ground
Mitigating the risk arising from extreme events is a fundamental goal with many applications, such as the modelling of natural disasters, financial crashes, epidemics, and many others. To manage this risk, a vital step is to be able to understand or generate a wide range of extreme scenarios. Existing approaches based on Generative Adversarial Networks (GANs) excel at generating realistic samples, but seek to generate typical samples, rather than extreme samples. Hence, in this work, we propose ExGAN, a GAN-based approach to generate realistic and extreme samples. To model the extremes of the training distribution in a principled way, our work draws from Extreme Value Theory (EVT), a probabilistic approach for modelling the extreme tails of distributions. For practical utility, our framework allows the user to specify both the desired extremeness measure, as well as the desired extremeness probability they wish to sample at. Experiments on real US Precipitation data show that our method generates realistic samples, based on visual inspection and quantitative measures, in an efficient manner. Moreover, generating increasingly extreme examples using ExGAN can be done in constant time (with respect to the extremeness probability $\tau$), as opposed to the $\mathcal{O}(\frac{1}{\tau})$ time required by the baseline approach.
ExGAN: Adversarial Generation of Extreme Samples
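The EVT ingredient can be illustrated independently of the GAN: fit a Generalized Pareto Distribution (GPD) to exceedances over a high threshold and invert the tail CDF at a user-specified extremeness probability tau, which takes constant time in tau. This peaks-over-threshold sketch uses Gaussian toy data and is a stand-in, not the ExGAN code.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.standard_normal(100_000)  # toy stand-in for, e.g., precipitation

# Peaks-over-threshold: fit a GPD to exceedances over the 95th percentile.
tail_frac = 0.05
u = np.quantile(data, 1.0 - tail_frac)
shape, _, scale = genpareto.fit(data[data > u] - u, floc=0.0)

def sample_at_extremeness(tau, n=1):
    """Draw samples whose exceedance probability is at most tau (tau must be
    below tail_frac, the fraction modelled by the GPD). Constant time in tau,
    unlike rejection sampling, which needs O(1/tau) draws."""
    p = tau / tail_frac  # conditional exceedance probability within the tail
    qs = rng.uniform(0.0, p, size=n)
    return u + genpareto.ppf(1.0 - qs, shape, loc=0.0, scale=scale)

print(sample_at_extremeness(0.001, n=3))
```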
Charge density wave (CDW) correlations have recently been shown to universally exist in cuprate superconductors. However, their nature at high fields inferred from nuclear magnetic resonance is distinct from that measured by x-ray scattering at zero and low fields. Here we combine a pulsed magnet with an x-ray free electron laser to characterize the CDW in YBa2Cu3O6.67 via x-ray scattering in fields up to 28 Tesla. While the zero-field CDW order, which develops below T ~ 150 K, is essentially two-dimensional, at lower temperature and beyond 15 Tesla, another three-dimensionally ordered CDW emerges. The field-induced CDW onsets around the zero-field superconducting transition temperature, yet the incommensurate in-plane ordering vector is field-independent. This implies that the two forms of CDW and high-temperature superconductivity are intimately linked.
Three-Dimensional Charge Density Wave Order in YBa2Cu3O6.67 at High Magnetic Fields
The (heterotic) double field theories and the exceptional field theories were recently developed for the manifestly duality-covariant formulation of various supergravity theories, describing the low-energy limit of various (heterotic) superstring and M-theory compactifications. These field theories are known to reduce to the standard descriptions by introducing an appropriately parameterized generalized metric and by applying suitably chosen section conditions. We generalize this development to non-geometric backgrounds by utilizing dual fields pertinent to non-geometric fluxes. We introduce different parameterizations for the generalized metric, in terms of the conventional supergravity fields or the dual fields. Under certain simplifying assumptions, we construct a new effective action for non-geometric backgrounds. We then obtain the non-geometric backgrounds sourced by exotic branes and construct their $U$-duality monodromy matrices. The charge of exotic branes obtained from these monodromy matrices agrees perfectly with the charge obtained from the non-geometric flux integral.
Effective Action for Non-Geometric Fluxes from Duality Covariant Actions
Smart power grids are among the most complex cyber-physical systems, delivering electricity from power generation stations to consumers. It is critically important to know exactly the current state of the system as well as its state variation tendency; consequently, state estimation and state forecasting are widely used in smart power grids. Given that state forecasting predicts the system state ahead of time, it can enhance state estimation, because state estimation is highly sensitive to measurement corruption due to bad data or communication failures. In this paper, a hybrid deep learning-based method is proposed for power system state forecasting. The proposed method leverages a Convolutional Neural Network (CNN) for predicting voltage magnitudes and a Deep Recurrent Neural Network (RNN) for predicting phase angles. The proposed CNN-RNN model is evaluated on the IEEE 118-bus benchmark. The results demonstrate that the proposed CNN-RNN model achieves better results than the existing techniques in the literature, reducing the normalized Root Mean Squared Error (RMSE) of predicted voltages by 10%. The results also show a 65% and 35% decrease in the average and maximum absolute error of voltage magnitude forecasting.
A Hybrid Deep Learning-Based State Forecasting Method for Smart Power Grids
As language models (LMs) become increasingly powerful, it is important to quantify and compare them for sociodemographic bias with potential for harm. Prior bias measurement datasets are sensitive to perturbations in their manually designed templates and are therefore unreliable. To achieve reliability, we introduce the Comprehensive Assessment of Language Model bias (CALM), a benchmark dataset to quantify bias in LMs across three tasks. We integrate 16 existing datasets across different domains, such as Wikipedia and news articles, to filter 224 templates from which we construct a dataset of 78,400 examples. We compare the diversity of CALM with prior datasets on metrics such as average semantic similarity and variation in template length, and test the sensitivity to small perturbations. We show that our dataset is more diverse and reliable than previous datasets, thus better capturing the breadth of linguistic variation required to reliably evaluate model bias. We evaluate 20 large language models, including six prominent families of LMs such as Llama-2. In two LM series, OPT and Bloom, we found that larger parameter models are more biased than lower parameter models. We found the T0 series of models to be the least biased. Furthermore, we noticed a tradeoff between gender and racial bias with increasing model size in some model series. The code is available at https://github.com/vipulgupta1011/CALM.
CALM : A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias
Alphanumeric authentication routinely fails to regulate access to resources with the required stringency, primarily due to usability issues. Initial deployment did not reveal the problems of passwords; deep and profound flaws emerged only once passwords were deployed in the wild. The need for a replacement is widely acknowledged, yet despite over a decade of research into knowledge-based alternatives, few, if any, have been adopted by industry. Alternatives are unconvincing for three primary reasons. The first is that alternatives are rarely investigated beyond the initial proposal, with only the results from a constrained lab test provided to convince adopters of their viability. The second is that alternatives are seldom tested realistically, where the authenticator mediates access to something of value. The third is that the testing rarely varies the device or context beyond that initially targeted. In the modern world different devices are used across a variety of contexts. What works well in one context may easily fail in another. Consequently, the contribution of this paper is an "in the wild" evaluation of an alternative authentication mechanism that had demonstrated promise in its lab evaluation. In the field test the mechanism was deployed to actual users to regulate access to an application in a context beyond that initially proposed. The performance of the mechanism is reported and discussed. We conclude by reflecting on the value of field evaluations of alternative authentication mechanisms.
Alternative Authentication in the Wild
We study a discrete-time model where each packet has a cost of not being sent -- this cost might depend on the packet content. We study the tradeoff between the age and the cost where the sender is confined to packet-based strategies. The optimal tradeoff is found by an appropriate formulation of the problem as a Markov Decision Process (MDP). We show that the optimal tradeoff can be attained with finite-memory policies and we devise an efficient policy iteration algorithm to find these optimal policies. We further study a related problem where the transmitted packets are subject to erasures. We show that the optimal policies for our problem are also optimal for this new setup. Allowing coding across packets significantly extends the packet-based strategies. We show that when the packet payloads are small, the performance can be improved by coding.
Optimal Policies for Age and Distortion in a Discrete-Time Model
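For a finite-state formulation, the policy iteration the authors devise can be illustrated by the generic textbook algorithm below, applied to a toy age/cost chain. The toy transition and reward structure is an assumption for illustration, not the paper's exact MDP.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Generic policy iteration for a finite MDP.
    P[a][s, s'] = transition probability, R[a][s] = expected reward."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Toy model: state = current age, capped at 3. Action 0 = idle (age grows),
# action 1 = send (pays a transmission cost, resets age). Reward = -age,
# minus the cost when sending.
n = 4
idle, send = np.zeros((n, n)), np.zeros((n, n))
for s in range(n):
    idle[s, min(s + 1, n - 1)] = 1.0
    send[s, 0] = 1.0
P = [idle, send]
R = [np.arange(n, dtype=float) * -1.0, np.arange(n, dtype=float) * -1.0 - 1.5]
print(policy_iteration(P, R))
```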
In this work we explore the intriguing connections between searches for long-lived particles (LLPs) at the LHC and early universe cosmology. We study the non-thermal production of ultra-relativistic particles (i.e. dark radiation) in the early universe via the decay of weak-scale LLPs and show that the cosmologically interesting range $\Delta N_\text{eff} \sim 0.01-0.1$ corresponds to LLP decay lengths in the mm to cm range. These decay lengths lie at the boundary between prompt and displaced signatures at the LHC and can be comprehensively explored by combining searches for both. To illustrate this point, we consider a scenario where the LLP decays into a charged lepton and a (nearly) massless invisible particle. By reinterpreting searches for promptly decaying sleptons and for displaced leptons at both ATLAS and CMS we can then directly compare LHC exclusions with cosmological observables. We find that the CMB-S4 target value of $\Delta N_\text{eff}=0.06$ is already excluded by current LHC searches and even smaller values can be probed for LLP masses at the electroweak scale.
Searching for dark radiation at the LHC
Firstly, we study the final masses of giant planets growing in protoplanetary disks through capture of disk gas, by employing an empirical formula for the gas capture rate and a shallow disk gap model, which are both based on hydrodynamical simulations. The shallow disk gaps cannot terminate growth of giant planets. For planets less massive than 10 Jupiter masses, their growth rates are mainly controlled by the gas supply through the global disk accretion, rather than their gaps. The insufficient gas supply compared with the rapid gas capture causes a depletion of the gas surface density even at the outside of the gap, which can create an inner hole in the protoplanetary disk. Our model can also predict the depleted gas surface density in the inner hole for a given planet mass. Secondly, our findings are applied to the formation of our solar system. For the formation of Jupiter, a very low-mass gas disk with a few or several Jupiter masses is required at the beginning of its gas capture because of the non-stopping capture. Such a low-mass gas disk with sufficient solid material can be formed through viscous evolution from an initially $\sim$10AU-sized compact disk with the solar composition. By the viscous evolution with a moderate viscosity of $\alpha \sim 10^{-3}$, most of disk gas accretes onto the sun and a widely spread low-mass gas disk remains when the solid core of Jupiter starts gas capture at $t \sim 10^7$ yrs. The depletion of the disk gas is suitable for explaining the high metallicity in giant planets of our solar system. A very low-mass gas disk also provides a plausible path where type I and II planetary migrations are both suppressed significantly. In particular, we also show that the type II migration of Jupiter-size planets becomes inefficient because of the additional gas depletion due to the rapid gas capture by themselves.
Final Masses of Giant Planets II: Jupiter Formation in a Gas-Depleted Disk
(Abridged) We have used the VLA to search for neutral atomic hydrogen in the circumstellar envelopes of five AGB stars. We have detected HI 21-cm emission coincident in both position and velocity with the semi-regular variable RS Cnc. The emission comprises a compact, slightly elongated feature centered on the star with a mean diameter ~82'' (1.5e17 cm), plus an additional filament extending ~6' to the NW. This filament suggests that a portion of the mass loss is highly asymmetric. We estimate MHI=1.5e-3 Msun and M_dot~1.7e-7 Msun/yr. Toward R Cas, we detect weak emission that peaks at the stellar systemic velocity and overlaps with the location of its circumstellar dust shell and thus is probably related to the star. In the case of IRC+10216, we were unable to confirm the detection of HI in absorption against the cosmic background previously reported by Le Bertre & Gerard. However, we detect arcs of emission at projected distances of r~14'-18' (~2e18 cm) to the NW. A large separation of the emission from the star is plausible given its advanced evolutionary status, although it is unclear if the asymmetric distribution and complex velocity structure are consistent with a circumstellar origin. For EP Aqr, we detected HI emission comprising multiple clumps redward of the systemic velocity, but we are unable to determine unambiguously whether the emission arises from the circumstellar envelope or from interstellar clouds along the line-of-sight. Regardless of the adopted distance for the clumps, their inferred HI masses are at least an order of magnitude smaller than their individual binding masses. We detected our fifth target, R Aqr (a symbiotic binary), in the 1.4 GHz continuum, but did not detect any HI emission from the system.
VLA Observations of HI in the Circumstellar Envelopes of Asymptotic Giant Branch Stars
The azimuthal correlations of D mesons and charged particles were measured with the ALICE detector in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV at the Large Hadron Collider. D$^0$, D$^+$, and D$^{*+}$ mesons and their charge conjugates with transverse momentum $3<p_{\rm T}<16$ GeV/$c$ and rapidity in the nucleon-nucleon centre-of-mass system $|y_{\rm cms}|<0.5$ (pp collisions) and $-0.96<y_{\rm cms}<0.04$ (p-Pb collisions) were correlated to charged particles with $p_{\rm T}>0.3$ GeV/$c$. The properties of the correlation peak induced by the jet containing the D meson, described in terms of the yield of charged particles in the peak and the peak width, are compatible within uncertainties between the two collision systems, and are described by Monte Carlo simulations based on the PYTHIA, POWHEG and EPOS 3 event generators.
Measurement of azimuthal correlations of D mesons and charged particles in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
We present a potential-density pair designed to model nearly isothermal star clusters (and similar self-gravitating systems) with a central core and an outer turnover radius, beyond which density falls off as $r^{-4}$. In the intermediate zone, the profile is similar to that of an isothermal sphere (density $\rho \propto r^{-2}$), somewhat less steep than the King 62 profile, and with the advantage that many dynamical quantities can be written in a simple closed form. We derive new analytic expressions for the cluster binding energy and velocity dispersion, and apply these to create toy models for cluster core collapse and evaporation. We fit our projected surface brightness profiles to observed globular and open clusters, and find that the quality of the fit is generally at least as good as that for the surface brightness profiles of King 62. This model can be used for convenient computation of the dynamics and evolution of globular and nuclear star clusters.
A Dynamical Potential-Density Pair for Star Clusters With Nearly Isothermal Interiors
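While the paper's exact potential-density pair is not reproduced here, a density with the stated asymptotics (a core, an isothermal-like r^-2 zone, and an r^-4 falloff beyond the turnover radius) can be sketched and checked numerically:

```python
import numpy as np

def rho(r, rho0=1.0, rc=1.0, rt=100.0):
    """Illustrative density: constant for r << rc, ~ r^-2 for rc << r << rt,
    and ~ r^-4 beyond the turnover radius rt. A stand-in with the stated
    asymptotics, not necessarily the paper's exact profile."""
    return rho0 / ((1.0 + (r / rc) ** 2) * (1.0 + (r / rt) ** 2))

# Check the local logarithmic slope d ln(rho) / d ln(r) in each zone.
for r in (0.01, 10.0, 1.0e4):
    h = 1e-4
    slope = (np.log(rho(r * (1 + h))) - np.log(rho(r))) / np.log(1 + h)
    print(f"r = {r:10.2f}: slope ~ {slope:5.2f}")  # ~0, ~-2, ~-4
```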
Recently, convolutional neural networks (CNNs) have demonstrated significant success in image restoration (IR) tasks (e.g., image super-resolution, image deblurring, rain streak removal, and dehazing). However, existing CNN-based models are commonly implemented as a single-path stream to enrich feature representations from the low-quality (LQ) input space for final predictions, which fails to fully incorporate preceding low-level contexts into later high-level features within the networks, thereby producing inferior results. In this paper, we present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction. The proposed DIN follows a multi-path and multi-branch pattern, allowing multiple interconnected branches to interleave and fuse at different states. In this way, shallow information can guide deep representative feature prediction to enhance the feature expression ability. Furthermore, we propose asymmetric co-attention (AsyCA), which is attached at each interleaved node to model the feature dependencies. Such AsyCA can not only adaptively emphasize the informative features from different states, but also improve the discriminative ability of networks. Our presented DIN can be trained end-to-end and applied to various IR tasks. Comprehensive evaluations on public benchmarks and real-world datasets demonstrate that the proposed DIN performs favorably against the state-of-the-art methods, quantitatively and qualitatively.
Learning Deep Interleaved Networks with Asymmetric Co-Attention for Image Restoration
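A hedged PyTorch sketch of the fusion role such a co-attention module can play: features from two interleaved states are merged with channel weights predicted from their concatenation, in the style of a squeeze-and-excitation gate. The module name, layer sizes, and gating form here are illustrative assumptions, not the paper's exact AsyCA design.

```python
import torch
import torch.nn as nn

class AsyCAFuse(nn.Module):
    """Illustrative channel-attentive fusion of two feature states."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze spatial dims
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, shallow, deep):
        a = self.gate(torch.cat([shallow, deep], dim=1))    # per-channel weights
        return a * shallow + (1 - a) * deep                 # adaptive mix

x1, x2 = torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16)
print(AsyCAFuse(32)(x1, x2).shape)                          # torch.Size([1, 32, 16, 16])
```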
We solve the problem of expressing the Weyl scalars $\psi$ that describe gravitational perturbations of a Kerr black hole in terms of Cauchy data. To do so we use geometrical identities (like the Gauss-Codazzi relations) as well as Einstein equations. We are able to explicitly express $\psi$ and $\partial_t\psi$ as functions only of the extrinsic curvature and the three-metric (and geometrical objects built out of it) of a generic spacelike slice of the spacetime. These results provide the link between initial data and $\psi$ to be evolved by the Teukolsky equation, and can be used to compute the gravitational radiation generated by two orbiting black holes in the close limit approximation. They can also be used to extract waveforms from spacetimes completely generated by numerical methods.
The imposition of Cauchy data to the Teukolsky equation III: The rotating case
Recent advances in classical machine learning have shown that creating models with inductive biases encoding the symmetries of a problem can greatly improve performance. The importation of these ideas, combined with an existing rich body of work at the nexus of quantum theory and symmetry, has given rise to the field of Geometric Quantum Machine Learning (GQML). Following the success of its classical counterpart, it is reasonable to expect that GQML will play a crucial role in developing problem-specific and quantum-aware models capable of achieving a computational advantage. Despite the simplicity of the main idea of GQML -- create architectures respecting the symmetries of the data -- its practical implementation requires a significant amount of knowledge of group representation theory. We present an introduction to representation-theoretic tools through the lens of quantum learning, driven by key examples involving discrete and continuous groups. These examples are sewn together by an exposition outlining the formal capture of GQML symmetries via "label invariance under the action of a group representation", a brief (but rigorous) tour through finite and compact Lie group representation theory, a reexamination of ubiquitous tools like Haar integration and twirling, and an overview of some successful strategies for detecting symmetries.
Representation Theory for Geometric Quantum Machine Learning
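A minimal numpy sketch of the twirling operation mentioned above: averaging an operator over a unitary representation projects it onto the commutant of that representation, so the result commutes with every group element. The two-qubit group {identity, SWAP} used here is an illustrative choice.

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
group = [np.eye(4), SWAP]   # unitary representation of the two-element group

def twirl(X, reps):
    """Average X over the representation: projects onto its commutant."""
    return sum(U @ X @ U.conj().T for U in reps) / len(reps)

X = np.random.default_rng(0).standard_normal((4, 4))
TX = twirl(X, group)
assert np.allclose(TX @ SWAP, SWAP @ TX)   # twirled operator commutes with U(g)
```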
We investigate the many-body properties of a two-dimensional electron gas constrained to the surface of a sphere, a system which is physically realized in multielectron bubbles in liquid helium. A second-quantization formalism, suited for the treatment of a spherical two-dimensional electron gas (S2DEG), is introduced. Within this formalism, the dielectric response properties of the S2DEG are derived, and we identify both collective excitations and a spectrum of single-particle excitations. We find that the single-particle excitations are constrained to a well-defined region in the angular momentum - energy plane. The collective excitations differ in two important aspects from those of a flat 2DEG: on a sphere, the 'spherical plasmons' have a discrete frequency spectrum and the lowest frequency is nonzero.
On the Spherical Two-Dimensional Electron Gas
We introduce a code generator that converts unoptimized C++ code operating on sparse data into vectorized and parallel CPU or GPU kernels. Our approach unrolls the computation into a massive expression graph, performs redundant-expression elimination and grouping, and then generates an architecture-specific kernel that solves the same problem, assuming that the sparsity pattern is fixed -- a common scenario in many applications in computer graphics and scientific computing. We show that our approach scales to large problems and can achieve speedups of two orders of magnitude on CPUs and three orders of magnitude on GPUs, compared to a set of manually optimized CPU baselines. To demonstrate the practical applicability of our approach, we employ it to optimize popular algorithms with applications to physical simulation and interactive mesh deformation.
Sparsity-Specific Code Optimization using Expression Trees
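A toy Python sketch of the unroll-then-deduplicate idea (the actual system emits C++/CUDA kernels and its graph passes are far richer): hash-consing makes structurally identical subexpressions a single node, so a generator emitting one statement per unique node eliminates redundant work, which is sound precisely because the sparsity pattern is assumed fixed.

```python
class Node:
    _cache = {}  # (op, args) -> Node, for redundant-expression elimination

    def __new__(cls, op, *args):
        key = (op, args)
        if key not in cls._cache:
            node = super().__new__(cls)
            node.op, node.args = op, args
            cls._cache[key] = node
        return cls._cache[key]

def unroll_spmv(rows, cols, n_rows):
    """Unroll y = A @ x for a fixed sparsity pattern given by (rows, cols)."""
    out = [Node("const", 0.0) for _ in range(n_rows)]
    for r, c in zip(rows, cols):
        term = Node("mul", Node("A", r, c), Node("x", c))
        out[r] = Node("add", out[r], term)
    return out

y = unroll_spmv(rows=[0, 0, 1], cols=[2, 5, 2], n_rows=2)
# Node("x", 2) is created once and shared by both output rows, so a code
# generator emitting one statement per unique node avoids redundant loads.
```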
A coloring of the $\ell$-dimensional faces of $Q_n$ is called $d$-polychromatic if every embedded $Q_d$ has every color on at least one face. Denote by $p^\ell(d)$ the maximum number of colors such that any $Q_n$ can be colored in this way. We provide a new lower bound on $p^\ell(d)$ for $\ell > 1$.
Linear polychromatic colorings of hypercube faces
The development of renewable energy generation empowers microgrids to generate electricity to supply themselves and to trade the surplus on energy markets. To minimize the overall cost, a microgrid must determine how to schedule its energy resources and electrical loads and how to trade with others. The control decisions are influenced by various factors, such as energy storage, renewable energy yield, electrical load, and competition from other microgrids. Making the optimal control decision is challenging due to the complexity of the interconnected microgrids, the uncertainty of renewable energy generation and consumption, and the interplay among microgrids. Previous works mainly adopted model-based approaches for deriving the control decision, yet these rely on precise information about future system dynamics, which can be hard to obtain in a complex environment. This work provides a new perspective on obtaining the optimal control policy for distributed energy trading and scheduling by directly interacting with the environment, and proposes a multiagent deep reinforcement learning approach for learning the optimal control policy. Each microgrid is modeled as an agent, and different agents learn collaboratively to maximize their rewards. The agent of each microgrid can make its local scheduling decision without knowing others' information, which preserves the autonomy of each microgrid. We evaluate the performance of our proposed method using real-world datasets. The experimental results show that our method can significantly reduce the cost of the microgrids compared with the baseline methods.
Distributed Energy Trading and Scheduling among Microgrids via Multiagent Reinforcement Learning
We consider an inverse-source two-parameter sub-diffusion model subject to a nonlocal initial condition. The problem models several physical processes, among them microwave heating and light propagation in photoelectric cells. A bi-orthogonal pair of bases is employed to construct a series representation of the solution and a Volterra integral equation for the source term. We develop a numerical algorithm for approximating the unknown time-dependent source term. Due to the singularity of the solution near $t=0$, a graded mesh is used to improve the convergence rate. Numerical experiments are provided to illustrate the expected analytical order of convergence.
Inverse source in two-parameter anomalous diffusion, numerical algorithms and simulations over graded time-meshes
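A minimal sketch of the standard graded time mesh used for this purpose, assuming the common form $t_j = T(j/N)^r$: the grading exponent $r \ge 1$ clusters points near the singularity at $t=0$, and $r=1$ recovers the uniform mesh.

```python
import numpy as np

def graded_mesh(T, N, r):
    """t_j = T * (j/N)**r; r >= 1 clusters nodes near the t = 0 singularity."""
    return T * (np.arange(N + 1) / N) ** r

print(graded_mesh(1.0, 5, 1.0))  # uniform: 0.0, 0.2, ..., 1.0
print(graded_mesh(1.0, 5, 2.0))  # graded:  0.0, 0.04, 0.16, 0.36, 0.64, 1.0
```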
Fe3-xGeTe2 is an itinerant ferromagnet composed of two-dimensional layers weakly connected by van der Waals bonding that shows a variety of intriguing phenomena. Inelastic neutron scattering measurements on bulk single crystals of Fe2.75GeTe2 were performed to quantify the magnetic exchange interaction energies and anisotropy. The observed inelastic excitations are indicative of dominant in-plane correlations with negligible magnetic interactions between the layers. A spin gap of 3.9 meV is observed, allowing a measure of the magnetic anisotropy. As the excitations disperse to their maximum energy (~65 meV) they become highly damped, reflecting both the 25% reduction of the magnetic site occupancy on one Fe sublattice and the itinerant interactions. A minimal model is employed to describe the excitation spectra and extract nearest-neighbor magnetic exchange interaction values. The temperature evolution of the excitations is probed, and correlations are shown to persist above Tc, indicative of low-dimensional magnetism.
Magnetic excitations in the quasi-2D ferromagnet Fe3-xGeTe2 measured with inelastic neutron scattering
We establish a strong-weak coupling duality between two types of free matrix models. In the large-N limit, the real-symmetric matrix model is dual to the quaternionic-real matrix model. Using the large-N conformally invariant collective field formulation, the duality is displayed in terms of the generators of the conformal group. The conformally invariant master Hamiltonian is constructed, and we conjecture that it corresponds to the hermitian matrix model.
Matrix-model dualities in the collective field formulation
Many radio pulsars have stable pulse profiles, but some exhibit mode changing, where the profile switches between two or more quasi-stable modes of emission. So far, these effects have only been seen in relatively slow pulsars, but we show here that the pulse profile of PSR B1957+20, a millisecond pulsar, switches between two modes, with a typical time between mode changes of only $1.7$ s (or $\sim\!1000$ rotations), the shortest observed so far. The two modes differ in both intensity and polarization, with relatively large differences in the interpulse and much more modest ones in the main pulse. We find that the changes in the interpulse precede those in the main pulse by $\sim\!25$ ms, placing an empirical constraint on the timescale over which mode changes occur. We also find that the properties of the giant pulses emitted by PSR B1957+20 are correlated with the mode of the regular emission: their rate and the rotational phase at which they are emitted both depend on mode. Furthermore, the energy distribution of the giant pulses emitted near the main pulse depends on mode as well. We discuss the ramifications for our understanding of the radio emission mechanisms as well as for pulsar timing experiments.
Mode changing and giant pulses in the millisecond pulsar PSR B1957+20
The Heisenberg Hamiltonian was employed to describe the variation of the energy of thick ferromagnetic films with second- and fourth-order anisotropies. At an angle of 1.36 degrees and anisotropies of 1.25, the energy is minimal for a thick sc(001) film with 1000 layers. The energy becomes minimal at an angle of 1.18 degrees and a fourth-order anisotropy of 1.15 for a thick bcc(001) film of the same thickness. According to these simulations, these lattices can easily be oriented in certain directions under the influence of particular values of the anisotropies. The energy varies with the second- and fourth-order anisotropies in a similar fashion for both types of lattices, gradually decreasing with both anisotropies over the range described here.
Magnetic anisotropy dependence of the energy of oriented thick ferromagnetic films
This paper presents an algorithm for estimating the weight of a maximum weighted matching by augmenting any estimation routine for the size of an unweighted matching. The algorithm is implementable in any streaming model, including dynamic graph streams. We also give the first constant-factor estimation of the maximum matching size in a dynamic graph stream for planar graphs (or any graph with bounded arboricity) using $\tilde{O}(n^{4/5})$ space, which also extends to weighted matching. Using previous results by Kapralov, Khanna, and Sudan (2014), we obtain a $\mathrm{polylog}(n)$ approximation for general graphs using $\mathrm{polylog}(n)$ space in random-order streams. In addition, we give a space lower bound of $\Omega(n^{1-\varepsilon})$ for any randomized algorithm estimating the size of a maximum matching up to a $1+O(\varepsilon)$ factor in adversarial streams.
Sublinear Estimation of Weighted Matchings in Dynamic Data Streams
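A hedged Python sketch of the generic weighted-to-unweighted reduction described above (the paper's exact combination rule and guarantee may differ): edges are bucketed into geometric weight levels, an unweighted matching-size estimator is run on each level of sufficiently heavy edges, and the scaled level estimates are summed. Weights are assumed to be at least 1, and a greedy maximal matching stands in for the streaming size estimator.

```python
def estimate_matching_weight(edges, size_estimator, eps=0.5):
    """edges: list of (u, v, w) with w >= 1;
    size_estimator: edge list -> estimated matching size."""
    if not edges:
        return 0.0
    w_max = max(w for _, _, w in edges)
    total, level = 0.0, 0
    while (1 + eps) ** level <= w_max:
        threshold = (1 + eps) ** level
        heavy = [(u, v) for u, v, w in edges if w >= threshold]
        total += eps * threshold * size_estimator(heavy)
        level += 1
    return total

def greedy_size(edge_list):
    """Greedy maximal matching: a 2-approximation of the matching size."""
    matched, size = set(), 0
    for u, v in edge_list:
        if u not in matched and v not in matched:
            matched |= {u, v}
            size += 1
    return size

# True maximum matching weight is 5.0; the estimate is correct up to the
# (1 + eps)-level discretization and the estimator's own approximation.
print(estimate_matching_weight([(0, 1, 4.0), (2, 3, 1.0)], greedy_size))
```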
Considering the coupling of the color $1 \otimes 1$ and $8 \otimes 8$ structures, we calculate the energy of the newly observed $X$(3915) as an $S$-wave $D^*\bar{D^*}$ state in the Bhaduri, Cohler, and Nogami quark model using the Gaussian Expansion Method. Due to the color coupling, a bound state of $D^*\bar{D^*}$ with $J^{PC}=0^{++}$ is found, in good agreement with the experimental data on the $X$(3915). Bound states of $B^*\bar{B^*}$ with $J^{PC}=0^{++}$ and $2^{++}$ are also predicted in this work.
Dynamical study of the $X$(3915) as a molecular $D^*\bar{D^*}$ state in a quark model
We consider the six-vertex model with Domain Wall Boundary Conditions. Our main interest is the study of the fluctuations of the extremal lattice path about the arctic curves. We address the problem through Monte Carlo simulations. At $\Delta = 0$, the fluctuations of the extremal path along any line parallel to the square diagonal were rigorously proven to follow the Tracy-Widom distribution. We provide strong numerical evidence that this is true also for other values of the anisotropy parameter $\Delta$ ($0\leq \Delta < 1$). We argue that the typical width of the fluctuations of the extremal path about the arctic curves scales as $N^{1/3}$ and provide a numerical estimate for the parameters of the scaling random variable.
Fluctuation of the phase boundary in the six-vertex model with Domain Wall Boundary Conditions: a Monte Carlo study
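A minimal numpy sketch of the scaling check described above: given measured fluctuation widths $w(N)$ at several system sizes, a log-log fit tests the conjectured $w \sim N^{1/3}$ growth. The numbers below are synthetic placeholders for Monte Carlo measurements.

```python
import numpy as np

N = np.array([64, 128, 256, 512, 1024])
# Synthetic widths obeying w ~ 0.8 * N**(1/3) with 2% measurement noise:
w = 0.8 * N ** (1 / 3) * (1 + 0.02 * np.random.default_rng(2).standard_normal(5))

slope, intercept = np.polyfit(np.log(N), np.log(w), 1)
print(slope)   # close to 1/3 if the KPZ-like scaling holds
```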
Many transformations in deep learning architectures are sparsely connected. When such transformations cannot be designed by hand, they can be learned, even through plain backpropagation, for instance in attention mechanisms. However, during learning, such sparse structures are often represented in a dense form, as we do not know beforehand which elements will eventually become non-zero. We introduce the adaptive sparse hyperlayer, a method for learning a sparse transformation that is parametrized sparsely: as index tuples with associated values. To overcome the lack of gradients from such a discrete structure, we introduce a method of randomly sampling connections and backpropagating over the randomly wired computation graph. To show that this approach allows us to train a model to competitive performance on real data, we use it to build two architectures. First, an attention mechanism for visual classification. Second, we implement a method for differentiable sorting: specifically, learning to sort unlabeled MNIST digits, given only the correct order.
Learning sparse transformations through backpropagation
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data to learn useful semantic representations. These pretext tasks are created solely from the input features, such as predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words in text; yet predicting this \textit{known} information helps in learning representations effective for downstream prediction tasks. We posit a mechanism exploiting the statistical connections between certain {\em reconstruction-based} pretext tasks that guarantees learning a good representation. Formally, we quantify how the approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task by simply training a linear layer on top of the learned representation. We prove that the linear layer yields a small approximation error even for complex ground-truth function classes, and will drastically reduce labeled sample complexity. Next, we show that a simple modification of our method leads to nonlinear CCA, analogous to the popular SimSiam algorithm, and establish similar guarantees for nonlinear CCA.
Predicting What You Already Know Helps: Provable Self-Supervised Learning
We have obtained high-resolution spectra and carried out a detailed elemental abundance analysis for a new sample of 899 F and G dwarf stars in the Solar neighbourhood. The results allow us to study and trace the stars' respective origins in a multi-dimensional space consisting of stellar ages, detailed elemental abundances, and full kinematic information. Here we briefly address the selection criteria and discuss how to define a thick-disc star. The results are discussed in the context of galaxy formation.
The Galactic thin and thick discs in the context of galaxy formation
This paper presents a critical review of particle production in a uniform electric field and in Schwarzschild-like spacetimes. Both problems can be reduced to solving an effective one-dimensional Schrödinger equation with a potential barrier. In the electric field case, the potential is that of an inverted oscillator, -x^2, while in the case of Schwarzschild-like spacetimes, the potential is of the form -1/x^2 near the horizon. The transmission and reflection coefficients can easily be obtained for both potentials. To describe particle production, these coefficients have to be suitably interpreted. In the case of the electric field, the standard Bogoliubov coefficients can be identified and the standard gauge-invariant result is recovered. However, for Schwarzschild-like spacetimes, such a tunnelling interpretation appears to be invalid. The Bogoliubov coefficients cannot be determined by an identification process similar to that invoked in the case of the electric field. The reason for this discrepancy appears to be that, in the tunnelling method, the effective potential near the horizon is singular and symmetric. We also provide a new and simple semi-classical method of obtaining Hawking's result in the (t,r) coordinate system of the usual standard Schwarzschild metric. We give a prescription whereby the singularity at the horizon can be regularised, with Hawking's result being recovered. This regularisation prescription contains a fundamental asymmetry that renders the two sides of the horizon dissimilar. Finally, we attempt to interpret particle production by the electric field as a tunnelling process between the two sectors of the Rindler metric.
Facets of Tunneling: Particle production in external fields
Forwarding table verification consists of checking the distributed data structure resulting from the forwarding tables of a network. A classical concern is the detection of loops. We study this problem in the context of software-defined networking (SDN), where forwarding rules can be arbitrary bitmasks (generalizing prefix matching) and where tables are updated by a centralized controller. Basic verification problems such as loop detection are NP-hard, and most previous work solves them with heuristics or SAT solvers. We follow a different approach based on computing a representation of the header classes, i.e. the sets of headers that match the same rules. This representation consists of a collection of representative header sets, at least one for each class, and can be computed centrally in time polynomial in the number of classes. Classical verification tasks can then be trivially solved by checking each representative header set. In general, the number of header classes can increase exponentially with header length, but it remains polynomial in the number of rules in the practical case where rules are built from predefined fields to which exact, prefix, or range matching is applied (e.g., IP/MAC addresses, TCP/UDP ports). We propose general techniques that work in polynomial time as long as the number of header classes is polynomial, and that make no specific assumptions about the structure of the sets associated with rules. The efficiency of our method relies on the fact that the data structure representing rules allows efficient computation of intersection, cardinality, and inclusion. Finally, we propose an algorithm to maintain such a representation in the presence of updates (i.e., rule insertion/update/removal). We also provide a local distributed algorithm for checking the absence of black holes and a proof labeling scheme for locally checking the absence of loops.
Forwarding Tables Verification through Representative Header Sets
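A minimal Python sketch of one concrete data structure with the properties the efficiency argument above requires: a header set stored as a ternary bitmask (mask, value), where a set bit in mask fixes that header bit and a clear bit is a wildcard. Intersection, emptiness, and cardinality then reduce to constant-time bit operations; the class and method names are illustrative.

```python
class TernarySet:
    """Set of headers matching a bitmask rule: bit i is fixed to value's
    bit where mask has a 1, and is a wildcard where mask has a 0."""
    def __init__(self, mask, value):
        self.mask, self.value = mask, value & mask

    def intersect(self, other):
        common = self.mask & other.mask
        if (self.value ^ other.value) & common:
            return None  # fixed bits disagree: empty intersection
        return TernarySet(self.mask | other.mask, self.value | other.value)

    def cardinality(self, width):
        return 1 << (width - bin(self.mask).count("1"))

# rule "10**" intersected with rule "1*0*" (4-bit headers) gives "100*":
a = TernarySet(0b1100, 0b1000)
b = TernarySet(0b1010, 0b1000)
c = a.intersect(b)
assert (c.mask, c.value) == (0b1110, 0b1000) and c.cardinality(4) == 2
```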
Improving the film quality in the synthesis of large-area hexagonal boron nitride films (h-BN) for two-dimensional material devices remains a great challenge. The measurement of electrical breakdown dielectric strength (EBD) is one of the most important methods to elucidate the insulating quality of h-BN. In this work, the EBD of high quality exfoliated single-crystal h-BN was investigated using three different electrode structures under different environmental conditions to determine the ideal electrode structure and environment for EBD measurement. A systematic investigation revealed that EBD is not sensitive to contact force or electrode area but strongly depends on the relative humidity during measurement. Once the measurement environment is properly managed, it was found that the EBD values are consistent within experimental error regardless of the electrode structure, which enables the evaluation of the crystallinity of synthesized h-BN at the microscopic and macroscopic level by utilizing the three different electrode structures properly for different purposes.
Comparison of device structures for the dielectric breakdown measurement of hexagonal boron nitride
I propose a measure for the cage effect in glass-forming systems. A binary mixture of hard disks is numerically studied as a model glass former. A network is constructed on the basis of the colliding pairs of disks. A rigidity matrix is formed from the isostatic (rigid) sub-network corresponding to a cage. The determinant of the matrix changes its sign when an uncaging event occurs. The time evolution of the number of uncaging events is determined numerically. I find that there is a gap in the uncaging timescales between cages involving different numbers of disks. Caging of one disk by two neighboring disks persists for a longer time compared with other cages involving more than one disk. This gap causes two-step relaxation of the system.
Binary mixture of hard disks as a model glass former: Caging and uncaging
A multispeckle technique for efficiently measuring correctly ensemble-averaged intensity autocorrelation functions of light scattered from non-ergodic and/or non-stationary systems is described. The method employs a CCD camera as a multispeckle light detector and a computer-based correlator, and permits the simultaneous calculation of up to 500 correlation functions, where each correlation function is started at a different time. The correlation functions are calculated in real time and are referenced to a unique starting time. The multispeckle nature of the CCD camera detector means that a true ensemble average is calculated; no time averaging is necessary. The technique thus provides a "snapshot" of the dynamics, making it particularly useful for non-stationary systems where the dynamics are changing with time. Delay times spanning the range from 1 ms to 1000 s are readily achieved with this method. The technique is demonstrated in the multiple scattering limit where diffusing-wave spectroscopy theory applies. The technique can also be combined with a recently developed two-cell technique that can measure faster decay times; the combined technique can measure delay times from 10 ns to 1000 s. The method is particularly well suited for studying aging processes in soft glassy materials, which exhibit both short and long relaxation times, non-ergodic dynamics, and slowly evolving transient behavior.
Multispeckle diffusing-wave spectroscopy: a tool to study slow relaxation and time-dependent dynamics
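A minimal numpy sketch of the pixel-ensemble average at the heart of the method: with a stack of CCD frames I[t, pixel], the intensity autocorrelation g2(t0, tau) = <I(t0) I(t0+tau)> / (<I(t0)> <I(t0+tau)>) is averaged over speckles (pixels) rather than over time, so no ergodicity is assumed and every correlation function is referenced to its own starting time t0. Array shapes and names are illustrative.

```python
import numpy as np

def g2_multispeckle(frames, t0, tau):
    """frames: (n_times, n_pixels) CCD intensities; average over pixels."""
    I0, It = frames[t0], frames[t0 + tau]
    return np.mean(I0 * It) / (np.mean(I0) * np.mean(It))

frames = np.random.default_rng(0).random((1000, 64 * 64))  # synthetic stand-in
print(g2_multispeckle(frames, t0=0, tau=10))  # ~1 for uncorrelated speckle
```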
This paper describes the dialog robot system designed by Team Irisapu for the preliminary round of the Dialogue Robot Competition 2022 (DRC2022). Our objective was to design a hospitable travel-agent robot. The system we developed was ranked 8th out of 13 systems in the preliminary round of the competition, but our robot received high marks for its naturalness and likeability. Our next challenge is to create a system that can provide more useful information to users.
Hospitable Travel Agent Dialogue Robot: Team Irisapu Project Description for DRC2022
In this article we prove explicit formulae for the number of non-isomorphic cluster-tilted algebras of type \tilde{A}_n in the derived equivalence classes. In particular, we obtain the number of elements in the mutation classes of quivers of type \tilde{A}_n. As a by-product, this provides an alternative proof for the number of quivers of Dynkin type D_n which was first determined by Buan and Torkildsen.
Counting the number of elements in the mutation classes of \tilde{A}_n-quivers
We show that a coupling of non-colliding simple random walkers on the complete graph on $n$ vertices can include at most $n - \log n$ walkers. This improves the only previously known upper bound of $n-2$ due to Angel, Holroyd, Martin, Wilson, and Winkler ({\it Electron.~Commun.~Probab.~18}, 2013). The proof considers couplings of i.i.d.~sequences of Bernoulli random variables satisfying a similar avoidance property, for which there is separate interest. Our bound in this setting should be closer to optimal.
An upper bound on the size of avoidance couplings
The efficient and accurate simulation of material systems with defects using atomistic-to-continuum (a/c) coupling methods is a topic of considerable interest in the field of computational materials science. To achieve the desired balance between accuracy and computational efficiency, the use of a posteriori analysis and adaptive algorithms is critical. In this work, we present a rigorous a posteriori error analysis for three typical blended a/c coupling methods: the blended energy-based quasi-continuum (BQCE) method, the blended force-based quasi-continuum (BQCF) method, and the atomistic/continuum blending with ghost force correction (BGFC) method. We employ first- and second-order finite element methods (and potentially higher-order methods) to discretize the Cauchy-Born model in the continuum region. The resulting error estimator provides both an upper bound on the true approximation error and a lower bound up to a theory-based truncation indicator, ensuring its reliability and efficiency. Moreover, we propose an a posteriori analysis for the energy error. We have designed and implemented a corresponding adaptive mesh refinement algorithm for two typical examples of crystalline defects. In both numerical experiments, we observe optimal convergence rates with respect to degrees of freedom when compared to a priori error estimates.
A Posteriori Analysis and Adaptive Algorithms for Blended Type Atomistic-to-Continuum Coupling with Higher-Order Finite Elements
Subsampling is effective in Knowledge Graph Embedding (KGE) for reducing overfitting caused by the sparsity of Knowledge Graph (KG) datasets. However, current subsampling approaches consider only the frequencies of queries that consist of entities and their relations. Thus, existing subsampling potentially underestimates the appearance probabilities of infrequent queries even if the frequencies of their entities or relations are high. To address this problem, we propose Model-based Subsampling (MBS) and Mixed Subsampling (MIX), which estimate these appearance probabilities through the predictions of KGE models. Evaluation results on the datasets FB15k-237, WN18RR, and YAGO3-10 show that our proposed subsampling methods improve the KG completion performance of popular KGE models: RotatE, TransE, HAKE, ComplEx, and DistMult.
Model-based Subsampling for Knowledge Graph Completion
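A hedged numpy sketch of the contrast: both schemes turn a per-query score into a sampling weight, but where frequency-based subsampling uses raw counts (which can badly underestimate rare-but-plausible queries), MBS substitutes appearance probabilities predicted by a trained KGE model. The power-law weighting with a temperature and the normalization below are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def subsample_weights(scores, temperature=0.5):
    """Turn per-query scores into normalized sampling weights.
    scores: raw counts (frequency-based) or model probabilities (MBS)."""
    w = np.asarray(scores, dtype=float) ** (-temperature)
    return w / w.sum()

counts  = np.array([100.0, 3.0, 1.0])   # observed query frequencies
model_p = np.array([0.30, 0.10, 0.08])  # appearance probabilities from a KGE model
print(subsample_weights(counts))        # rare queries get very large weights
print(subsample_weights(model_p))       # MBS tempers the rare-query weights
```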
We study the generation of quantum entanglement between two giant atoms coupled to a one-dimensional waveguide. Since each giant atom interacts with the waveguide at two separate coupling points, there exist three different coupling configurations in the two-atom waveguide system: separated, braided, and nested couplings. Within the Wigner-Weisskopf framework for single coupling points, the quantum master equations governing the evolution of the two giant atoms are obtained. For each coupling configuration, the entanglement dynamics of the two giant atoms is studied, including the cases of two different atomic initial states: single- and double-excitation states. It is shown that the generated entanglement depends on the coupling configuration, the phase shift, and the atomic initial state. For the single-excitation initial state, steady-state entanglement exists for all three couplings due to the appearance of a dark state. For the double-excitation initial state, entanglement sudden birth is observed by adjusting the phase shift. In particular, the maximal entanglement for the nested coupling is about one order of magnitude larger than those of the separated and braided couplings. In addition, the influence of the atomic frequency detuning on the entanglement generation is studied. This work can be utilized for the generation and control of atomic entanglement in quantum networks based on giant-atom waveguide-QED systems, which have wide potential applications in quantum information processing.
Generation of two-giant-atom entanglement in waveguide-QED systems
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally -- they are, in fact, mostly used only as a pre-training phase over IID data. In this work we investigate self-supervised methods in continual learning regimes without any replay mechanism. We show that naive functional regularization, also known as feature distillation, leads to lower plasticity and limits continual learning performance. Instead, we propose Projected Functional Regularization in which a separate temporal projection network ensures that the newly learned feature space preserves information of the previous one, while at the same time allowing for the learning of new features. This prevents forgetting while maintaining the plasticity of the learner. Comparison with other incremental learning approaches applied to self-supervision demonstrates that our method obtains competitive performance in different scenarios and on multiple datasets.
Continually Learning Self-Supervised Representations with Projected Functional Regularization
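A hedged PyTorch sketch of the projected functional regularizer: instead of distilling features directly (which the abstract argues limits plasticity), a small temporal projection network g maps new features onto the old feature space, so the backbone can keep learning while g absorbs the alignment. The projector architecture and the negative-cosine distance are illustrative choices, not necessarily the paper's exact ones.

```python
import torch
import torch.nn as nn

def pfr_loss(f_new, f_old, projector):
    """Align projected new features with frozen old ones (negative cosine)."""
    z = projector(f_new)
    return -nn.functional.cosine_similarity(z, f_old.detach(), dim=-1).mean()

dim = 128
projector = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
f_new = torch.randn(32, dim, requires_grad=True)   # current backbone features
f_old = torch.randn(32, dim)                       # previous-task backbone features
loss = pfr_loss(f_new, f_old, projector)           # added to the SSL objective
loss.backward()
```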
Following the observational evidence for cosmic acceleration, which may exclude the possibility that the universe will recollapse to a second singularity, we review alternative scenarios for its future evolution. Although the de Sitter asymptotic state is still an option, other asymptotic states that allow new types of singularities, such as the Big Rip (due to phantom matter) and sudden future singularities, are also admissible and are reviewed in detail. The reality of these singularities, as constrained by observational characteristics of the universe's expansion, is also examined and discussed at length.
Future state of the Universe
To study spacelike surfaces of codimension two in the Lorentz-Minkowski space $\Bbb R^{n+1}_1,$ we construct a pair of maps with values in $HS_r:=H_+^n(\textbf v,1)\cap \{x_{n+1}=r\},$ called the $\textbf n_r^{\pm}$-Gauss maps. It is shown that they are well-defined and useful in practice for studying flat as well as umbilic spacelike surfaces of codimension two in $\Bbb R^{n+1}_1.$
$HS_{r}$-valued Gauss maps and umbilic spacelike surfaces of codimension two
Joret, Micek, Milans, Trotter, Walczak, and Wang recently asked if there exists a constant $d$ such that if $P$ is a poset with cover graph of $P$ of pathwidth at most $2$, then $\dim(P)\leq d$. We answer this question in the affirmative by showing that $d=17$ is sufficient. We also show that if $P$ is a poset containing the standard example $S_5$ as a subposet, then the cover graph of $P$ has treewidth at least $3$.
Posets with cover graph of pathwidth two have bounded dimension
We show how light can be controllably transported by light at microscale dimensions. We design a miniature device consisting of a short segment of an optical fiber coupled to transversely oriented input-output microfibers. A whispering gallery soliton is launched from the first microfiber into the fiber segment and slowly propagates along its mm-scale length. The soliton loads and unloads optical pulses at designated input-output microfibers. The speed of the soliton and its propagation direction are controlled by a dramatically small nanoscale variation of the effective fiber radius, which is nonetheless feasible to introduce either permanently or all-optically.
Controlled transportation of light by light at the microscale
Multi-agent control problems constitute an interesting area of application for deep reinforcement learning models with continuous action spaces. Such real-world applications, however, typically come with critical safety constraints that must not be violated. In order to ensure safety, we enhance the well-known multi-agent deep deterministic policy gradient (MADDPG) framework by adding a safety layer to the deep policy network. In particular, we extend the idea of linearizing the single-step transition dynamics, as was done for single-agent systems in Safe DDPG (Dalal et al., 2018), to multi-agent settings. We additionally propose to circumvent infeasibility problems in the action correction step using soft constraints (Kerrigan & Maciejowski, 2000). Results from the theory of exact penalty functions can be used to guarantee constraint satisfaction of the soft constraints under mild assumptions. We empirically find that the soft formulation achieves a dramatic decrease in constraint violations, making safety available even during the learning procedure.
Safe Deep Reinforcement Learning for Multi-Agent Systems with Continuous Action Spaces
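A hedged numpy sketch of a single-constraint safety layer in the style of Safe DDPG: the linearized safety signal c(s) + g(s)ᵀa must stay below a budget C, and the policy's action is minimally corrected in the L2 sense, for which a closed form exists. The paper's soft-constraint variant instead penalizes violations, keeping the correction step well-posed even when no action is feasible; the hard closed form is shown for concreteness.

```python
import numpy as np

def correct_action(a, g, c, C):
    """Minimal L2 correction so that c + g @ a' <= C (single constraint)."""
    lam = max(0.0, (c + g @ a - C) / (g @ g + 1e-8))
    return a - lam * g

a = np.array([0.8, -0.2])   # action proposed by the (MA)DDPG policy
g = np.array([1.0, 0.5])    # learned linearization of the safety signal
a_safe = correct_action(a, g, c=0.5, C=0.6)
print(a_safe, 0.5 + g @ a_safe)   # corrected action meets the budget exactly
```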
We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb R^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals, and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers $\mathbf A_0$ when $\mathbf X_0$ has $O(n)$ nonzeros per column, under suitable probability model for $\mathbf X_0$. In contrast, prior results based on efficient algorithms provide recovery guarantees when $\mathbf X_0$ has only $O(n^{1-\delta})$ nonzeros per column for any constant $\delta \in (0, 1)$. Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show this apparently hard problem is tractable, we first provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no "spurious" local minima. This particular geometric structure allows us to design a Riemannian trust region algorithm over the sphere that provably converges to one local minimizer with an arbitrary initialization, despite the presence of saddle points. The geometric approach we develop here may also shed light on other problems arising from nonconvex recovery of structured signals.
Complete Dictionary Recovery over the Sphere
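A toy numpy sketch of the manifold-optimization idea at the center of the pipeline: recover one row of the sparse coefficient structure by minimizing a smooth l1 surrogate f(q) = mean(mu * log cosh(qᵀY / mu)) over the unit sphere. The paper proves the landscape is benign and uses a Riemannian trust-region method; plain retraction-based Riemannian gradient descent, with illustrative step size and surrogate, is shown here for brevity.

```python
import numpy as np

def riemannian_descent(Y, mu=0.1, step=0.5, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(Y.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(iters):
        z = q @ Y
        egrad = Y @ np.tanh(z / mu) / Y.shape[1]  # Euclidean gradient of f
        rgrad = egrad - (q @ egrad) * q           # project onto tangent space
        q -= step * rgrad
        q /= np.linalg.norm(q)                    # retract back to the sphere
    return q

# Toy data Y = A0 X0 with sparse X0; a recovered q should make q @ Y sparse.
rng = np.random.default_rng(1)
A0 = np.linalg.qr(rng.standard_normal((10, 10)))[0]
X0 = rng.standard_normal((10, 2000)) * (rng.random((10, 2000)) < 0.1)
q = riemannian_descent(A0 @ X0)
```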
For many spin-0 target nuclei, neutron capture measurements yield information on level densities at the neutron separation energy. The average photon width has also been determined from capture data, as have Maxwellian average cross sections for the energy range of unresolved resonances. This data set thus poses a challenging test of phenomenological prescriptions for the prediction of radiative processes. An important ingredient for such calculations is the photon strength function, for which a parameterization was proposed using a fit to giant dipole resonance shapes on the basis of theoretically determined ground-state deformations, including triaxiality. Deviations from spherical and axial symmetry also influence level densities, and it is suggested to use a combined parameterization for both level density and photon strength. The formulae presented give a good description of the data for low-spin capture into 124 nuclei with $72<A<244$, and only very few global parameters have to be adjusted when the predetermined information on the ground-state shapes of the nuclei involved is accounted for.
Impact of Triaxiality on the Emission and Absorption of Neutrons and Gamma Rays in Heavy Nuclei
The lists of facets -- $298,592$ in $86$ orbits -- and of extreme rays -- $242,695,427$ in $9,003$ orbits -- of the hypermetric cone $HYP_8$ are computed. The first generalization considered is the hypermetric polytope $HYPP_n$, for which we give general algorithms and a description for $n\le 8$. We then briefly consider generalizations to simplices of volume greater than $1$, hypermetrics on graphs, and infinite-dimensional hypermetrics.
The hypermetric cone on $8$ vertices and some generalizations
Many machine-learning methods for sound event detection (SED) regard a segmented time frame as one data sample for model training. However, the durations of sound events vary greatly depending on the sound event class, e.g., the sound event ``fan'' has a long duration, while the sound event ``mouse clicking'' is instantaneous. This difference in duration between sound event classes causes a serious data imbalance problem in SED. In this paper, we propose a method for SED using a duration-robust loss function, which can focus model training on sound events of short duration. The proposed method exploits a relationship between the duration of a sound event and the ease/difficulty of model training. In particular, many sound events of long duration (e.g., the sound event ``fan'') are stationary sounds, which have little variation in their acoustic features, so their model training is easy. Meanwhile, some sound events of short duration (e.g., the sound event ``object impact'') have more than one audio pattern, such as attack, decay, and release parts. We thus apply class-wise reweighting to the binary cross-entropy loss function depending on the ease/difficulty of model training. Evaluation experiments conducted on the TUT Sound Events 2016/2017 and TUT Acoustic Scenes 2016 datasets show that the proposed method improves the detection performance of sound events by 3.15 and 4.37 percentage points in macro- and micro-F-scores, respectively, compared with a conventional method using the binary cross-entropy loss function.
Sound Event Detection Using Duration Robust Loss Function
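A hedged PyTorch sketch of a class-wise reweighted binary cross-entropy of the kind described: frame-level predictions and targets have shape (batch, frames, classes), and one weight per event class, larger for short-duration hard-to-train classes, rescales the loss. The fixed example weights are illustrative; the paper derives the reweighting from training ease/difficulty rather than setting it by hand.

```python
import torch

def duration_robust_bce(p, y, w, eps=1e-7):
    """Class-wise reweighted BCE; p, y: (batch, frames, classes), w: (classes,)."""
    bce = -(y * torch.log(p + eps) + (1 - y) * torch.log(1 - p + eps))
    return (w * bce).mean()   # w broadcasts across batch and frames

p = torch.rand(8, 500, 3)                    # sigmoid outputs per frame
y = (torch.rand(8, 500, 3) > 0.9).float()    # frame-level activity targets
w = torch.tensor([0.5, 1.0, 3.0])            # heavier weight for short events
print(duration_robust_bce(p, y, w).item())
```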
The constraint reaction force of ideal nonholonomic constraints in time-dependent mechanics on a configuration bundle $Q\to R$ is obtained. Using the vertical extension of Hamiltonian formalism to the vertical tangent bundle $VQ$ of $Q\to R$, the Hamiltonian of a nonholonomic constrained system is constructed.
Nonholonomic Constraints in Time-Dependent Mechanics
We present a compact analytic formula for the two-loop six-particle MHV remainder function (equivalently, the two-loop light-like hexagon Wilson loop) in N = 4 supersymmetric Yang-Mills theory in terms of the classical polylogarithm functions Li_k with cross-ratios of momentum twistor invariants as their arguments. In deriving our result we rely on results from the theory of motives.
Classical Polylogarithms for Amplitudes and Wilson Loops
Most existing criteria derived from progenitor properties of core-collapse supernovae are not very accurate in predicting explosion outcomes. We present a novel look at identifying the explosion outcome of core-collapse supernovae using a machine learning approach. Informed by a sample of 100 2D axisymmetric supernova simulations evolved with Fornax, we train and evaluate a random forest classifier as an explosion predictor. Furthermore, we examine physics-based feature sets including the compactness parameter, the Ertl condition, and a newly developed set that characterizes the silicon/oxygen interface. With over 1500 supernovae progenitors from 9$-$27 M$_{\odot}$, we additionally train an auto-encoder to extract physics-agnostic features directly from the progenitor density profiles. We find that the density profiles alone contain meaningful information regarding their explodability. Both the silicon/oxygen and auto-encoder features predict explosion outcome with $\approx$90\% accuracy. In anticipation of much larger multi-dimensional simulation sets, we identify future directions in which machine learning applications will be useful beyond explosion outcome prediction.
Applications of Machine Learning to Predicting Core-collapse Supernova Explosion Outcomes
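A minimal scikit-learn sketch of the classification setup: each progenitor is reduced to a feature vector (e.g., compactness-style quantities, silicon/oxygen-interface descriptors, or auto-encoder latents) paired with a binary explosion outcome from simulation, and a random forest is cross-validated as the explosion predictor. The synthetic data below merely stand in for the Fornax-informed training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))               # stand-in progenitor features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in explosion outcomes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # explosion-prediction accuracy
```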
We present Deep-n-Cheap -- an open-source AutoML framework to search for deep learning models. This search includes both architecture and training hyperparameters, and supports convolutional neural networks and multi-layer perceptrons. Our framework is targeted for deployment on both benchmark and custom datasets, and as a result, offers a greater degree of search space customizability as compared to a more limited search over only pre-existing models from literature. We also introduce the technique of 'search transfer', which demonstrates the generalization capabilities of the models found by our framework to multiple datasets. Deep-n-Cheap includes a user-customizable complexity penalty which trades off performance with training time or number of parameters. Specifically, our framework results in models offering performance comparable to state-of-the-art while taking 1-2 orders of magnitude less time to train than models from other AutoML and model search frameworks. Additionally, this work investigates and develops various insights regarding the search process. In particular, we show the superiority of a greedy strategy and justify our choice of Bayesian optimization as the primary search methodology over random / grid search.
Deep-n-Cheap: An Automated Search Framework for Low Complexity Deep Learning
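A hedged sketch of a complexity-penalized search objective of the kind described: candidates are ranked by validation performance traded off against training cost through a user-chosen penalty weight. The functional form is illustrative; Deep-n-Cheap's actual penalty may differ, and w_c = 0 recovers a pure performance search.

```python
def search_objective(val_acc, train_time, ref_time, w_c=0.1):
    """Lower is better: performance term plus weighted complexity penalty."""
    return -val_acc + w_c * (train_time / ref_time)

# Under w_c = 0.1, a 92%-accurate model that trains 10x faster than the
# reference beats a 93%-accurate model running at the reference speed:
print(search_objective(0.92, 10.0, 100.0))   # -0.91
print(search_objective(0.93, 100.0, 100.0))  # -0.83
```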
We show that the combination of two achiral components - atomic or molecular target plus a circularly polarized photon - can yield chirally structured photoelectron angular distributions. For photoionization of CO, the angular distribution of carbon K-shell photoelectrons is chiral when the molecular axis is neither perpendicular nor (anti-)parallel to the light propagation axis. In photo-double-ionization of He, the distribution of one electron is chiral, if the other electron is oriented like the molecular axis in the former case and if the electrons are distinguishable by their energy. In both scenarios, the circularly polarized photon defines a plane with a sense of rotation and an additional axis is defined by the CO molecule or one electron. This is sufficient to establish an unambiguous coordinate frame of well-defined handedness. To produce a chirally structured electron angular distribution, such a coordinate frame is necessary, but not sufficient. We show that additional electron-electron interaction or scattering processes are needed to create the chiral angular distribution.
Chiral photoelectron angular distributions from ionization of achiral atomic and molecular species
The asymptotic safety scenario in quantum gravity is reviewed, according to which a renormalizable quantum theory of the gravitational field is feasible which reconciles asymptotically safe couplings with unitarity. All presently known evidence is surveyed: (a) from the $2+\epsilon$ expansion, (b) from the perturbation theory of higher-derivative gravity theories and a `large N' expansion in the number of matter fields, (c) from the 2-Killing-vector reduction, and (d) from truncated flow equations for the effective average action. Special emphasis is given to the role of perturbation theory as a guide to `asymptotic safety'. Further, it is argued that, as a consequence of the scenario, the self-interactions appear two-dimensional in the extreme ultraviolet. Two appendices discuss the distinct roles of ultraviolet renormalization in perturbation theory and in the flow equation formalism.
The Asymptotic Safety Scenario in Quantum Gravity -- An Introduction
In this paper, we study a nonlocal evolution system. We apply abstract results from bifurcation theory to obtain the existence of coexistence states. Their stability is investigated as well.
Existence and Stability of Coexistence States for a Nonlocal Evolution System
We empirically characterize the performance of discriminative and generative LSTM models for text classification. We find that although RNN-based generative models are more powerful than their bag-of-words ancestors (e.g., they account for conditional dependencies across words in a document), they have higher asymptotic error rates than discriminatively trained RNN models. However we also find that generative models approach their asymptotic error rate more rapidly than their discriminative counterparts---the same pattern that Ng & Jordan (2001) proved holds for linear classification models that make more naive conditional independence assumptions. Building on this finding, we hypothesize that RNN-based generative classification models will be more robust to shifts in the data distribution. This hypothesis is confirmed in a series of experiments in zero-shot and continual learning settings that show that generative models substantially outperform discriminative models.
Generative and Discriminative Text Classification with Recurrent Neural Networks
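A minimal PyTorch sketch of the generative side of this comparison: one class-conditional LSTM language model per class scores log p(x | y), and classification takes argmax over log p(x | y) + log p(y). Training code is omitted, and the architecture details are illustrative rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

class ClassLM(nn.Module):
    """LSTM language model for one class; scores log p(x | y)."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def log_prob(self, x):                        # x: (1, T) token ids
        h, _ = self.lstm(self.emb(x[:, :-1]))
        logp = self.out(h).log_softmax(-1)
        return logp.gather(-1, x[:, 1:, None]).sum()

def classify(x, lms, log_priors):
    scores = [lm.log_prob(x) + lp for lm, lp in zip(lms, log_priors)]
    return int(torch.stack(scores).argmax())

lms = [ClassLM(vocab=100) for _ in range(2)]      # one (trained) LM per class
x = torch.randint(0, 100, (1, 12))
print(classify(x, lms, log_priors=[torch.tensor(0.5).log()] * 2))
```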
The eikonal direct-reaction model, as used in spectroscopic studies of intermediate-energy nucleon-removal reactions on light target nuclei, is considered in the case of a proton target and applied to neutron removal from $^{29}$Ne at 240 MeV/nucleon. The computed cross sections and their sensitivities are compared with an earlier detailed analysis of carbon-target data. The nuclear structure input, for the $^{29}$Ne ground state and $^{28}$Ne final states, is that deduced from the carbon-target analysis. The comparisons quantify the sensitivity of the two reactions to the angular momenta and binding energies of the active valence orbitals, showing the carbon target to be relatively more efficient for removals from weakly bound, low-$\ell$, halo-like orbitals. Probing this sensitivity experimentally would provide useful tests of these predictions and of the model's description of the reaction mechanism.
Single-nucleon removal cross sections on nucleon and nuclear targets
Although KamLAND apparently rules out resonant spin-flavor precession (RSFP) as an explanation of the solar neutrino deficit, the solar neutrino fluxes in the Cl and Ga experiments appear to vary with solar rotation. Added to this evidence, summarized here, a power spectrum analysis of the Super-Kamiokande data reveals significant variation in the flux matching a dominant rotation rate observed in the solar magnetic field in the same time period. Three frequency peaks, all related to this rotation rate, can be explained quantitatively. A Super-Kamiokande paper reported no time variation of the flux, but showed the same peaks, there interpreted as statistically insignificant due to an inappropriate analysis. This modulation is small (7%) in the Super-Kamiokande energy region (and below the sensitivity of the Super-Kamiokande analysis) and is consistent with RSFP as a subdominant neutrino process in the convection zone. The data display effects that correspond to solar-cycle changes in the magnetic field, typical of the convection zone. This subdominant process requires new physics: a large neutrino transition magnetic moment and a light sterile neutrino, since an effect of this amplitude occurring in the convection zone cannot be achieved with the three known neutrinos. It does, however, resolve current problems in fitting all experimental estimates of the mean neutrino flux, and is compatible with the extensive evidence for solar neutrino flux variability.
Evidence for Solar Neutrino Flux Variability and its Implications
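A minimal scipy sketch of the kind of power-spectrum analysis referred to above: the Lomb-Scargle periodogram handles unevenly sampled flux series. The synthetic data hide a 7% modulation at an assumed rotation frequency of 13.9 cycles/year in noise; the frequency, sampling, and amplitudes are illustrative, not the actual solar values or data.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 500))          # years, uneven sampling
flux = 1.0 + 0.07 * np.sin(2 * np.pi * 13.9 * t) + 0.1 * rng.standard_normal(500)

freqs = np.linspace(1.0, 20.0, 2000)          # trial frequencies, cycles/year
power = lombscargle(t, flux - flux.mean(), 2 * np.pi * freqs)  # angular freqs
print(freqs[np.argmax(power)])                # recovers the ~13.9 / yr peak
```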
Component separation for the Planck HFI data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution estimate of the dust emission. In this paper we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7 and 7.2 per cent at the $1\sigma$ level across the full sky for the thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. Comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise outperforms the GNILC-like method, with increasing success as the signal-to-noise ratio worsens.
Sparse estimation of model-based diffuse thermal dust emission
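A minimal scipy sketch of the per-pixel parametric model behind the three recovered quantities: a modified blackbody $I_\nu = \tau_{353}\,(\nu/353\,\mathrm{GHz})^{\beta} B_\nu(T)$ fitted to multifrequency intensities by least squares. premise adds its sparsity machinery on top of this; plain curve fitting, with illustrative bands and arbitrary intensity units, is shown for clarity.

```python
import numpy as np
from scipy.optimize import curve_fit

h, kB, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):                       # B_nu(T), arbitrary intensity units
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T)) * 1e20

def mbb(nu, tau353, beta, T):            # modified blackbody dust model
    return tau353 * (nu / 353e9) ** beta * planck(nu, T)

nu = np.array([217e9, 353e9, 545e9, 857e9])          # Planck HFI bands
rng = np.random.default_rng(3)
I_obs = mbb(nu, 1e-4, 1.6, 19.0) * (1 + 0.01 * rng.standard_normal(4))
popt, _ = curve_fit(mbb, nu, I_obs, p0=(1e-4, 1.5, 20.0))
print(popt)                              # recovers ~ (1e-4, 1.6, 19.0)
```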