Many imaging problems, such as total variation reconstruction of X-ray computed tomography (CT) and positron-emission tomography (PET), are solved via a convex optimization problem with near-circulant, but not actually circulant, linear systems. The popular methods to solve these problems, alternating direction method of multipliers (ADMM) and primal-dual hybrid gradient (PDHG), do not directly utilize this structure. Consequently, ADMM requires a costly matrix inversion as a subroutine, and PDHG takes too many iterations to converge. In this paper, we present near-circulant splitting (NCS), a novel splitting method that leverages the near-circulant structure. We show that NCS can converge with an iteration count close to that of ADMM, while paying a computational cost per iteration close to that of PDHG. Through experiments on a CUDA GPU, we empirically validate the theory and demonstrate that NCS can effectively utilize the parallel computing capabilities of CUDA.
Splitting with Near-Circulant Linear Systems: Applications to Total Variation CT and PET
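The abstract above does not spell out the NCS iteration, but the structure such methods exploit is easy to illustrate: a circulant matrix is diagonalized by the DFT, so a circulant linear system can be solved in O(n log n) with FFTs. A minimal background sketch in Python/NumPy, with a randomly generated circulant matrix of my choosing, assuming nothing about the actual NCS update:

```python
import numpy as np

# A circulant matrix is fully determined by its first column c, and C @ x
# equals the circular convolution of c with x.  Hence C x = b is solved by
# pointwise division in the Fourier domain.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
c[0] += 10.0                      # make C comfortably invertible
b = rng.standard_normal(n)

x = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

# Cross-check against the dense solve.
C = np.column_stack([np.roll(c, k) for k in range(n)])
assert np.allclose(C @ x, b)
```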
A brief history is given of the factor 2, starting in the most elementary considerations of geometry and kinematics of uniform acceleration, and moving to relativity, quantum mechanics and particle physics. The basic argument is that in all the significant cases in which the factor 2 or 1/2 occurs in fundamental physics, whether classical, quantum or relativistic, the same physical operation is taking place.
The factor 2 in fundamental physics
We present a systematic investigation of jet production at hadron colliders from a phenomenological point of view, with the dual aim of providing a validation of theoretical calculations and guidance to future determinations of parton distributions (PDFs). We account for all available inclusive jet and dijet production measurements from ATLAS and CMS at 7 and 8 TeV by including them in a global PDF determination, and comparing to theoretical predictions at NNLO QCD supplemented by electroweak (EW) corrections. We assess the compatibility of the PDFs, specifically the gluon, obtained before and after inclusion of the jet data. We compare the single-inclusive jet and dijet observables in terms of perturbative behaviour upon inclusion of QCD and EW corrections, impact on the PDFs, and global fit quality. In the single-inclusive case, we also investigate the role played by different scale choices and the stability of the results upon changes in modelling of the correlated experimental systematics.
Phenomenology of NNLO jet production at the LHC and its impact on parton distributions
In this article, we study the deformation theory of locally free sheaves and Hitchin pairs over a nodal curve. As a special case, the infinitesimal deformations of these objects give the tangent spaces of the corresponding moduli spaces, which can be used to calculate their dimensions. We show that the deformation of locally free sheaves and Hitchin pairs over a nodal curve is equivalent to the deformation of generalized parabolic bundles and generalized parabolic Hitchin pairs over the normalization of the nodal curve, respectively.
Deformation of Locally Free Sheaves and Hitchin Pairs over Nodal Curve
On March 20, 2015, we obtained 159 spectra of the Sun as a star with the solar telescope and the FTS at the Institut f\"ur Astrophysik G\"ottingen; 76 of these spectra were taken during the partial solar eclipse. We obtained RVs using $I_2$ as wavelength reference and determined the RM curve with a peak-to-peak amplitude of almost 1.4 km s$^{-1}$ at a typical RV precision better than 1 m s$^{-1}$. We modeled disk-integrated solar RVs using surface velocities, limb darkening, and information about convective blueshift from 3D magneto-hydrodynamic simulations. We confirm that convective blueshift is crucial to understand solar RVs during an eclipse. Our best model reproduced the observations to within a relative precision of 10% with residuals less than 30 m s$^{-1}$. We cross-checked parameterizations of velocity fields using a Dopplergram from the Solar Dynamics Observatory and conclude that disk integration of the Dopplergram does not provide the correct information about convective blueshift necessary for m s$^{-1}$ RV work. As the main limitation for modeling RVs during eclipses, we identified our limited knowledge of convective blueshift and line shape as functions of solar limb angle. We suspect that our model line profiles are too shallow at limb angles larger than $\mu = 0.6$, resulting in incorrect weighting of the velocities across the solar disk. Alternative explanations, such as suppression of convection in magnetic areas or undiscovered systematics during the eclipse observations, cannot be excluded. Accurate observations of solar line profiles across the solar disk are suggested. We publish our RVs taken during the solar eclipse as a benchmark curve for codes calculating the RM effect and for models of solar surface velocities and line profiles.
Radial velocity observations of the 2015 Mar 20 eclipse - A benchmark Rossiter-McLaughlin curve with zero free parameters
We study the robust charging station location problem for a large-scale commercial taxi fleet. Vehicles within the fleet coordinate on charging operations but not on customer acquisition. We decide on a set of charging stations to open to ensure operational feasibility. To take this decision, we propose a novel solution method situated between Location Routing Problems with Intraroute Facilities and Flow Refueling Location Problems. Additionally, we introduce a problem variant that makes a station sizing decision. Using our exact approach, charging stations for a robust operation of city-wide taxi fleets can be planned. We develop a deterministic core problem employing a cutting plane method for the strategic problem and a branch-and-price decomposition for the operational problem. We embed this problem into a robust solution framework based on adversarial sampling, which allows for planner-selectable risk tolerance. We solve instances derived from real-world data of the metropolitan area of Munich, containing 1,000 vehicles and 60 potential charging station locations. Our investigation of the sensitivity to technological developments shows that increasing battery capacity has a more favorable impact on vehicle feasibility, by up to 10 percentage points, than increasing charging speed. Allowing for depot charging dominates both of these options. Finally, we show that allowing just 1% of operational infeasibility risk lowers infrastructure costs by 20%.
Robust Charging Network Planning for Metropolitan Taxi Fleets
We revisit the relationship between three classical measures of particle number, namely the chemical doping $x$, the Hall number $x_{hall}$ and the particle number inferred from the optical sum rule $x_{opt}$. We study the $t$-$t'$-$J$ model of correlations on a square lattice, as a minimal model for high-$T_c$ systems, using numerical methods to evaluate the low temperature Kubo conductivities. These measures disagree significantly in this type of system, owing to Mott-Hubbard correlations. The Hall constant has a complex behavior with several changes of sign as a function of filling $x$, depending upon the model parameters. Thus $x_{hall}$ depends sensitively on $t'$ and $J$, due to a kind of quantum interference.
The Hall Number, Optical Sum Rule and Carrier Density for the $t$-$t'$-$J$ model
In this paper, we propose an Integer Linear Programming model whose solutions are the aperiodic rhythms that tile with a given rhythm A. We show how this model can be used to efficiently check the necessity of the Coven-Meyerowitz $(T2)$ condition and also to define an iterative algorithm that finds all the possible tilings of the rhythm A. To conclude, we run several experiments to validate the time efficiency of this model.
An Integer Linear Programming Model for Tilings
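As an illustration of the kind of model the abstract above describes, here is a minimal feasibility ILP in Python with PuLP for a toy rhythm. The period n, the rhythm A, and the normalization constraint are my choices, and the toy complement found is periodic, unlike the aperiodic tilings targeted by the paper:

```python
import pulp

n, A = 12, [0, 4, 8]                       # toy rhythm on Z_12
prob = pulp.LpProblem("rhythm_tiling", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(n)]
prob += pulp.lpSum(x)                      # any objective works; this is a feasibility model

# B = {j : x_j = 1} tiles with A iff every residue of Z_n is covered exactly once.
for r in range(n):
    prob += pulp.lpSum(x[(r - a) % n] for a in A) == 1

prob += x[0] == 1                          # normalize: 0 in B, to remove translates
prob.solve(pulp.PULP_CBC_CMD(msg=0))
B = [j for j in range(n) if x[j].value() > 0.5]
print(B)                                   # e.g. [0, 1, 2, 3]
```

Enumerating all tilings, as in the paper's iterative algorithm, would amount to re-solving after adding a no-good cut that excludes each complement already found.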
Organizations concerned about digital or computer forensics capability, which establishes procedures and records to support the prosecution of computer crimes, could benefit from implementing an ISO 27001:2013-compliant Information Security Management System (ISMS). A certified ISMS adds credibility to information gathered in a digital forensics investigation; certification shows that the organization has an outside party verifying that the correct procedures are in place and being followed. A certified ISMS is a valuable tool both when prosecuting an intruder and when a customer or other stakeholder seeks damages against the organization. A SOC (Security Operation Center), as an organization or a security unit that handles a large volume of information, requires a management complement, for which an ISMS would be a good choice. This idea will help find solutions for problems related to non-cloud and cloud digital forensics, including problems associated with the absence of standardization among different CSPs (Cloud Service Providers).
ISMS role in the improvement of digital forensics related process in SOC's
In recent years, economic agents and systems have become more and more interconnected, so the social sciences need to draw on the methods of the physical sciences to analyze this complexity in their relationships. Following this point of view, we rely on the geometrical model of the M\"obius strip used in electromagnetism, which describes the motion of the electrons that produce energy. We use a similar model in a Corporate Social Responsibility context to devise a new cost function that takes into account three positive crossed effects on efficiency: i) cooperation among stakeholders in the same sector, ii) cooperation among similar stakeholders in different sectors, and iii) the stakeholders' loyalty towards the company. By applying this new cost function to a firm's decision problem, we find that investing in Corporate Social Responsibility activities is always advantageous, depending on the number of sectors, the stakeholders' sensitivity to these investments, and the decay rate to alienation. Our work suggests a new method of analysis which should be developed not only at a theoretical but also at an empirical level.
The Corporate Social Responsibility is just a twist in a M\"obius Strip
The aim of the KArlsruhe TRItium Neutrino experiment KATRIN is the determination of the absolute neutrino mass scale down to 0.2 eV, with essentially smaller model dependence than cosmology and neutrinoless double beta decay. For this purpose, the integral electron energy spectrum is measured close to the endpoint of molecular tritium beta decay. The endpoint, together with the neutrino mass, should be fitted from the KATRIN data as a free parameter. Right-handed couplings change the electron energy spectrum close to the endpoint, so they also affect the precise neutrino mass determination. Our statistical calculations show that, using the endpoint as a free parameter, unaccounted right-handed couplings constrained by many beta decay experiments can change the fitted neutrino mass value, relative to the true neutrino mass, by no more than about 5-10%. Using, incorrectly, the endpoint as a fixed input parameter, the above change of the neutrino mass can be much larger, of order 100%, and in some cases a large true neutrino mass can even yield a negative fitted neutrino mass squared. Publications using a fixed endpoint and presenting large right-handed coupling effects on the neutrino mass determination are not relevant for the KATRIN experiment.
The KATRIN sensitivity to the neutrino mass and to right-handed currents in beta decay
The problem of localization on a geo-referenced satellite map given a query ground-view image is useful yet remains challenging due to the drastic change in viewpoint, under which traditional image descriptors fail. In this paper, we extend our earlier work on the Cross-View Matching Network (CVM-Net) for the ground-to-aerial image matching task. In particular, we show more extensive experimental results and analyses of the network architecture of our CVM-Net. Furthermore, we propose a Markov localization framework that enforces temporal consistency between image frames to enhance the geo-localization results in the case where a video stream of ground-view images is available. Experimental results show that our proposed Markov localization framework can continuously localize the vehicle within a small error on our Singapore dataset.
Image-Based Geo-Localization Using Satellite Imagery
While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine equivalence of two HMP parametrizations $\mathcal{M}_1,\mathcal{M}_2$ resp. two QRW parametrizations $\mathcal{Q}_1,\mathcal{Q}_2$ in time $O(|\Sigma|\max(N_1,N_2)^{4})$, where $N_1,N_2$ are the number of hidden states in $\mathcal{M}_1,\mathcal{M}_2$ resp. the dimension of the state spaces associated with $\mathcal{Q}_1,\mathcal{Q}_2$, and $\Sigma$ is the set of output symbols. Previously available algorithms for testing equivalence of HMPs were exponential in the number of hidden states. In the case of QRWs, algorithms for testing equivalence had not yet been presented. The core subroutines of this algorithm can also be used to efficiently test hidden Markov processes and quantum random walks for ergodicity.
Efficient tests for equivalence of hidden Markov processes and quantum random walks
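For intuition, equivalence of two HMP parametrizations can be phrased in terms of word probabilities: by a classical result on rational series, two models with $N_1$ and $N_2$ hidden states induce the same process iff all word probabilities up to length $N_1+N_2-1$ agree. The brute-force check below (Python/NumPy, my notation) is exponential in that length, which is precisely what the paper's polynomial-time algorithm avoids:

```python
import itertools
import numpy as np

def word_prob(pi, T, word):
    # P(word) for an HMP: pi is the initial distribution over hidden states,
    # T[s] the matrix of joint transition-and-emission probabilities for s.
    v = pi
    for s in word:
        v = v @ T[s]
    return v.sum()

def equivalent(pi1, T1, pi2, T2, alphabet, n1, n2, tol=1e-10):
    # Classical fact: models with n1 and n2 hidden states agree as processes
    # iff they agree on all words of length up to n1 + n2 - 1.
    for length in range(1, n1 + n2):
        for word in itertools.product(alphabet, repeat=length):
            if abs(word_prob(pi1, T1, word) - word_prob(pi2, T2, word)) > tol:
                return False
    return True
```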
Rumor source identification in large social networks has received significant attention lately. Most recent works deal with the scale of the problem by observing a subset of the nodes in the network, called sensors, to estimate the source. This paper addresses the problem of locating the source of a rumor in large social networks where some of these sensor nodes have failed. We estimate the missing information about the sensors using doubly non-negative (DN) matrix completion and compressed sensing techniques. This estimate is then used to identify the actual source with a maximum likelihood estimator we developed earlier, on a large data set from Sina Weibo. Results indicate that, with the estimated information, the ML estimator performs almost as well as it does on the network for which complete information is available. To the best of our knowledge, this is the first research work on source identification with incomplete information in social networks.
Identification of Source of Rumors in Social Networks with Incomplete Information
The NEWS-G direct dark matter search experiment uses spherical proportional counters (SPCs) with light noble gases to explore low WIMP masses. The first results obtained with an SPC prototype operated with Ne gas at the Laboratoire Souterrain de Modane (LSM) have already set competitive limits for low-mass WIMPs. The next phase of the experiment consists of a large 140 cm diameter SPC installed at SNOLAB with a new sensor design, with improved detector performance and data quality. Before its installation at SNOLAB, the detector was commissioned with pure methane gas at the LSM, with a temporary water shield, offering a hydrogen-rich target and reduced backgrounds. After giving an overview of the improvements of the detector, preliminary results of this campaign will be discussed, including UV laser and Ar-37 calibration data.
The search for Light Dark Matter with NEWS-G
In recent years, there has been a growing interest in the effects of data poisoning attacks on data-driven control methods. Poisoning attacks are well known in the Machine Learning community, but they typically rely on assumptions, such as cross-sample independence, that in general do not hold for linear dynamical systems. Consequently, these systems require different attack and detection methods than those developed for supervised learning problems in the i.i.d.\ setting. Since most data-driven control algorithms make use of the least-squares estimator, we study how poisoning impacts the least-squares estimate through the lens of statistical testing, and ask in what way data poisoning attacks can be detected. We establish under which conditions the set of models compatible with the data includes the true model of the system, and we analyze different poisoning strategies for the attacker. On the basis of these arguments, we propose a stealthy data poisoning attack on the least-squares estimator that can escape classical statistical tests, and conclude by showing the efficiency of the proposed attack.
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems
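To make the object of study concrete, here is a minimal sketch (Python/NumPy, all parameters mine) of the least-squares estimator for a linear system and a naive additive poisoning of the recorded data. The paper's stealthy attack is specifically constructed to evade residual-based statistical tests, which this naive perturbation need not do:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.7]])
n, T = 2, 200

# Simulate x_{t+1} = A x_t + w_t with small process noise.
X = np.zeros((n, T + 1))
for t in range(T):
    X[:, t + 1] = A_true @ X[:, t] + 0.1 * rng.standard_normal(n)

X0, X1 = X[:, :-1], X[:, 1:]
A_hat = X1 @ X0.T @ np.linalg.inv(X0 @ X0.T)      # least-squares estimate

# Naive poisoning: additive perturbation D on the recorded successor states.
D = 0.05 * rng.standard_normal(X1.shape)           # hypothetical attack matrix
A_poisoned = (X1 + D) @ X0.T @ np.linalg.inv(X0 @ X0.T)

# Residuals of the poisoned fit; whiteness/correlation tests on these are the
# kind of classical statistical tests a stealthy attack must not trigger.
residuals = (X1 + D) - A_poisoned @ X0
```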
We consider the discrete memoryless symmetric primitive relay channel, where a source $X$ wants to send information to a destination $Y$ with the help of a relay $Z$, and the relay can communicate to the destination via an error-free digital link of rate $R_0$, while $Y$ and $Z$ are conditionally independent and identically distributed given $X$. We develop two new upper bounds on the capacity of this channel that are tighter than existing bounds, including the celebrated cut-set bound. Our approach significantly deviates from the standard information-theoretic approach for proving upper bounds on the capacity of multi-user channels. We build on the blowing-up lemma to analyze the probabilistic geometric relations between the typical sets of the $n$-letter random variables associated with a reliable code for communicating over this channel. These relations translate to new entropy inequalities between the $n$-letter random variables involved. As an application of our bounds, we study an open question posed by Cover (1987), namely, what is the minimum needed $Z$-$Y$ link rate $R_0^*$ in order for the capacity of the relay channel to be equal to that of the broadcast cut. We consider the special case when the $X$-$Y$ and $X$-$Z$ links are both binary symmetric channels with crossover probability $p$. Our tighter bounds on the capacity of the relay channel immediately translate to tighter lower bounds for $R_0^*$. More interestingly, we show that when $p\to 1/2$, $R_0^*\geq 0.1803$; even though the broadcast channel becomes completely noisy as $p\to 1/2$ and its capacity, and therefore the capacity of the relay channel, goes to zero, a strictly positive rate $R_0$ is required for the relay channel capacity to be equal to the broadcast bound.
Improving on the Cut-Set Bound via Geometric Analysis of Typical Sets
Vortex lines in superconductors in an external magnetic field slightly tilted from randomly-distributed parallel columnar defects can be modeled by a system of interacting bosons in a non-Hermitian vector potential and a random scalar potential. We develop a theory of the strongly-disordered non-Hermitian boson Hubbard model using the Hartree-Bogoliubov approximation and apply it to calculate the complex energy spectra, the vortex tilt angle and the tilt modulus of (1+1)-dimensional directed flux line systems. We construct the phase diagram associated with the flux-liquid to Bose-glass transition and find that, close to the phase boundary, the tilted flux liquid phase is characterized by a band of localized excitations, with two mobility edges in its low-energy spectrum.
Interaction effects in non-Hermitian models of vortex physics
In the present paper we prove a Stieltjes-type theorem on the convergence of a sequence of rational functions associated with a mixed-type Hermite-Pad\'e approximation problem for a Nikishin system of functions, and analyze the ratio asymptotics of the corresponding Hermite-Pad\'e polynomials.
On the convergence of multi-level Hermite-Pad\'e approximants
For highly sensitive real-world predictive analytic applications such as healthcare and medicine, good prediction accuracy alone is often not enough. These kinds of applications require a decision-making process that uses uncertainty estimation as input whenever possible. The quality of uncertainty estimation suffers from over- or under-confident predictions, an issue that is often not addressed in many models. In this paper we present several extensions to the Gaussian Conditional Random Fields model which aim to provide higher-quality uncertainty estimation. These extensions are applied to the temporal disease graph built from the State Inpatient Database (SID) of California, acquired from the HCUP. Our experiments demonstrate the benefits of using graph information in modeling temporal disease properties, as well as the improvements in uncertainty estimation provided by the given extensions of the Gaussian Conditional Random Fields method.
Improving confidence while predicting trends in temporal disease networks
We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety. CAISAR provides a unified entry point for defining verification problems by using WhyML, the mature and expressive language of the Why3 verification platform. Moreover, CAISAR orchestrates and composes state-of-the-art machine learning verification tools which, individually, are not able to efficiently handle all problems but, collectively, can cover a growing number of properties. Our aim is to assist, on the one hand, the V\&V process by reducing the burden of choosing the methodology tailored to a given verification problem, and on the other hand the tool developers by factorizing useful features (visualization, report generation, property description) in one platform. CAISAR will soon be available at https://git.frama-c.com/pub/caisar.
CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness
Based on a clear ontology of material individuals, we analyze in detail the factual semantics of quantum theory, and argue that the basic mathematical formalism of quantum theory is compatible with (a certain form of) realism and that it is perfectly applicable to quantum gravity. This is basically a process of 'cleansing' the formalism of semantic assumptions and physical referents that it does not really need (we use the term 'semantics' in the sense of the factual semantics of a physical theory, and not in the sense of model theory of abstract mathematics or logic). We base our study on the usual non-Boolean lattice of projectors in a Hilbert space and probability measures on it, to which we give a careful physical interpretation using the mentioned tools in order to avoid the usual problems posed by this task. At the end, we study a possible connection with the theory of quantum duration and time proposed in [arXiv:2012.03994, arXiv:2107.06693], for which this paper serves as a philosophical basis, and argue for our view that quantum gravity may show that what we perceive as change in the classical world was just (an ontologically fundamental) quantum collapse all along.
The Ontology and Semantics of Quantum Theory for Quantum Gravity
We use three different techniques to identify hundreds of white dwarf (WD) candidates in the Next Generation Virgo Cluster Survey (NGVS) based on photometry from the NGVS and GUViCS, and proper motions derived from the NGVS and the Sloan Digital Sky Survey (SDSS). Photometric distances for these candidates are calculated using theoretical color-absolute magnitude relations while effective temperatures are measured by fitting their spectral energy distributions. Disk and halo WD candidates are separated using a tangential velocity cut of 200 km~s$^{-1}$ in a reduced proper motion diagram, which leads to a sample of six halo WD candidates. Cooling ages, calculated for an assumed WD mass of 0.6$M_{\odot}$, range between 60 Myr and 6 Gyr, although these estimates depend sensitively on the adopted mass. Luminosity functions for the disk and halo subsamples are constructed and compared to previous results from the SDSS and SuperCOSMOS survey. We compute a number density of (2.81 $\pm$ 0.52) $\times 10^{-3}$~pc$^{-3}$ for the disk WD population, consistent with previous measurements. We find (7.85 $\pm$ 4.55) $\times 10^{-6}$~pc$^{-3}$ for the halo, or 0.3\% of the disk. Observed stellar counts are also compared to predictions made by the TRILEGAL and Besan\c{c}on stellar population synthesis models. The comparison suggests that the TRILEGAL model overpredicts the total number of WDs. The WD counts predicted by the Besan\c{c}on model agree with the observations, although a discrepancy arises when comparing the predicted and observed halo WD populations; the difference is likely due to the WD masses in the adopted model halo.
The Next Generation Virgo Cluster Survey. XXVIII. Characterization of the Galactic White Dwarf Population
The evolution and long-term sustenance of cooperation has consistently piqued scholarly interest across the disciplines of evolutionary biology and the social sciences. Previous theoretical and experimental studies on collective-risk social dilemma games have revealed that the risk of collective failure affects the evolution of cooperation. In the real world, individuals usually adjust their decisions based on environmental factors such as risk intensity and cooperation level. However, it is still not well understood, from a theoretical perspective, how such conditional behaviors affect the evolution of cooperation in repeated group interactions. Here, we construct an evolutionary game model with repeated interactions, in which defectors decide whether to cooperate in subsequent rounds of the game based on whether the risk exceeds their tolerance threshold and whether the number of cooperators exceeds the collective goal in the early rounds of the game. We find that the introduction of a conditional cooperation strategy can effectively promote the emergence of cooperation, especially when the risk is low. In addition, the risk threshold significantly affects the evolutionary outcome, with high risk promoting the emergence of cooperation. Importantly, when the risk of failing to reach the collective goal exceeds a certain threshold, the timely transition from a defective strategy to a cooperative strategy by conditional cooperators is beneficial for maintaining high-level cooperation.
Evolution of conditional cooperation in collective-risk social dilemma with repeated group interactions
Treatment decisions for brain metastatic disease rely on knowledge of the primary organ site, which is currently determined with biopsy and histology. Here we develop a novel deep learning approach for accurate non-invasive digital histology with whole-brain MRI data. Our IRB-approved single-site retrospective study comprised patients (n=1,399) referred for MRI treatment planning and gamma knife radiosurgery over 21 years. Contrast-enhanced T1-weighted and T2-weighted Fluid-Attenuated Inversion Recovery brain MRI exams (n=1,582) were preprocessed and input to the proposed deep learning workflow for tumor segmentation, modality transfer, and primary site classification into one of five classes. Ten-fold cross-validation generated an overall AUC of 0.878 (95%CI:0.873,0.883), lung class AUC of 0.889 (95%CI:0.883,0.895), breast class AUC of 0.873 (95%CI:0.860,0.886), melanoma class AUC of 0.852 (95%CI:0.842,0.862), renal class AUC of 0.830 (95%CI:0.809,0.851), and other class AUC of 0.822 (95%CI:0.805,0.839). These data establish that whole-brain imaging features are sufficiently discriminative to allow accurate diagnosis of the primary organ site of malignancy. Our end-to-end deep radiomic approach has great potential for classifying metastatic tumor types from whole-brain MRI images. Further refinement may offer an invaluable clinical tool to expedite primary cancer site identification for precision treatment and improved outcomes.
A transformer-based deep learning approach for classifying brain metastases into primary organ sites using clinical whole brain MRI
Combining insights from machine learning and quantum Monte Carlo, the stochastic reconfiguration method with neural network Ansatz states is a promising new direction for high-precision ground state estimation of quantum many-body problems. Even though this method works well in practice, little is known about the learning dynamics. In this paper, we bring to light several hidden details of the algorithm by analyzing the learning landscape. In particular, the spectrum of the quantum Fisher matrix of complex restricted Boltzmann machine states exhibits a universal initial dynamics, but the converged spectrum can dramatically change across a phase transition. In contrast to the spectral properties of the quantum Fisher matrix, the actual weights of the network at convergence do not reveal much information about the system or the dynamics. Furthermore, we identify a new measure of correlation in the state by analyzing entanglement in the eigenvectors. We show that, generically, the learning-landscape modes with the least entanglement have the largest eigenvalues, suggesting that correlations are encoded in large flat valleys of the learning landscape, favoring stable representations of the ground state.
Geometry of learning neural quantum states
Currently, the superconducting diode effect (SDE) is actively discussed due to its large application potential in superconducting electronics. In particular, superconducting hybrid structures based on three-dimensional (3D) topological insulators are among the best candidates due to their strong spin-orbit coupling (SOC). Most theoretical studies of the SDE focus either on full numerical calculation, which is often rather complicated, or on a phenomenological approach. In the present paper we compare the linearized and nonlinear microscopic approaches in the superconductor/ferromagnet/3D topological insulator (S/F/TI) hybrid structure. Employing the quasiclassical Green's function formalism, we solve the problem self-consistently. We show that the results obtained by the linearized approximation are not qualitatively different from the nonlinear solution. The main distinction between the results of the two methods is quantitative, i.e., they yield different supercurrent amplitudes. However, when calculating the so-called diode quality factor the quantitative difference is eliminated and both approaches can be in good agreement.
Superconducting diode effect in topological hybrid structures
The value of the alpha spectroscopic factor (S_alpha) of the 6.356 MeV 1/2+ state of 17O is believed to have significant astrophysical implications due to the importance of the 13C(alpha,n)16O reaction as a possible source of neutron production for the s process. To further study this effect, an accurate measurement of the 13C(6Li,d)17O reaction at E_lab = 60 MeV has been performed recently by Kubono et al., who found a new value for the spectroscopic factor of the 6.356 MeV 1/2+ state of 17O based on a distorted wave Born approximation (DWBA) analysis of these data. This new value, S_alpha approximately = 0.011, is surprisingly much smaller than those used previously in astrophysical calculations (S_alpha approximately = 0.3-0.7) and thus poses a serious question as to the role of the 13C(alpha,n)16O reaction as a source of neutron production. In this work we perform a detailed analysis of the same 13C(6Li,d)17O data within the DWBA as well as the coupled reaction channel (CRC) formalism. Our analysis yields an S_alpha value over an order of magnitude larger than that of Kubono et al. for the 6.356 MeV 1/2+ state of 17O.
DWBA analysis of the 13C(6Li,d)17O reaction at 10 MeV/nucleon and its astrophysical implications
We present a lattice simulation study of large $N_c$ regularities of meson and baryon spectroscopy in $SU(N_c)$ gauge theory with two flavors of dynamical fundamental representation fermions. Systems investigated include $N_c=2$, 3, 4, and 5, over a range of fermion masses parametrized by a squared pseudoscalar to vector meson mass ratio between about 0.2 to 0.7. Good agreement with large $N_c$ scaling is observed in the static potential, in meson masses and decay constants, and in baryon spectroscopy. This is an update of the published version of the paper (Phys. Rev. D94 (2016) 034506).
Lattice study of large $N_c$ QCD
In this work we investigate methods to improve the efficiency and scalability of quantum algorithms for quantum chemistry applications. We propose a transformation of the electronic structure Hamiltonian in the second quantization framework into the particle-hole (p/h) picture, which offers a better starting point for the expansion of the trial wavefunction. The state of the molecular system under study is parametrized in a way to efficiently explore the sector of the molecular Fock space that contains the desired solution. To this end, we explore several trial wavefunctions to identify the most efficient parameterization of the molecular ground state. Taking advantage of known post-Hartree-Fock quantum chemistry approaches and heuristic Hilbert space search quantum algorithms, we propose a new family of quantum circuits based on exchange-type gates that enable accurate calculations while keeping the gate count (i.e., the circuit depth) low. The particle-hole implementation of the Unitary Coupled Cluster (UCC) method within the Variational Quantum Eigensolver approach gives rise to an efficient quantum algorithm, named q-UCC, with important advantages compared to the straightforward 'translation' of the classical Coupled Cluster counterpart. In particular, we show how a single Trotter step can accurately and efficiently reproduce the ground state energies of simple molecular systems.
Quantum algorithms for electronic structure calculations: particle/hole Hamiltonian and optimized wavefunction expansions
In this paper, we show that a generalized Sasakian space form of dimension greater than three is either of constant sectional curvature; or a canal hypersurface in Euclidean or Minkowski spaces; or locally a certain type of twisted product of a real line and a flat almost Hermitian manifold; or locally a warped product of a real line and a generalized complex space form; or an $\alpha$-Sasakian space form; or it is five-dimensional and admits an $\alpha$-Sasakian Einstein structure. In particular, a local classification for generalized Sasakian space forms of dimension greater than five is obtained. A local classification of Riemannian manifolds of quasi constant sectional curvature of dimension greater than three is also given in this paper.
Generalized Sasakian space forms and Riemannian manifolds of quasi constant sectional curvature
We consider the values at proper fractions of the arithmetic gamma function and the values at positive integers of the zeta function for F_q[theta] and provide complete algebraic independence results for them.
Algebraic independence of arithmetic gamma values and Carlitz zeta values
Switching between finitely many continuous-time autonomous steepest descent dynamics for convex functions is considered. Convergence of complete solutions to common minimizers of the convex functions, if such minimizers exist, is shown. The convex functions need not be smooth and may be subject to constraints. Since the common minimizers may represent consensus in a multi-agent system modeled by an undirected communication graph, several known results about asymptotic consensus are deduced as special cases. Extensions to time-varying convex functions and to dynamics given by set-valued mappings more general than subdifferentials of convex functions are included.
A unifying convex analysis and switching system approach to consensus with undirected communication graphs
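A minimal numerical illustration of the abstract above (Python, a construction of mine): switching between the steepest descent dynamics of two convex functions whose minimizer sets intersect, here squared distances to intervals, drives the state into a common minimizer:

```python
import numpy as np

def grad_dist_sq(x, lo, hi):
    # Gradient of the (convex) squared distance to the interval [lo, hi].
    if x < lo:
        return 2.0 * (x - lo)
    if x > hi:
        return 2.0 * (x - hi)
    return 0.0

# Minimizer sets [-1, 1] and [-2, 0.5] intersect in [-1, 0.5].
intervals = [(-1.0, 1.0), (-2.0, 0.5)]
x, h = 3.0, 0.05
for k in range(400):
    lo, hi = intervals[(k // 20) % 2]     # switch the active function every 20 steps
    x -= h * grad_dist_sq(x, lo, hi)      # explicit Euler step of the descent flow

print(x)  # converges to 0.5, a point of the common minimizer set [-1, 0.5]
```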
The commonly used West and Yennie integral formula for the relative phase between the Coulomb and elastic hadronic amplitudes may be consistently applied only if the hadronic amplitude has a constant ratio of the real to the imaginary part at all kinematically allowed values of four-momentum transfer squared.
Limited validity of West and Yennie integral formula for elastic scattering of hadrons
Uniform integer-valued Lipschitz functions on a domain of size $N$ of the triangular lattice are shown to have variations of order $\sqrt{\log N}$. The level lines of such functions form a loop $O(2)$ model on the edges of the hexagonal lattice with edge-weight one. An infinite-volume Gibbs measure for the loop O(2) model is constructed as a thermodynamic limit and is shown to be unique. It contains only finite loops and has properties indicative of scale-invariance: macroscopic loops appearing at every scale. The existence of the infinite-volume measure carries over to height functions pinned at the origin; the uniqueness of the Gibbs measure does not. The proof is based on a representation of the loop $O(2)$ model via a pair of spin configurations that are shown to satisfy the FKG inequality. We prove RSW-type estimates for a certain connectivity notion in the aforementioned spin model.
Uniform Lipschitz functions on the triangular lattice have logarithmic variations
We show that an event-by-event fluctuation of the ratio of neutral pions, or the resulting photons, to charged pions can be used as an effective probe for the formation of disoriented chiral condensates. The fact that the neutral pion fraction produced in the case of disoriented chiral condensate formation has a characteristic extended non-Gaussian shape is shown to be the key factor which forms the basis of the present analysis.
A Fluctuation Probe of Disoriented Chiral Condensates
We study the regularity of the interface between the disjoint supports of a pair of nonnegative subharmonic functions. The portion of the interface where the Alt-Caffarelli-Friedman (ACF) monotonicity formula is asymptotically positive forms an $\mathcal{H}^{n-1}$-rectifiable set. Moreover, for $\mathcal{H}^{n-1}$-a.e. such point, the two functions have unique blowups, i.e. their Lipschitz rescalings converge in $W^{1,2}$ to a pair of nondegenerate truncated linear functions whose supports meet at the approximate tangent plane. The main tools used include the Naber-Valtorta framework and our recent result establishing a sharp quantitative remainder term in the ACF monotonicity formula. We also give applications of our results to free boundary problems.
Rectifiability and uniqueness of blow-ups for points with positive Alt-Caffarelli-Friedman limit
Given a smooth one parameter deformation of associative topological algebras, we define Getzler's Gauss-Manin connection on both the periodic cyclic homology and cohomology of the corresponding smooth field of algebras and investigate some basic properties. We use the Gauss-Manin connection to prove a rigidity result for periodic cyclic cohomology of Banach algebras with finite weak bidimension.
Smooth deformations and the Gauss-Manin connection
A search for charged leptons with large impact parameters using 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV $pp$ collision data from the ATLAS detector at the LHC is presented, addressing a long-standing gap in coverage of possible new physics signatures. Results are consistent with the background prediction. This search provides unique sensitivity to long-lived scalar supersymmetric lepton-partners (sleptons). For lifetimes of 0.1 ns, selectron, smuon and stau masses up to 720 GeV, 680 GeV, and 340 GeV are respectively excluded at 95% confidence level, drastically improving on the previous best limits from LEP.
Search for displaced leptons in $\sqrt{s} = 13$ TeV $pp$ collisions with the ATLAS detector
The Algebraic lambda-calculus and the Linear-Algebraic lambda-calculus extend the lambda-calculus with the possibility of making arbitrary linear combinations of terms. In this paper we provide a fine-grained, System F-like type system for the linear-algebraic lambda-calculus. We show that this "scalar" type system enjoys both the subject-reduction property and the strong-normalisation property, our main technical results. The latter yields a significant simplification of the linear-algebraic lambda-calculus itself, by removing the need for some restrictions in its reduction rules. But the more important, original feature of this scalar type system is that it keeps track of 'the amount of a type' that is present in each term. As an example of its use, we show that it can serve as a guarantee that the normal form of a term is barycentric, i.e. that its scalars sum to one.
A System F accounting for scalars
The nonparametric estimation of the distribution of relaxation times approach is not as frequently used in the analysis of the dispersed response of dielectric or conductive materials as other immittance data analysis methods based on parametric curve-fitting techniques. Nevertheless, such distributions can yield important information about the physical processes present in the measured material. In this letter, we apply two quite different numerical inversion methods to estimate the distribution of relaxation times for glassy \lila\ dielectric frequency-response data at 225 K. Both methods yield unique distributions that agree very closely with the actual exact one accurately calculated from the corrected bulk-dispersion Kohlrausch model established independently by means of a parametric data fit using the corrected modulus formalism method. The obtained distributions are also greatly superior to those estimated using the approximate function equations given in the literature.
Comparison of methods for estimating continuous distributions of relaxation times
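As an illustration of what such a nonparametric inversion involves (not the two methods actually compared in the letter), here is a minimal Tikhonov-regularized, nonnegative least-squares sketch in Python/SciPy for recovering a distribution of relaxation times from dielectric-loss data; the Debye kernel, grids, and regularization weight are my choices:

```python
import numpy as np
from scipy.optimize import nnls

# Debye kernel for the dielectric loss:
#   eps''(w) = sum_j g_j * (w * tau_j) / (1 + (w * tau_j)^2)
omega = np.logspace(-2, 4, 60)             # measurement frequencies
tau = np.logspace(-5, 3, 80)               # relaxation-time grid
wt = omega[:, None] * tau[None, :]
A = wt / (1.0 + wt**2)

# Synthetic "measured" loss from a known two-peak distribution.
g_true = np.exp(-0.5 * ((np.log10(tau) + 2) / 0.3) ** 2) \
       + 0.5 * np.exp(-0.5 * ((np.log10(tau) - 1) / 0.4) ** 2)
y = A @ g_true + 1e-3 * np.random.default_rng(3).standard_normal(omega.size)

# Tikhonov regularization with a nonnegativity constraint:
#   min_g ||A g - y||^2 + lam ||g||^2,  g >= 0
lam = 1e-3
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(tau.size)])
y_aug = np.concatenate([y, np.zeros(tau.size)])
g_est, _ = nnls(A_aug, y_aug)
```

The regularization weight controls the usual bias-variance trade-off of such inversions: too small and the recovered distribution develops spurious spikes, too large and genuine peaks are smeared out.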
We investigate the steady-state R\'enyi entanglement entropies after a quench from a piecewise homogeneous initial state in integrable models. In the quench protocol two macroscopically different chains (leads) are joined together at the initial time, and the subsequent dynamics is studied. We study the entropies of a finite subsystem at the interface between the two leads. The density of R\'enyi entropies coincides with that of the entropies of the Generalized Gibbs Ensemble (GGE) that describes the interface between the chains. By combining the Generalized Hydrodynamics (GHD) treatment of the quench with the Bethe ansatz approach for the R\'enyi entropies, we provide exact results for quenches from several initial states in the anisotropic Heisenberg chain (XXZ chain), although the approach is applicable, in principle, to any low-entangled initial state and any integrable model. An interesting protocol that we consider is the expansion quench, in which one of the two leads is prepared in the vacuum of the model excitations. An intriguing feature is that for moderately large anisotropy the transport of bound states is not allowed. Moreover, we show that there is a `critical' anisotropy, above which bound-state transport is permitted. This is reflected in the steady-state entropies, which for large enough anisotropy do not contain information about the bound states. Finally, we benchmark our results against time-dependent Density Matrix Renormalization Group (tDMRG) simulations.
Towards a Generalized Hydrodynamics description of R\'enyi entropies in integrable systems
We compute the hyperbolic covolume of the automorphism group of each even unimodular Lorentzian lattice. The result is obtained as a consequence of a previous work with Belolipetsky, which uses Prasad's volume to compute the volumes of the smallest hyperbolic arithmetic orbifolds.
Even unimodular Lorentzian lattices and hyperbolic volume
We use supernovae measurements, calibrated by the local determination of the Hubble constant $H_0$ by SH0ES, to interpolate the distance-redshift relation using Gaussian process regression. We then predict, independent of the cosmological model, the distances that are measured with strong lensing time delays. We find excellent agreement between these predictions and the measurements. The agreement holds when we consider only the redshift dependence of the distance-redshift relation, independent of the value of $H_0$. Our results disfavor the possibility that lens mass modeling contributes a 10\% bias or uncertainty in the strong lensing analysis, as suggested recently in the literature. In general our analysis strengthens the case that residual systematic errors in both measurements are below the level of the current discrepancy with the CMB determination of $H_0$, and supports the possibility of new physical phenomena on cosmological scales. With additional data our methodology can provide more stringent tests of unaccounted for systematics in the determinations of the distance-redshift relation in the late universe.
A model independent comparison of supernova and strong lensing cosmography: implications for the Hubble constant tension
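A minimal sketch of the interpolation step described above (Python/scikit-learn), with synthetic data and a crude low-redshift distance formula standing in for the actual SN compilation; the kernel choice and all numbers are mine:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
z = np.sort(rng.uniform(0.01, 1.0, 200))

# Crude stand-in for SN distance moduli: d_L ~ (c/H0) z (1 + z/2) in Mpc,
# mu = 5 log10(d_L/Mpc) + 25.  Not a cosmological fit, just smooth mock data.
mu_true = 5 * np.log10((3e5 / 73.0) * z * (1 + z / 2)) + 25
mu_obs = mu_true + rng.normal(0, 0.1, z.size)

gpr = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=0.01),
    normalize_y=True,
)
gpr.fit(z[:, None], mu_obs)

# Model-independent prediction at the redshifts probed by the time delays,
# with the GP posterior standard deviation as the uncertainty.
z_query = np.array([[0.6], [0.9]])
mu_pred, mu_std = gpr.predict(z_query, return_std=True)
```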
The principles on which a computer model of the learning process can be based are formulated. Three models are considered: 1) the unicomponent model, which assumes that educational information consists of equal elements; 2) the multicomponent model, which considers that knowledge is assimilated with varying strength, and during a lesson weak knowledge becomes strong; 3) the generalized multicomponent model, which accounts for changes in the pupil's working capacity and the varying complexity of the studied elements of the training material. Typical results of simulating the learning process are presented in the article.
Various models of process of the learning, based on the numerical solution of the differential equations
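A minimal numerical sketch of the simplest (unicomponent) model from the abstract above, in Python; the specific rate equation, coefficients, and lesson schedule are my assumptions, chosen only to show the kind of ODE integration involved:

```python
# Assumed unicomponent dynamics (my notation):
#   dZ/dt = alpha * (L - Z)  while learning,
#   dZ/dt = -gamma * Z       during breaks (forgetting),
# where Z is assimilated knowledge and L the amount of presented material.
alpha, gamma, L = 0.5, 0.05, 1.0
dt, Z = 0.01, 0.0
trace = []
for step in range(2000):
    learning = step < 1000                 # first half: lesson; second half: break
    dZ = alpha * (L - Z) if learning else -gamma * Z
    Z += dt * dZ                           # forward-Euler integration step
    trace.append(Z)
```

The multicomponent variants would track a vector of knowledge strengths with transitions from weak to strong components, integrated in the same way.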
Despite the increasing popularity of commercial UAV usage and drone-delivered services, their dependence on limited-capacity on-board batteries hinders their flight time and mission continuity. As such, developing in-situ power transfer solutions for topping up UAV batteries has the potential to extend mission duration. In this paper, we study a scenario where UAVs are deployed as base stations (UAV-BS) providing wireless hotspot services to ground nodes, while harvesting wireless energy from flying energy sources. These energy sources are specialized UAVs (charger or transmitter UAVs, tUAVs) equipped with wireless power transmitting devices such as RF antennae. tUAVs have the flexibility to adjust their flight paths to maximize energy transfer. With an increasing number of UAV-BSs and environmental complexity, it is necessary to develop an intelligent trajectory selection procedure for the tUAVs so as to optimize the energy transfer gain. In this paper, we model the trajectory optimization of the tUAVs as a Markov Decision Process (MDP) and solve it using the Q-Learning algorithm. Simulation results confirm that the Q-Learning-based optimized trajectory of the tUAVs outperforms two benchmark strategies, namely random path planning and static hovering of the tUAVs.
Trajectory Optimization of Flying Energy Sources using Q-Learning to Recharge Hotspot UAVs
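A minimal tabular Q-learning sketch (Python/NumPy) on a one-dimensional toy version of the problem above; the state space, reward, and all hyperparameters are illustrative stand-ins for the paper's MDP, not its actual formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pos, n_actions = 10, 3           # tUAV positions on a line; actions: left/stay/right
hotspot = 7                        # position of the UAV-BS to recharge (toy setup)
Q = np.zeros((n_pos, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    s = int(rng.integers(n_pos))
    for _ in range(50):
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = int(np.clip(s + a - 1, 0, n_pos - 1))
        r = -abs(s2 - hotspot)     # proxy reward: closer means better energy transfer
        # Standard Q-learning temporal-difference update.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)          # greedy trajectory policy after training
```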
A $q$-Gaussian measure is a generalization of a Gaussian measure, obtained by replacing the exponential function with the power function of exponent $1/(1-q)$ ($q\neq 1$); the limit case $q=1$ recovers a Gaussian measure. For $1\leq q <3$, the set of all $q$-Gaussian densities over the real line satisfies a certain regularity condition allowing one to define information geometric structures, such as an entropy and a relative entropy, via escort expectations. The ordinary expectation of a random variable is the integral of the random variable with respect to its law; escort expectations allow us to replace the law with other measures. A choice of escort expectation on the set of all $q$-Gaussian densities determines an entropy and a relative entropy. One of the most important escort expectations on the set of all $q$-Gaussian densities is the $q$-escort expectation, since it determines the Tsallis entropy and the Tsallis relative entropy. The phenomenon of gauge freedom of entropies is that different escort expectations can determine the same entropy but different relative entropies. In this note, we first introduce a refinement of the $q$-logarithmic function. We then demonstrate the phenomenon on an open set of all $q$-Gaussian densities over the real line by using the refined $q$-logarithmic functions, and write down the corresponding Riemannian metric.
Gauge freedom of entropies on $q$-Gaussian measures
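For reference, the standard $q$-deformed definitions behind this construction (not the paper's refined $q$-logarithm) can be written as:

```latex
% Standard q-logarithm and q-exponential (q \neq 1):
\[
  \ln_q(x) = \frac{x^{1-q} - 1}{1 - q}, \qquad
  e_q(x)   = \bigl[\, 1 + (1-q)\,x \,\bigr]_{+}^{1/(1-q)},
\]
% so that a q-Gaussian density is proportional to e_q(-\beta x^2); letting
% q -> 1 recovers ln, exp, and hence the ordinary Gaussian measure.
```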
We introduce MediaSum, a large-scale media interview dataset consisting of 463.6K transcripts with abstractive summaries. To create this dataset, we collect interview transcripts from NPR and CNN and employ the overview and topic descriptions as summaries. Compared with existing public corpora for dialogue summarization, our dataset is an order of magnitude larger and contains complex multi-party conversations from multiple domains. We conduct statistical analysis to demonstrate the unique positional bias exhibited in the transcripts of televised and radio interviews. We also show that MediaSum can be used in transfer learning to improve a model's performance on other dialogue summarization tasks.
MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization
Lippmann (or interferential) photography is the first and only analog photography method that can capture the full color spectrum of a scene in a single take. This technique, invented more than a hundred years ago, records the colors by creating interference patterns inside the photosensitive plate. Lippmann photography provides a great opportunity to demonstrate several fundamental concepts in signal processing. Conversely, a signal processing perspective enables us to shed new light on the technique. In our previous work, we analyzed the spectra of historical Lippmann plates using our own mathematical model. In this paper, we provide the derivation of this model and validate it experimentally. We highlight new behaviors that physicists have overlooked to date. In particular, we show that the spectra generated by Lippmann plates are in fact distorted versions of the original spectra. We also show that these distortions are influenced by the thickness of the plate and the reflection coefficient of the reflective medium used in the capture of the photographs. We verify our model with extensive experiments on our own Lippmann photographs.
Lippmann Photography: A Signal Processing Perspective
We give a direct proof of the fact that elliptic solutions of the associative Yang-Baxter equation arise from appropriate spherical orders on an elliptic curve.
On elliptic solutions of the associative Yang-Baxter equation
We consider a theory of modified gravity possessing d extra spatial dimensions with a maximally symmetric metric and a scale factor, whose (4+d)-dimensional gravitational action contains terms proportional to quadratic curvature scalars. Constructing the 4D effective field theory by dimensional reduction, we find that a special case of our action where the additional terms appear in the well-known Gauss-Bonnet combination is of special interest as it uniquely produces a Horndeski scalar-tensor theory in the 4D effective action. We further consider the possibility of achieving stabilised extra dimensions in this scenario, as a function of the number and curvature of extra dimensions, as well as the strength of the Gauss-Bonnet coupling. Further questions that remain to be answered such as the influence of matter-coupling are briefly discussed.
Einstein-Gauss-Bonnet gravity with extra dimensions
In this paper we study the complex simultaneous Waring rank for collections of monomials. For general collections we provide a lower bound, whereas for special collections we provide a formula for the simultaneous Waring rank. Our approach is algebraic and combinatorial. We give an application to ranks of binomials and maximal simultaneous ranks. Moreover, we include an appendix of scripts written in the algebra software Macaulay2 to experiment with simultaneous ranks.
A note on the simultaneous Waring rank of monomials
The classical Einstein-Hilbert (EH) action for general relativity (GR) is shown to be formally analogous to classical position-dependent mass (PDM) models. The analogy is developed and used to build the covariant classical Hamiltonian as well as to define an alternative phase portrait for GR. The set of associated Hamilton's equations in the phase space is presented as a first-order system dual to the Einstein field equations. Following the principles of quantum mechanics, I build a canonical theory for classical GR. A fully consistent quantum Hamiltonian for GR is constructed based on adopting a high-dimensional phase space. It is observed that the functional wave equation is timeless. As a direct application, I present an alternative wave equation for quantum cosmology. In comparison to the standard Arnowitt-Deser-Misner (ADM) decomposition and quantum gravity proposals, I extend my analysis beyond the covariant regime to the case when the metric is decomposed into the $3+1$-dimensional ADM form, and show that an equal-dimensional phase space can be obtained if one applies the ADM-decomposed metric.
Position-Dependent Mass Quantum systems and ADM formalism
The magnon dispersion in the charge, orbital, and spin ordered phase in La(0.5)Sr(1.5)MnO(4) has been studied by means of inelastic neutron scattering. We find excellent agreement with a magnetic interaction model based on the CE-type superstructure. The magnetic excitations are dominated by ferromagnetic exchange parameters, revealing a nearly one-dimensional character at high energies. The nearest-neighbor ferromagnetic interaction in La(0.5)Sr(1.5)MnO(4) is significantly larger than the one in the metallic ferromagnetically ordered manganites. The large ferromagnetic interaction in the charge/orbital ordered phase appears to be essential for the capability of manganites to switch between metallic and insulating phases.
Spin-wave dispersion in orbitally ordered La(0.5)Sr(1.5)MnO(4)
The EDGES collaboration reported the finding of an unexpectedly deep absorption in the radio background at 78 MHz and interpreted the dip as a first detection of redshifted 21-cm from Cosmic Dawn. We have attempted an alternate analysis, adopting a maximally smooth function approach to model the foreground. A joint fit to the spectrum using such a function together with a flattened absorption profile yields a best-fit absorption amplitude of $921 \pm 35$ mK. The depth of the 21-cm absorption inferred by the EDGES analysis required invoking non-standard cosmology, new physics, or new sources at Cosmic Dawn, and this tension with accepted models is compounded by our analysis, which suggests absorption of even greater depth. Alternatively, the measured spectrum may be equally well fit assuming that there exists a residual unmodeled systematic sinusoidal feature, and we explore this possibility further by searching for any additional 21-cm signal. The data then favor an absorption with Gaussian model parameters of amplitude $133 \pm 60$ mK, best width at half-power $9 \pm 3$ MHz, and center frequency $72.5 \pm 0.8$ MHz. We also examine the consistency of the measured spectrum with plausible redshifted 21-cm models: a set of 3 of the 265 profiles in the global 21-cm atlas of Cohen et al. 2017 are favored by the spectrum. We conclude that the EDGES data may be consistent with standard cosmology and astrophysics, without invoking excess radio backgrounds or baryon-dark matter interactions.
The redshifted 21-cm signal in the EDGES low-band spectrum
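A minimal sketch of this kind of joint foreground-plus-dip fit (Python/SciPy) on synthetic data; the log-polynomial foreground and the quartic-exponent flattened dip are my simplifications, not the EDGES or maximally-smooth-function parameterizations used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(nu, a0, a1, a2, A, nu0, w):
    # Smooth foreground (low-order polynomial in log-frequency) minus a
    # flattened absorption dip; the exponent 4 flattens the Gaussian.
    lognu = np.log(nu / 75.0)
    fg = a0 + a1 * lognu + a2 * lognu**2
    return fg - A * np.exp(-(((nu - nu0) / w) ** 4))

nu = np.linspace(51, 99, 200)                              # MHz, EDGES low band
y = model(nu, 1750.0, -2500.0, 800.0, 0.5, 78.0, 10.0)     # synthetic spectrum (K)
y += np.random.default_rng(2).normal(0, 0.01, nu.size)     # mock thermal noise

popt, pcov = curve_fit(model, nu, y, p0=[1700, -2400, 700, 0.4, 76, 8])
```

The central difficulty the abstract discusses is visible even here: the dip amplitude recovered by such a fit depends strongly on how much spectral structure the foreground model is allowed to absorb.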
We participated in three of the protein-protein interaction subtasks of the Second BioCreative Challenge: classification of abstracts relevant for protein-protein interaction (IAS), discovery of protein pairs (IPS) and text passages characterizing protein interaction (ISS) in full text documents. We approached the abstract classification task with a novel, lightweight linear model inspired by spam-detection techniques, as well as an uncertainty-based integration scheme. We also used a Support Vector Machine and the Singular Value Decomposition on the same features for comparison purposes. Our approach to the full text subtasks (protein pair and passage identification) includes a feature expansion method based on word-proximity networks. Our approach to the abstract classification task (IAS) was among the top submissions for this task in terms of the measures of performance used in the challenge evaluation (accuracy, F-score and AUC). We also report on a web-tool we produced using our approach: the Protein Interaction Abstract Relevance Evaluator (PIARE). Our approach to the full text tasks resulted in one of the highest recall rates as well as mean reciprocal rank of correct passages. Our approach to abstract classification shows that a simple linear model, using relatively few features, is capable of generalizing and uncovering the conceptual nature of protein-protein interaction from the bibliome. Since the novel approach is based on a very lightweight linear model, it can be easily ported and applied to similar problems. In full text problems, the expansion of word features with word-proximity networks is shown to be useful, though the need for some improvements is discussed.
Uncovering protein interaction in abstracts and text using a novel linear model and word proximity networks
Motivated by the recent discovery of superconductivity at the heterointerface $LaAlO_{3}/SrTiO_{3}$, we theoretically investigate its local electronic structure near an impurity, considering the influence of the Rashba-type spin-orbit interaction (RSOI) originating from the lack of inversion symmetry. We find that the local density of states near an impurity exhibits in-gap resonance peaks due to quasiparticle scattering on the Fermi surface with a sign-reversed pairing gap, caused by the mixed singlet and RSOI-induced triplet superconducting state. We also analyze the evolution of the density of states and the local density of states with the weight of the triplet pairing component, which is determined by the strength of the RSOI. These features should be widely observable in thin films of superconductors with surface- or interface-induced RSOI, as well as in various noncentrosymmetric superconductors, by means of point-contact tunneling and scanning tunneling microscopy, and would thus reveal an admixture of the spin-singlet and RSOI-induced triplet superconducting states.
Local electronic structures on the superconducting interface $LaAlO_{3}/SrTiO_{3}$
This paper is concerned with Freeze LTL, a temporal logic on data words with registers. In a (multi-attributed) data word each position carries a letter from a finite alphabet and assigns a data value to a fixed, finite set of attributes. The satisfiability problem of Freeze LTL is undecidable if more than one register is available or tuples of data values can be stored and compared arbitrarily. Starting from the decidable one-register fragment, we propose an extension that allows for specifying a dependency relation on attributes. This restricts, in a flexible way, how collections of attribute values can be stored and compared. This conceptual dimension is orthogonal to the number of registers or the available temporal operators. The extension is strict: admitting arbitrary dependency relations, satisfiability becomes undecidable. Tree-like relations, however, induce a family of decidable fragments climbing the ordinal-indexed hierarchy of fast-growing complexity classes, a recently introduced framework for non-primitive recursive complexities. This results in completeness for the class ${\bf F}_{\epsilon_0}$. We employ nested counter systems and show how their nesting depth relates to the levels of the hierarchy.
On Freeze LTL with Ordered Attributes
Recently it has been realized that the production and decay processes of charginos, neutralinos, and sleptons receive corrections which grow like log m_squark for large m_squark. In this paper we calculate the chargino pair production cross section at e+e- colliders with quark/squark loop corrections. We introduce a novel formulation, in which the one-loop amplitude is reorganized into two parts. One part is expressed in terms of the ``effective'' chargino coupling gbar and mixing matrices U^P, V^P, and includes all O(log m_squark) corrections, while the other decouples for large m_squark. The form of the one-loop cross section then becomes physically transparent. Our formulation can be easily extended to other loops and processes. Numerically, we find significant corrections due to the effective t-channel coupling gbar for gaugino-like charginos. In the mixed region, where the chargino has large gaugino and Higgsino components, the corrections due to (U^P,V^P) are also significant. Our numerical results disagree with a previous calculation. We revisit previous studies of the determination of gbar through the measurement of the chargino production cross section. We point out that a previous study, which claimed that the measurement suffers from large systematic errors, was performed at a ``pessimistic'' point in MSSM parameter space. We provide reasons why the systematic errors are not a limiting factor for generic parameter choices.
Radiative Corrections to a Supersymmetric Relation: A New Approach
The paper by Landau and Lifshitz on vortex sheets in rotating superfluid appeared in 1955, almost at the same time as Feynman published his paper on quantized vortices in superfluid 4He. For a long time this paper was considered to be in error. But 40 years later vortex sheets were detected in the chiral superfluid 3He-A in the rotating cryostat constructed at the Olli Lounasmaa Low Temperature Laboratory (Otaniemi, Finland). The equation derived by Landau and Lifshitz for the distance between the vortex sheets as a function of the angular velocity of rotation has been experimentally confirmed, a triumph of the theory. We discuss different configurations of the vortex sheets observed, and yet to be observed, in superfluid 3He-A.
Superfluids in rotation: Landau-Lifshitz vortex sheets vs Onsager-Feynman vortices
A search for double $\beta$ decay of $^{136}$Ce and $^{138}$Ce was performed with a 732 g sample of deeply purified cerium oxide, measured over 1900 h with an ultra-low-background HPGe $\gamma$ detector with a volume of 465 cm$^3$ at the STELLA facility of the Gran Sasso National Laboratories of the INFN (Italy). New improved half-life limits on double beta processes in the cerium isotopes were set at the level of $\lim T_{1/2}\sim 10^{17}-10^{18}$~yr; many of them are up to two orders of magnitude more stringent than the best previous results.
Search for double beta decay of $^{136}$Ce and $^{138}$Ce with HPGe gamma detector
The intrinsic spin Hall conductivity and the anomalous Hall conductivity of ferromagnetic L1$_0$-CoPt are studied using first-principles calculations of the spin Berry and Berry curvatures, respectively. We find that the Berry curvature and the spin Berry curvature exhibit symmetries different from that of the band structure. The Berry curvature preserves the $C_{4v}$ crystal rotation symmetry along the c-axis, whereas the symmetry of the spin Berry curvature reduces to $C_{2v}$. Contributions to the Berry curvature and the spin Berry curvature are classified by the spin character of the bands crossing the Fermi level. We find that the reduced symmetry of the spin Berry curvature is due to band crossing points with opposite spin characters. From model Hamiltonian analyses, we show the universality of this symmetry reduction of the spin Berry curvature with respect to the Berry curvature: it can be accounted for by the forms of the spin-current operator and the velocity operator in the Kubo formula. Finally, we discuss the consequence of the reduced symmetry of the spin Berry curvature for the relationship between the anomalous Hall and spin Hall conductivities. When band crossing points with opposite spin characters are present in reciprocal space, which is often the case, the anomalous Hall conductivity does not simply scale with the spin Hall conductivity, with the scaling factor given by the spin polarization at the Fermi level.
Symmetry of Berry and spin Berry curvatures in ferromagnetic CoPt
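For reference, the standard Kubo-formula expressions alluded to above (textbook forms, in units with $\hbar = 1$; not quoted from the paper) write the band-resolved Berry curvature as

$$\Omega^{z}_{n}(\mathbf{k}) = -2\,\mathrm{Im}\sum_{m\neq n} \frac{\langle n\mathbf{k}|\hat{v}_x|m\mathbf{k}\rangle\langle m\mathbf{k}|\hat{v}_y|n\mathbf{k}\rangle}{(\varepsilon_{m\mathbf{k}}-\varepsilon_{n\mathbf{k}})^2},$$

while the spin Berry curvature replaces $\hat{v}_x$ by the spin-current operator $\hat{j}^{s_z}_x = \tfrac{1}{2}\{\hat{s}_z,\hat{v}_x\}$. Summing over occupied bands and integrating over the Brillouin zone yields the anomalous Hall and spin Hall conductivities, respectively, which is why the different symmetries of the two curvatures can decouple the two conductivities.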
In the absence of direct observations of Europa's particle plumes, deposits left behind during eruptive events would provide the best evidence for recent geological activity, and would serve as indicators of the best places to search for ongoing activity on the icy moon. Here, we model the morphological and spectral signatures of europan plume deposits, utilizing constraints from recent Hubble Space Telescope observations as model inputs. We consider deposits emplaced by plumes that are 1 km to 300 km tall, and find that in the time between the Galileo Mission and the arrival of the Europa Clipper spacecraft, plumes that are < 7 km tall are most likely to emplace deposits that could be detected by spacecraft cameras. Deposits emplaced by larger plumes could be detected by cameras operating at visible wavelengths provided that their average particle size is sufficiently large, their porosity is high, and/or they are salt-rich. Conversely, deposits emplaced by large plumes could be easily detected by near-IR imagers regardless of porosity, or individual particle size or composition. If low-albedo deposits flanking lineated features on Europa are indeed cryoclastic mantlings, they were likely emplaced by plumes that were less than 4 km tall, and deposition could be ongoing today. Comparisons of the sizes and albedos of these deposits between the Galileo and Europa Clipper missions could shed light on the size and frequency of cryovolcanic eruptions on Europa.
Characterizing deposits emplaced by cryovolcanic plumes on Europa
Video question answering (Video QA) presents a powerful testbed for human-like intelligent behaviors. The task demands new capabilities to integrate video processing, language understanding, binding abstract linguistic concepts to concrete visual artifacts, and deliberative reasoning over spacetime. Neural networks offer a promising approach to reach this potential through learning from examples rather than handcrafting features and rules. However, neural networks are predominantly feature-based: they map data to unstructured vectorial representations and thus can fall into the trap of exploiting shortcuts through surface statistics instead of the true systematic reasoning seen in symbolic systems. To tackle this issue, we advocate for object-centric representation as a basis for constructing spatio-temporal structures from videos, essentially bridging the semantic gap between low-level pattern recognition and high-level symbolic algebra. To this end, we propose a new query-guided representation framework to turn a video into an evolving relational graph of objects, whose features and interactions are dynamically and conditionally inferred. The object lives are then summarized into resumes, lending themselves naturally to deliberative relational reasoning that produces an answer to the query. The framework is evaluated on major Video QA datasets, demonstrating clear benefits of the object-centric approach to video reasoning.
Object-Centric Representation Learning for Video Question Answering
The angular-dependent critical current density, Jc(theta), and the upper critical field, Hc2(theta), of epitaxial Ba(Fe1-xCox)2As2 thin films have been investigated. No Jc(theta) peaks for H || c were observed, regardless of temperature and magnetic field. In contrast, Jc(theta) showed a broad maximum at theta=90 degrees, which arises from intrinsic pinning. All data except those at theta=90 degrees can be scaled onto the Blatter plot. Hc2(theta) near Tc follows the anisotropic Ginzburg-Landau expression. The mass anisotropy increases from 1.5 to 2 with increasing temperature, which is evidence for multi-band superconductivity.
Scaling behaviour of the critical current in clean epitaxial Ba(Fe1-xCox)2As2 thin films
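The anisotropic Ginzburg-Landau expression invoked above is the standard one (with theta measured from the c-axis and gamma the mass anisotropy; this form is quoted from the general literature, not from the paper):

$$H_{c2}(\theta) = \frac{H_{c2}^{\parallel c}}{\sqrt{\cos^2\theta + \gamma^{-2}\sin^2\theta}},$$

and the Blatter scaling collapses angular data onto a single curve via the reduced field $\tilde{H} = \epsilon(\theta)H$ with $\epsilon(\theta) = \sqrt{\cos^2\theta + \gamma^{-2}\sin^2\theta}$.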
Convolutional Neural Networks (CNNs) have become very popular recently due to their outstanding performance in various computer vision applications. They have also been applied to the widely studied face recognition problem. However, the existing layers of CNNs are unable to cope with hard examples, which generally produce lower class scores; thus, existing methods become biased towards the easy examples. In this paper, we resolve this problem by incorporating a Parametric Sigmoid Norm (PSN) layer just before the final fully-connected layer. We propose a PSNet CNN model by using the PSN layer. The PSN layer facilitates higher gradient flow for hard examples than for easy examples, forcing the network to learn the visual characteristics of hard examples. We conduct face recognition experiments to test the performance of the PSN layer, and also investigate the suitability of the PSN layer with different loss functions. The widely used Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets are used in the experiments. The experimental results confirm the relevance of the proposed PSN layer.
PSNet: Parametric Sigmoid Norm Based CNN for Face Recognition
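The abstract does not spell out the functional form of the PSN layer; purely as an illustration, a learnable sigmoid-style squashing of features placed before the final fully-connected layer might look like the following sketch (the parameterization w * sigmoid(x / sigma) is our assumption, not the paper's definition):

    import torch
    import torch.nn as nn

    class ParametricSigmoidNorm(nn.Module):
        # hypothetical PSN-style layer: a learnable, scaled sigmoid applied
        # to backbone features before the final classifier
        def __init__(self, num_features):
            super().__init__()
            self.w = nn.Parameter(torch.ones(num_features))      # learnable scale
            self.sigma = nn.Parameter(torch.ones(num_features))  # learnable slope

        def forward(self, x):
            return self.w * torch.sigmoid(x / self.sigma)

    feats = torch.randn(8, 512)          # features from a CNN backbone
    head = nn.Linear(512, 1000)          # final fully-connected layer
    logits = head(ParametricSigmoidNorm(512)(feats))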
Given a database, the private information retrieval (PIR) protocol allows a user to make queries to several servers and retrieve a certain item of the database via the feedbacks, without revealing the identity of the specific item to any single server. Classical models of PIR protocols require that each server store a whole copy of the database. Recently, new PIR models have been proposed with coding techniques arising from distributed storage systems. In these new models each server stores only a fraction $1/s$ of the whole database, where $s>1$ is a given rational number. PIR array codes were recently proposed by Fazeli, Vardy and Yaakobi to characterize the new models. Consider a PIR array code with $m$ servers and the $k$-PIR property (which indicates that these $m$ servers may emulate any efficient $k$-PIR protocol). The central problem is to design PIR array codes with optimal rate $k/m$. Our contribution to this problem is three-fold. First, for the case $1<s\le 2$, although PIR array codes with optimal rate have recently been constructed by Blackburn and Etzion, the number of servers in their construction is impractically large. We determine the minimum number of servers admitting the existence of a PIR array code with optimal rate for a certain range of parameters. Second, for the case $s>2$, we derive a new upper bound on the rate of a PIR array code. Finally, for the case $s>2$, we analyze a new construction by Blackburn and Etzion and show that its rate is better than all other existing constructions.
On private information retrieval array codes
Consider the diagonal action of the special orthogonal group on the direct sum of a finite number of copies of the standard representation--the underlying field is assumed to be algebraically closed and of characteristic not equal to two. We construct a "standard monomial" basis for the ring of polynomial invariants for this action. We then deduce, by a deformation argument, our main result that this ring of polynomial invariants is Cohen-Macaulay. We give three applications of this result: (1) the first and second fundamental theorems of invariant theory for the above action; (2) Cohen-Macaulayness of the moduli space of equivalence classes of semi-stable vector bundles of rank two and degree zero on a smooth projective curve of genus at least three (for this application, characteristic three is also excluded); (3) a basis in terms of traces for the ring of polynomial invariants for the diagonal adjoint action of the special linear group SL(2) on a finite number of copies of its Lie algebra sl(2).
Standard monomial bases, moduli of vector bundles, and invariant theory
We analyse the current status of the dilaton domination scenario in the MSSM and its singlet extensions taking into account the measured value of the Higgs mass, the relic abundance of dark matter and constraints from SUSY searches at the LHC. We find that in the case of the MSSM the requirement of a dark matter relic abundance in accord with observation severely restricts the allowed parameter space, implying an upper bound on the superpartner masses which makes it fully testable at LHC-14. In singlet extensions with a large singlet-MSSM coupling $\lambda$ as favoured by naturalness arguments the coloured sparticles should again be within the reach of the LHC-14, while for small $\lambda$ it is possible to decouple the MSSM and singlet sectors, achieving the correct dark matter abundance with a singlino LSP while allowing for a heavy MSSM spectrum.
Dilaton domination in the MSSM and its singlet extensions
Building on our previously introduced Multi-cell Monte Carlo (MC)^2 method for modeling phase coexistence, this paper provides important improvements for the efficient determination of phase equilibria in solids. The (MC)^2 method uses multiple cells, each representing a possible phase. Mass transfer between cells is modeled virtually by solving the mass-balance equation after the composition of each cell is changed arbitrarily. However, searching for the minimum free energy during this process poses a practical problem: the solution to the mass-balance equation is not unique away from equilibrium, and consequently the algorithm is at risk of getting trapped in nonequilibrium solutions. A proper stopping condition for (MC)^2 has therefore been lacking. In this work, we introduce a consistency check via a predictor-corrector algorithm to penalize solutions that do not satisfy a necessary condition for the equivalence of chemical potentials and to steer the system towards equilibrium. The most general acceptance criterion for (MC)^2 is derived starting from the isothermal-isobaric Gibbs ensemble for mixtures. Using this ensemble, translational MC moves are added to include vibrational excitations, as well as volume MC moves to ensure the conditions of constant pressure and temperature, entirely within an MC approach and without relying on any other method for the relaxation of these degrees of freedom. As a proof of concept, the method is applied to two binary alloys with miscibility gaps and a model quaternary alloy, using classical interatomic potentials.
Efficient determination of solid-state phase equilibrium with the Multi-Cell Monte Carlo method
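As a toy illustration of the mass-balance step described above, consider two cells and a binary alloy (all numbers invented): the lever rule fixes the cell fractions from the conserved overall composition. With more cells or components the analogous system becomes underdetermined away from equilibrium, which is exactly why the predictor-corrector consistency check is needed.

    import numpy as np

    x_overall = 0.30                        # conserved overall solute concentration
    x_cell = np.array([0.10, 0.55])         # current solute concentration of each cell

    # solve f1*x1 + f2*x2 = x_overall together with f1 + f2 = 1
    A = np.array([[x_cell[0], x_cell[1]],
                  [1.0, 1.0]])
    b = np.array([x_overall, 1.0])
    print(np.linalg.solve(A, b))            # cell (phase) fractions, here [0.556, 0.444]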
Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. But just how useful is said tuning? While smaller-scale experiments have been previously conducted, herein we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results, we conclude that for many ML algorithms we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, the latter being truer for some algorithms than others. By defining a single hp_score value, which combines an algorithm's accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study may serve ML practitioners at large.
High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms
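The paper builds hp_score from an algorithm's accumulated statistics; as a simplified illustration only (our definition, not theirs), one could score an algorithm by its mean tuned-versus-default gain across datasets:

    import numpy as np

    def hp_score(default_scores, tuned_scores):
        # toy tuning-gain score: average improvement of tuned over default
        # hyperparameters across datasets (higher = tuning matters more)
        d = np.asarray(default_scores, dtype=float)
        t = np.asarray(tuned_scores, dtype=float)
        return float(np.mean(t - d))

    # one algorithm, accuracy on five datasets
    print(hp_score([0.81, 0.70, 0.92, 0.55, 0.78],
                   [0.83, 0.74, 0.92, 0.71, 0.79]))   # 0.046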
The max-algebraic core of a nonnegative matrix is the intersection of column spans of all max-algebraic matrix powers. Here we investigate the action of a matrix on its core. Being closely related to ultimate periodicity of matrix powers, this study leads us to new modifications and geometric characterizations of robust, orbit periodic and weakly stable matrices.
On the max-algebraic core of a nonnegative matrix
The Orthomin (Omin) and Generalized Minimal Residual (GMRES) methods are commonly used iterative methods for approximating the solution of non-symmetric linear systems. The s-step generalizations of these methods enhance their parallelism and data locality by forming s simultaneous search-direction vectors. Good data locality is key to achieving near-peak rates on memory-hierarchical supercomputers. The theoretical derivation of the s-step Arnoldi and Omin methods has been published in the past. Here we derive the s-step GMRES method. We then implement s-step Omin and GMRES on a Cray-2 hierarchical-memory supercomputer.
s-Step Orthomin and GMRES implemented on parallel computers
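For readers who want the baseline in code, a minimal (non-s-step) GMRES sketch shows the Arnoldi recurrence whose s-step reorganization the paper derives; the s-step variant would generate s search-direction vectors per global synchronization, which this sketch does not attempt:

    import numpy as np

    def gmres(A, b, m=30, tol=1e-10):
        # build an Arnoldi basis Q, H and solve the small Hessenberg
        # least-squares problem that minimizes the residual (x0 = 0)
        n = b.size
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(b)
        Q[:, 0] = b / beta
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):               # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < tol:                # happy breakdown
                m = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
        return Q[:, :m] @ y

    A = np.random.rand(50, 50) + 50.0 * np.eye(50)   # well-conditioned test matrix
    b = np.random.rand(50)
    print(np.linalg.norm(A @ gmres(A, b) - b))       # small residual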
Broadband laser ultrasonics and two-dimensional Fourier transformation are used to characterize the properties of a variety of foils and plates. Ultrasound is generated with a pulsed laser that deposits energy on the surface of the specimen. The displacement amplitude of the resulting broadband ultrasonic modes is monitored using a two-wave-mixing photorefractive interferometer. By applying a two-dimensional Fourier transformation to the detected spatial and temporal displacement waveforms, images of the density of states (DOS) of the excited ultrasound are obtained. Results are presented for a 150 um thick paper sample, a 52.8 um stainless steel foil, and a 1.27 mm thick aluminum plate. The DOS image demonstrates the ability to measure the properties of each generated ultrasonic mode and provides a direct, nondestructive measure of the elastic moduli of the tested specimens.
Nondestructive Determination of Elastic Moduli by Two-Dimensional Fourier Transformation and Laser Ultrasonic Technique
We investigate colour selection techniques for high redshift galaxies in the UKIDSS Ultra Deep Survey Early Data Release (UDS EDR). Combined with very deep Subaru optical photometry, the depth (K_AB = 22.5) and area (0.62 deg^2) of the UDS EDR allow us to investigate optical/near-IR selection using a large sample of over 30,000 objects. By using the B-z, z-K colour-colour diagram (the BzK technique) we identify over 7500 candidate galaxies at z > 1.4, which can be further separated into passive and star-forming systems (pBzK and sBzK respectively). Our unique sample allows us to identify a new feature not previously seen in BzK diagrams, consistent with the passively evolving track of early-type galaxies at z < 1.4. We also compare the BzK technique with the R-K colour selection of Extremely Red Objects (EROs) and the J-K selection of Distant Red Galaxies (DRGs), and quantify the overlap between these populations. We find that the majority of DRGs at these relatively bright magnitudes are also EROs. Since previous studies have found that DRGs at these magnitudes have redshifts of z ~ 1, we determine that these DRG/ERO galaxies have SEDs consistent with being dusty star-forming galaxies or AGN at z < 2. Finally, we observe a flattening in the number counts of pBzK galaxies, similar to other studies, which may indicate that we are sampling the luminosity function of passive z > 1 galaxies over a narrow redshift range.
The colour selection of distant galaxies in the UKIDSS Ultra-Deep Survey Early Data Release
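The BzK selection used above follows the standard criteria of Daddi et al. (2004); in code (cuts quoted from that literature, not from this paper's text):

    def classify_bzk(B, z, K):
        # classify a galaxy from its B, z, K magnitudes with the BzK colour cuts
        bzk = (z - K) - (B - z)
        if bzk >= -0.2:
            return "sBzK"                  # star-forming galaxy at z > 1.4
        if (z - K) > 2.5:                  # here bzk < -0.2 already holds
            return "pBzK"                  # passive galaxy at z > 1.4
        return "other"                     # likely z < 1.4

    print(classify_bzk(B=26.0, z=23.5, K=21.0))   # -> sBzK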
We study the lattice agreement (LA) and atomic snapshot problems in asynchronous message-passing systems where up to $f$ nodes may crash. Our main result is a crash-tolerant atomic snapshot algorithm with \textit{amortized constant round complexity}. To the best of our knowledge, the best prior result is given by Delporte et al. [TPDS, 18], with amortized $O(n)$ complexity if there are more scans than updates. Our algorithm achieves amortized constant round complexity if there are $\Omega(\sqrt{k})$ operations, where $k$ is the number of actual failures in an execution and is bounded by $f$. Moreover, when there are no failures, our algorithm has $O(1)$ round complexity unconditionally. To achieve amortized constant round complexity, we devise a simple \textit{early-stopping} lattice agreement algorithm and use it to "order" the update and scan operations for our snapshot object. Our LA algorithm has $O(\sqrt{k})$ round complexity. It is the first early-stopping LA algorithm in asynchronous systems.
Amortized Constant Round Atomic Snapshot in Message-Passing Systems
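The defining lattice-agreement property, namely that outputs are joins of received inputs and are pairwise comparable, can be shown with a toy sequential sketch over the lattice of sets under union (this deliberately ignores asynchrony and crashes, which are the actual difficulty the paper addresses):

    def lattice_agreement(proposals):
        # each process outputs the join (union) of its own proposal and all
        # proposals 'received' so far; outputs then form a chain under inclusion
        received, outputs = set(), []
        for p in proposals:
            received |= p
            outputs.append(frozenset(received))
        return outputs

    outs = lattice_agreement([{1}, {2}, {3, 4}])
    print([sorted(o) for o in outs])   # [[1], [1, 2], [1, 2, 3, 4]]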
We describe the moduli spaces of theories with 32 or 16 supercharges, from several points of view. Included is a review of backgrounds with D-branes (including type I' vacua and F-theory), a discussion of holonomy of Riemannian metrics, and an introduction to the relevant portions of algebraic geometry. The case of K3 surfaces is treated in some detail.
TASI Lectures on Compactification and Duality
We investigate field-driven domain wall (DW) propagation in magnetic nanowires in the framework of the Landau-Lifshitz-Gilbert equation. We propose a new strategy to speed up the DW motion in a uniaxial magnetic nanowire by using an optimal space-dependent field pulse synchronized with the DW propagation. Depending on the damping parameter, the DW velocity can be increased by about two orders of magnitude compared with the standard case of a static uniform field. Moreover, under the optimal field pulse, the change in total magnetic energy in the nanowire is proportional to the DW velocity, implying that rapid energy release is essential for fast DW propagation.
Fast domain wall propagation under an optimal field pulse in magnetic nanowires
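For context, the Landau-Lifshitz-Gilbert equation governing the dynamics reads (standard form)

$$\frac{\partial \mathbf{m}}{\partial t} = -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}} + \alpha\,\mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t},$$

where $\gamma$ is the gyromagnetic ratio, $\alpha$ the Gilbert damping parameter, and $\mathbf{H}_{\mathrm{eff}}$ includes the applied field; the strategy above shapes the space-dependent pulse contribution to $\mathbf{H}_{\mathrm{eff}}$ so that it remains synchronized with the propagating wall.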
We review the main aspects of a mechanism for the simultaneous generation of the baryon and Dark Matter abundances from the out-of-equilibrium decay of a WIMP-like mother particle, and briefly discuss a concrete realization in a Supersymmetric scenario.
Dark Matter and Baryon Asymmetry production from out-of-equilibrium decays of Supersymmetric states
Continual Learning (CL) methods focus on accumulating knowledge over time while avoiding catastrophic forgetting. Recently, Wortsman et al. (2020) proposed a CL method, SupSup, which uses a randomly initialized, fixed base network (model) and finds a supermask for each new task that selectively keeps or removes each weight to produce a subnetwork. They prevent forgetting as the network weights are not being updated. Although there is no forgetting, the performance of SupSup is sub-optimal because fixed weights restrict its representational power. Furthermore, there is no accumulation or transfer of knowledge inside the model when new tasks are learned. Hence, we propose ExSSNeT (Exclusive Supermask SubNEtwork Training), which performs exclusive and non-overlapping subnetwork weight training. This avoids conflicting updates to the shared weights by subsequent tasks, improving performance while still preventing forgetting. Furthermore, we propose a novel KNN-based Knowledge Transfer (KKT) module that utilizes previously acquired knowledge to learn new tasks better and faster. We demonstrate that ExSSNeT outperforms strong previous methods on both NLP and Vision domains while preventing forgetting. Moreover, ExSSNeT is particularly advantageous for sparse masks that activate 2-10% of the model parameters, resulting in an average improvement of 8.3% over SupSup. Furthermore, ExSSNeT scales to a large number of tasks (100). Our code is available at https://github.com/prateeky2806/exessnet.
Exclusive Supermask Subnetwork Training for Continual Learning
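A minimal sketch of the exclusive-training idea (our simplification of the description above, not the released code at the URL): each task trains only those weights of its supermask that no earlier task has claimed.

    import torch

    torch.manual_seed(0)
    W = torch.randn(256, 256)                    # fixed, randomly initialized weights
    claimed = torch.zeros_like(W, dtype=torch.bool)

    def train_task(task_mask, grads, lr=0.01):
        # update only this task's masked weights that remain unclaimed
        global claimed
        exclusive = task_mask & ~claimed
        W[exclusive] -= lr * grads[exclusive]    # exclusive, non-overlapping update
        claimed |= task_mask                     # freeze these weights afterwards

    for _ in range(3):                           # three toy tasks
        mask = torch.rand_like(W) < 0.05         # sparse supermask (~5% of weights)
        train_task(mask, torch.randn_like(W))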
We present a survey on generic singularities of geodesic flows in smooth signature changing metrics (often called pseudo-Riemannian) in dimension 2. Generically, a pseudo-Riemannian metric on a 2-manifold $S$ changes its signature (degenerates) along a curve $S_0$, which locally separates $S$ into a Riemannian ($R$) and a Lorentzian ($L$) domain. The geodesic flow does not have singularities over $R$ and $L$, and for any point $q \in R \cup L$ and every tangential direction $p$ there exists a unique geodesic passing through the point $q$ with the direction $p$. On the contrary, geodesics cannot pass through a point $q \in S_0$ in arbitrary tangential directions, but only in some admissible directions; the number of admissible directions is 1 or 2 or 3. We study this phenomenon and the local properties of geodesics near $q \in S_0$.
A brief survey on singularities of geodesic flows in smooth signature changing metrics on 2-surfaces
By propagating the many-body Schr\"odinger equation, we determine the exact time-dependent Kohn-Sham potential for a system of strongly correlated electrons which undergo field-induced tunneling. Numerous features are entirely absent from the approximations commonly used in time-dependent density-functional theory. The self-interaction correction is strong and time dependent, owing to electron localization, and prominent dynamic spatial potential steps arise from minima in the charge density, as modified by the Coulomb interaction experienced by the partially tunneled electron.
Exact time-dependent density-functional potentials for strongly correlated tunneling electrons
We compare four loop quantum gravity inspired black hole metrics near the Planck scale. Spin 0, 1/2, 1, and 2 field perturbations on these backgrounds are studied. The axial gravitational quasinormal modes are calculated and compared. The time evolution of the ringdown is examined. We also calculate the quasinormal modes in the eikonal limit and compare with the predictions from circular null geodesics.
Quasinormal modes of loop quantum black holes near the Planck scale
Muon lepton-flavor-violating processes are reviewed in connection with the search for physics beyond the standard model. Several methods to distinguish different theoretical models are discussed for $\mu \to e \gamma$, $\mu \to 3 e$, and $\mu - e$ conversion processes. A new calculation of the $\mu - e$ conversion rate is presented, including a Higgs-boson-mediated effect in the supersymmetric seesaw model.
Searching for New Physics through LFV Processes
The magnetocrystalline anisotropy (MCA) of (Ga,Mn)As films has been studied on the basis of ab-initio electronic structure theory by performing magnetic torque calculations. An appreciable contribution to the in-plane uniaxial anisotropy can be attributed to an extended region adjacent to the surface. Calculations of the exchange tensor allow a significant part of the MCA to be ascribed to the exchange anisotropy, caused either by a tetragonal distortion of the lattice or by the presence of the surface or interface.
Spin-orbit coupling effect in (Ga,Mn)As films: anisotropic exchange interactions and magnetocrystalline anisotropy
The defining feature of scalar sequestering is that the MSSM squark and slepton masses as well as all entries of the scalar Higgs mass matrix vanish at some high scale. This ultraviolet boundary condition - scalar masses vanish while gaugino and Higgsino masses are unsuppressed - is independent of the supersymmetry breaking mediation mechanism. It is the result of renormalization group scaling from approximately conformal strong dynamics in the hidden sector. We review the mechanism of scalar sequestering and prove that the same dynamics which suppresses scalar soft masses and the B_mu term also drives the Higgs soft masses to -|mu|^2. Thus the supersymmetric contribution to the Higgs mass matrix from the mu-term is exactly canceled by the soft masses. Scalar sequestering has two tell-tale predictions for the superpartner spectrum in addition to the usual gaugino mediation predictions: Higgsinos are much heavier (mu > TeV) than scalar Higgses (m_A ~ few hundred GeV), and third generation scalar masses are enhanced because of new positive contributions from Higgs loops.
Phenomenology of SUSY with scalar sequestering
Optimization of a Wireless Sensor Network (WSN) downlink with an energy harvesting transmitter (base station) is considered. The base station (BS), which is attached to the central controller of the network, sends control information to the gateways of individual WSNs in the downlink. This paper specifically addresses the case where the BS is supplied with solar energy. Leveraging the daily periodicity inherent in solar energy harvesting, the schedule for delivery of maintenance messages from the BS to the nodes of a distributed network is optimized. Differences in channel gain from the BS to sensor nodes make it a challenge to provide service to each of them while efficiently spending the harvested energy. Based on PTF (Power-Time-Fair), a close-to-optimal solution for fair allocation of harvested energy in a wireless downlink proposed in previous work, we develop an online algorithm, PTF-On, that operates two algorithms in tandem: A prediction algorithm based on a Kalman filter that operates on solar irradiation measurements, and a modified version of PTF. PTF-On can predict the energy arrival profile throughout the day and schedule transmission to nodes to maximize total throughput in a proportionally fair way.
Kalman Prediction Based Proportional Fair Resource Allocation for a Solar Powered Wireless Downlink
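A scalar Kalman filter of the kind that could drive such irradiance prediction is sketched below; the random-walk state model and the noise levels are illustrative assumptions, not the paper's filter design:

    import numpy as np

    def kalman_1d(measurements, q=0.01, r=0.25):
        # random-walk model: x_k = x_{k-1} + w (var q), z_k = x_k + v (var r)
        x, p = measurements[0], 1.0
        estimates = []
        for z in measurements:
            p = p + q                   # predict
            k = p / (p + r)             # Kalman gain
            x = x + k * (z - x)         # update with the new measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    t = np.linspace(0.0, 1.0, 200)
    irradiance = np.sin(np.pi * t)                       # idealized daily profile
    noisy = irradiance + 0.3 * np.random.randn(t.size)
    print(np.abs(kalman_1d(noisy) - irradiance).mean())  # filtered error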
Single-molecule experiments have found near-perfect thermodynamic efficiency in the rotary motor F1-ATP synthase. To help elucidate the principles underlying nonequilibrium energetic efficiency in such stochastic machines, we investigate driving protocols that minimize dissipation near equilibrium in a simple model rotary mechanochemical motor, as determined by a generalized friction coefficient. Our simple model has a periodic friction coefficient that peaks near system energy barriers. This implies a minimum-dissipation protocol that proceeds rapidly when the system is overwhelmingly in a single macrostate, but slows significantly near energy barriers, thereby harnessing thermal fluctuations to kick the system over the barriers with minimal work input. This model also manifests a phenomenon not seen in otherwise similar non-periodic systems: sufficiently fast protocols can effectively lap the system. While this leads to a non-trivial tradeoff between the accuracy of driving and its energetic cost, we find that our designed protocols outperform naive ones.
Optimal Control of Rotary Motors
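In the linear-response framework behind this analysis (Sivak and Crooks 2012), the excess work of a protocol $\lambda(t)$ of duration $\tau$ is

$$W_{\mathrm{ex}} \approx \int_0^{\tau} \zeta\big(\lambda(t)\big)\,\dot{\lambda}(t)^2\,dt,$$

and minimizing it at fixed $\tau$ yields $\dot{\lambda}^{\mathrm{opt}} \propto \zeta(\lambda)^{-1/2}$: the protocol slows where the generalized friction coefficient $\zeta$ peaks (near energy barriers) and speeds up elsewhere, which is exactly the behavior described above.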
We show that a (3+1)-dimensional system composed of an open magnetic vortex and an electrical point charge exhibits the phenomenon of Fermi-Bose transmutation. In order to provide the physical realization of this system we focus on the lattice compact scalar electrodynamics $SQED_c$ whose topological excitations are open Nielsen-Olesen strings with magnetic monopoles attached at their ends.
Fractional Statistics in Three Dimensions: Compact Maxwell-Higgs System
Aiming to improve Automatic Speech Recognition (ASR) outputs with a post-processing step, ASR error correction (EC) techniques have been widely developed owing to their efficiency in using parallel text data. Previous works mainly focus on using text and/or speech data, which limits the performance gain when not only text and speech information but also other modalities, such as visual information, are critical for EC. The challenges are mainly two-fold: first, previous work fails to emphasize visual information, which has therefore rarely been explored; second, the community lacks a high-quality benchmark where visual information matters for EC models. Therefore, this paper provides 1) simple yet effective methods, namely gated fusion and image captions as prompts, to incorporate visual information to help EC; 2) a large-scale benchmark dataset, namely Visual-ASR-EC, where each item in the training data consists of visual, speech, and text information, and the test data are carefully selected by human annotators to ensure that even humans could make mistakes when visual information is missing. Experimental results show that using captions as prompts can effectively exploit the visual information, surpassing state-of-the-art methods by up to 1.2% in Word Error Rate (WER), which also indicates that visual information is critical in our proposed Visual-ASR-EC dataset.
Visual Information Matters for ASR Error Correction
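As a sketch of the gated-fusion idea named above (the exact fusion used in the paper may differ; dimensions and names here are placeholders):

    import torch
    import torch.nn as nn

    class GatedFusion(nn.Module):
        # a learned gate decides, per position, how much visual evidence
        # to mix into each text representation
        def __init__(self, dim):
            super().__init__()
            self.gate = nn.Linear(2 * dim, dim)

        def forward(self, h_text, h_visual):
            g = torch.sigmoid(self.gate(torch.cat([h_text, h_visual], dim=-1)))
            return g * h_text + (1.0 - g) * h_visual

    fuse = GatedFusion(256)
    out = fuse(torch.randn(4, 10, 256), torch.randn(4, 10, 256))  # (batch, len, dim)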
We explore the epoch dependence of number density and star-formation rate for submillimetre galaxies (SMGs) found at 850 um. The study uses a sample of 38 SMGs in the GOODS-N field, for which cross-waveband identifications have been obtained for 35/38 members, together with redshift measurements or estimates. A maximum-likelihood analysis is employed, along with the `single-source-survey' technique. We find a diminution in both space density and star-formation rate at z > 3, closely mimicking the redshift cut-offs found for QSOs selected in different wavebands. The diminution at high redshift is particularly marked, at a significance level too small to measure. The data further suggest, at a significance level of about 0.001, that two separately-evolving populations may be present, with distinct luminosity functions. These results parallel the different evolutionary behaviours of LIRGs and ULIRGs, and represent another manifestation of `cosmic down-sizing', suggesting that differential evolution extends to the most extreme star-forming galaxies.
The evolution of submillimetre galaxies: two populations and a redshift cut-off
The growing complexity of particle detectors makes their construction and quality control a new challenge. We present studies that explore the use of deep learning-based computer vision techniques to perform quality checks of detector components and assembly steps, which will automate procedures and minimize the need for human intervention. This study focuses on the construction steps of a silicon detector, which involve forming a mechanical structure with the sensor and wire-bonding individual cells to electronics for reading out signals. Silicon detectors in high energy physics experiments today have millions of channels. Manual quality control of these and other high channel-density detectors requires enormous amounts of labor and can be prone to errors. Here, we explore computer vision applications to either augment or fully replace visual inspections done by humans. We investigated convolutional neural networks for image classification and autoencoders for anomaly detection. Two proof-of-concept studies are presented.
Deep learning applications for quality control in particle detector construction
With the growth of social media, there is a huge number of face images available on the internet. Often, people use other people's pictures on their own profiles. Perceptual hashing is often used to detect whether two images are identical, and it can therefore be used to detect misuse of others' pictures. In perceptual hashing, a hash is calculated for a given image, and a new test image is mapped to one of the existing hashes if duplicate features are present. It can thus serve as an image filter to flag banned image content or adversarial attacks, i.e., modifications made on purpose to deceive the filter. For this reason, it is critical for perceptual hashing to be robust to transformations such as resizing, cropping, and slight pixel modifications. In this paper, we propose to study the effect of Gaussian blurring in perceptual hashing for detecting misuse of personal images, specifically face images. We hypothesize that applying Gaussian blurring to an image before calculating its hash will increase the robustness of a filter that detects adversarial attacks consisting of image cropping, added text annotations, and image rotation.
Towards Evaluating Gaussian Blurring in Perceptual Hashing as a Facial Image Filter
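A minimal version of the proposed pipeline might look as follows; the use of a plain average hash, the blur radius, and the hash size are our illustrative choices:

    from PIL import Image, ImageFilter

    def blurred_ahash(path, radius=2, size=8):
        # average hash computed after Gaussian pre-blurring, so that small
        # crops, annotations, or rotations flip fewer hash bits
        img = Image.open(path).convert("L")
        img = img.filter(ImageFilter.GaussianBlur(radius))
        img = img.resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return tuple(int(p >= mean) for p in pixels)

    def hamming(h1, h2):
        return sum(a != b for a, b in zip(h1, h2))

    # flag a test image as a near-duplicate of a stored one, e.g.:
    # hamming(blurred_ahash("stored.jpg"), blurred_ahash("test.jpg")) <= 6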
In 1973, Schinzel proved that the standard logarithmic height h on the maximal totally real field extension of the rationals is either zero or bounded from below by a positive constant. In this paper we study this property for canonical heights associated with rational functions and the corresponding dynamical system on the affine line. At the end, we give a few remarks on the behavior of h on finite extensions of the maximal totally real field.
Heights and totally real numbers
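For orientation, the canonical height attached to a rational function $f$ of degree $d \ge 2$ is defined (following Call and Silverman) by

$$\hat{h}_f(\alpha) = \lim_{n\to\infty} \frac{h\big(f^{n}(\alpha)\big)}{d^{\,n}},$$

so that $\hat{h}_f(f(\alpha)) = d\,\hat{h}_f(\alpha)$ and $\hat{h}_f$ vanishes on preperiodic points; the question studied here is whether the Schinzel-type dichotomy (zero or bounded below by a positive constant) persists for $\hat{h}_f$ on the maximal totally real field.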
Proof by induction plays a central role in formal verification. However, its automation remains a formidable challenge in Computer Science. To solve inductive problems, human engineers often have to provide auxiliary lemmas manually. We automate this laborious process with template-based conjecturing, a novel approach to generating auxiliary lemmas and using them to prove final goals. Our evaluation shows that our working prototype, TBC, achieved a 40 percentage point improvement in success rates for problems of intermediate difficulty.
Template-Based Conjecturing for Automated Induction in Isabelle/HOL
I present a $q$-analog of the discrete Painlev\'e I equation, and a special realization of it in terms of $q$-orthogonal polynomials.
On a q-Deformation of the Discrete Painlev\'e I Equation and q-Orthogonal Polynomials
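For context, the ordinary (non-q) discrete Painlev\'e I equation is commonly written as

$$x_{n+1} + x_n + x_{n-1} = \frac{a\,n + b}{x_n} + c,$$

and a q-analog typically replaces the arithmetic lattice $a\,n + b$ by a geometric one proportional to $q^n$; this schematic form is given only for orientation, and the paper's specific q-deformation and its realization through q-orthogonal polynomial recurrences may differ.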
Immune cells learn about their antigenic targets using a tactile sense: during recognition, a highly organized yet dynamic motif, named the immunological synapse, forms between immune cells and antigen-presenting cells (APCs). Via synapses, immune cells selectively extract recognized antigen from APCs by applying mechanical pulling forces generated by the contractile cytoskeleton. Curiously, depending on its stage of development, a B lymphocyte exhibits distinct synaptic patterns and applies force with different strength and timing, which appear to strongly impact its capacity to distinguish antigen affinities. However, the mechanism by which molecular binding affinity translates into the amount of antigen acquired remains an unsolved puzzle. We use a statistical-mechanical model to study how the experimentally observed synaptic architectures can originate from normal cytoskeletal forces coupled to the lateral organization of mobile receptors, and show how this active regulation scheme, collective in nature, may provide a robust grading scheme that allows the efficient and broad affinity discrimination essential for proper immune function.
Active tuning of synaptic patterns enhances immune discrimination
Within the class of stochastic cellular automata models of traffic flow, we look at the velocity-dependent randomization variant (VDR-TCA), with its parameters taking on a specific set of extreme values. These choices lead us to the discovery of four distinct emergent phases. Studying the transitions between these phases allows us to establish a rigorous classification based on their spatio-temporal behavioral characteristics. As a result of the system's complex dynamics, its flow-density relation exhibits a non-concave region in which forward-propagating density waves are encountered. All four phases furthermore share the common property that moving vehicles can never increase their speed once the system has settled into an equilibrium.
Non-concave fundamental diagrams and phase transitions in a stochastic traffic cellular automaton
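For concreteness, one parallel-update step of a Nagel-Schreckenberg model with velocity-dependent randomization is sketched below (generic VDR rule with a higher stall probability p0 for standing vehicles; the specific extreme parameter values studied in the paper are not reproduced here):

    import random

    def vdr_step(pos, vel, L, vmax=5, p=0.1, p0=0.75):
        # one parallel update of a VDR (slow-to-start) traffic cellular automaton
        n = len(pos)
        order = sorted(range(n), key=lambda i: pos[i])
        for idx, i in enumerate(order):
            ahead = order[(idx + 1) % n]
            gap = (pos[ahead] - pos[i] - 1) % L       # empty cells to the next car
            v = min(vel[i] + 1, vmax, gap)            # accelerate, then brake
            prob = p0 if vel[i] == 0 else p           # velocity-dependent randomization
            if v > 0 and random.random() < prob:
                v -= 1
            vel[i] = v
        for i in range(n):
            pos[i] = (pos[i] + vel[i]) % L            # all cars move simultaneously
        return pos, vel

    pos, vel = list(range(0, 100, 4)), [0] * 25       # 25 cars on a ring of 100 cells
    for _ in range(100):
        pos, vel = vdr_step(pos, vel, 100)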
It is argued that, for motion in a central force field, polar reciprocals of trajectories are an elegant alternative to hodographs. The principal advantage of polar reciprocals is that the transformation from a trajectory to its polar reciprocal is its own inverse. The form of polar reciprocals $k_*$ of Kepler problem orbits is established, and then the orbits $k$ themselves are shown to be conic sections using the fact that $k$ is the polar reciprocal of $k_*$. A geometrical construction is presented for the orbits of the Kepler problem starting from their polar reciprocals. No obscure knowledge of conics is required to demonstrate the validity of the method. Unlike a graphical procedure suggested by Feynman (and amended by Derbes), the algorithm based on polar reciprocals works without alteration for all three kinds of trajectories in the Kepler problem (elliptical, parabolic, and hyperbolic).
Orbits of the Kepler problem via polar reciprocals
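The construction can be stated compactly (standard definition, with the origin at the force center and reciprocation in the unit circle): if the tangent line to the orbit $k$ at a point has distance $d$ from the origin and unit normal $\hat{\mathbf{n}}$ pointing from the origin to the line, the corresponding point of the polar reciprocal $k_*$ is

$$\mathbf{r}_* = \frac{\hat{\mathbf{n}}}{d},$$

and applying the construction twice returns the original curve, which is the involution property the paper exploits.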
de la Pe\~na (1980) and Puthoff (1987) showed that circular orbits in the hydrogen problem of Stochastic Electrodynamics are stable. Though the Cole-Zou (2003) simulations support this stability, our recent numerics always lead to self-ionisation. Here the de la Pe\~na-Puthoff argument is extended to elliptic orbits. For very eccentric orbits with energy close to zero and angular momentum below some not-small value, there is on average a net gain in energy during each revolution, which explains the self-ionisation. Next, a $1/r^2$ potential is added, which could stem from a dipolar deformation of the nuclear charge by the electron at its moving position. This form retains the analytical solvability. When it is sufficiently repulsive, the ground state of this modified hydrogen problem is predicted to be stable. The same conclusions hold for positronium.
On the stability of classical orbits of the hydrogen ground state in Stochastic Electrodynamics